var/home/core/zuul-output/
var/home/core/zuul-output/logs/
var/home/core/zuul-output/logs/kubelet.log.gz  [binary gzip-compressed kubelet log; compressed contents not recoverable as text]
ca/̴ǓV>]dJiLʠSy"-ǣj괅ôDmZ|1m8M5?2akf[Y2ܑ%Հ2=\]=نAZt?_ Zϗ8/,$?o?X减_Ï|}|?|,'>NMO_/jy%v54Z;ahøv5^3lMԊ>ԋJTaNml?֛s>Ԧ7G#:mׯ$a%Ir>, eV"A*jq,H;3$YJk[ҥicyX-jkt"+«TPD0,foVb~AdG*Rggחǒ3xVfp23g~Α{,nz,=Xt\zWK$$؏ҭYtx< E(G 5M64R-{4u}D9ȜoΓ'H $crt%&_5Y4% UtRA`95D@$PcM.ـ^e") 02vFE΄LJvSP{^]NFts~ EՏ{1t wYږ3DP*,E6"i2I2:)f{;"#E~uw>n ́2jMʍ0y oc/ 4Q( Y)*ƣ&Y{I@]vɑ{uVb!/9n e PxY AO)zOV8P8KDU.xOw(5>vø2>|mҺ4Q+cTdh<3nW,)'Mns~ݫ?'kn3)zsuy-um^ϣKڊR1:ԶQCA*#P@*˳XlR>?mV +V2XPWWVJ6"Xb"+iJZ*KF&Ntm/"*-/ȸ `8`@&&Em1#Rr`N$9Ÿˍ]կA,wơ_eiR!)%P"%DSMf y驙~yj&:Te^7" 1 r)T )%?F %w<"ŀ.@e`f0!s_91aݭAG<\ߢR餻=75칟eS"gg鋚<1𰔳].C/L@4i9MȠg92 -b!IҮxt=4!\{Qc.SBq@L@` ER ¸! gF7:A"A;>dt46+0eP^( mx6Yg0꫍\{ꃾԢgn?jB2HQmew6pUt{#+ ^pw&hGWx6R%-hPY 8*Ir19gT! pdq 0^ o|#=faC6w.sIǞ),oYn (ϦFtY?.1ad<'ٕ3ٛjkI(ѶuܲҤ*"_gMކu'lҍ{.BmPfMRڼަy#1K-.aįxuܦe4Zv`/{_n_mq۬ڪ;^a_$*0q6F+ %D҉L0]R 1 ʬK$ciWCy)6XFR;oGp tVʆjSL&( cls? BG$tI\Ff/m2HA BEҽݱ>n[: I>J"F! "&9't%-!n { H۟NzDNZ_?cfl?c[v[{&B8/ԅ_:"c @cqX׍. :5 nQE1Ȕrr$U6T1zHnz h],Ku֒=*X[cLསfk;@ytO)Iɶ6: Ky| (@)M\Ն>ƋP34R.jON&.uا`ɸ+!I#}.$Zf'()& IIrr7ܘH%9J9Wui3EZB>nYUIO%g[IY,~SU[;9ؑfꗤt1'1>Xꐿ/~rVx^4"ٸ\]1f%4'W'c\hpDogu,ׂR#cD=rA8Tt$b 3PmJ >B$hu#w;XTeX͜-+3 D!aEH{UJurP{c Rl$7KBd<3#,Ӥ2#6$ brxHTҀdf&Vvyx6S3#Zf%7@󯇲Z?u#?J*l='mV6md-mKjk,5dI/Esr&er,L>ڢ",|(ko={VBN11# q(%R49 W#sUIH2TmdfXTj=cW,,{V|ZŒ{,wqx^οCu/hx<<>/^8bK\EL0At9ha1$O>B"[>GCn4="\FWtP+5ب3i;fH U9̂6Aj0b$ICAjPP[ =0{7rKzɕ065HdR21#}#d]m_j0:R 9i0išXP@2("Q24&i*a5sVԯ}`<D?vEDUՀ"[B^9@ATGq 1طH,I3y4G=A3'4H*"ZSFJJsZ@.8F$X֒i.BYaZE^'\rZ/̀.>%.rԊW1,C&$tLj!D1 )J\<<;vC{/s ~ AamApCsE?UcW acUB 57Ia:k-tڷ_Nxp0YÛROnEGQjO4}}3wxk2Ǚ>_Q~]%CyӂEkbHE| 8k9W?_2 @&e )ZH}wc3Cz,\0}׹GhjCz7 zgВѲwR_-Ю.zQ%9dwtяd] Y$0 V$f#HvD eߣ\P"B.VJ;W(Q2GƮkl,7AוnяkB f/ `GLdFͦ塥G;G67\qV@YjQ]}iL# h`\ Lr>4Y}0-5kF \cb-<\+%•$)U1XU1c+UvJY`!U1WcbՇWdzp 5W8߾NŸӛќY3Uϯj)>󱛤DJ dۛ0^xٸ0ώn'@e<]L[zˏDC9zGIݳM~O֋R.]ʼ‡4'+/mr{okli4YMm9)+o*jq>,' 7eQ;tcv V# #G3shGVXi5X렵5lqSBkR`1 Kh-YסXk+-mL:?Hт+Sgbjӿ6Ϧb{sS߫_M>N'=dw ܢ}ώF:컭iȝj럓6OҭBl't/˟o u1o6IfsK[v^ֹ3@{pexqLhb,k1qAǽƁǽ:z]CJp@uaʌYgcC VaYp*zAh%2@nI) Q֥Ȳ|d"+!d2x/hj<$n6R6%=EMzNŷ% Y !u cD1%kĵ(n!'*x"A25Lqf @@D@cܿUulG̓:M|] żT~}*+}@#* @ EȍeT`8LD8nl_!W}^Ivh^B]?"+ 06_tw`9޾(8?@:i8o?{Nqpۤێ2|=D@2sF/BFELB:c!l|2JHada}CF'Y`AI 6YHJ̔@XL,ŽDF*Q* U{>?{Vj_fؼLfQbT1[N$|u9{A^H߈Pv~f5dp4בf Ɵ5~eXv7:out4MtGF (TqT0Ƨ(%f3Mۡc9݉OC; 'x9bdb. Aq,Z(!'EvHt$BYXf 0(.FJ< A2#9x'=4Aʀ9{Z3>9Ӻ'dTؕR*Tܺ+ )fQFFbe Жd;pY.YwY D@|.M5Zf]xKjSHm[mWNm˺T]m+ k3*/9`|Mǜ8CIH$'*d!Jy.XȈ9Vnk1K.HXofG'9+ 7冼V>-[2*daq,TedQe,ۂ7LO_ӻqfu\? 
FO[s 2oC)VXɘ #A[BFK$S1QY0eϣ`ԚMi B!n,%L)Za4%Tvq*R[uRC[DnENK2inutZkH L'C,bIBYY:jN&dIwLQ& YE#H#(zBOZ`5a5rvV{x*8T"tV7$T裧u͸ߎT3yC=$ r \6%ș!ZL2d1&-H\\v:Z42Tf\d묶J\|NȥBڡ&VF:̐uDΔ )I`2Fd+DYHTJǐ"GyGaߍ'#wZ_1+0uh}p_1սۭ1aiњO70oOH66i1` Oqn|" ' ৡ)@XRYRL010iպTĚpm96r]69yRwW+|d ̵t-r 3˧tS-!8\~=4Zt kL}Vbm@}09Uǔ(,;x59 \%kQX=jxzBa]0GD_$n.j4f.^(K4}dRZP4]S*RaߴLK]}5ykQ/RX5;&kpIw1N?+'n cR*laj~Ʃ}ɪ];;å:< fga֊bq΍J4iL^!wf#YFXfgvfg2c!DXPؖ-} 3L)yUxkٟd>Yy]':e{z _u={"6.IŰR';>#2of9ntR^= pEG~>mTs6_|_xw3k5/ܾnF-n?>k~sV'lV/!RSKsy h7ew_Yc>tn/2jnY Ictxs*HZ˓L#PبTM68MnԏNJ7ʞ{ }Gc"!K2OMbǯ0e+ƥ#cxk- ɑe[Rh,L<({#֙ʺA Qj+4' P*S2XU ev@z@p!̏* Wgiٻְiw9}d,Ja MȵJ^ hb䢬497c'i>VOw[!Җh/ӝys eFc%Yܫ59keqY+fNmnoO\K70jz4+ݛ/&F5se`[syx9[vH?Nԅh-wQmhI+ߵM͈p'2e w߷ٿOtXU:V7Ց|b95-nbvz+b㜟hzʱRiVoq,nw=eN*rT=Gc4zǽ8_SnKW _]{}IL'?˿^t\؋wOoiˤu=DO&O!1jo~޿iMKhZTlhW]_Z,ڍW؊+>BlT&ml?㈬j[hqyv+H$VO1(aR`NfQeɌ0$zH 5BRge;%鹭 sҴ)N.lfӐ=+Ϟiio_E2m⋪o S܇z|9&LgѤoU2ͯx٠ wrIua^5T|*C-9*/rW3H2ݫ*pjO\[SVQW?ZO YUF R?o9]ErԫU (pv+zaJtߎ]E3 Re\+ E g>L*ʽyUGf%IqbyW \9.A]=?%Ǵo%9%M\v7,RNy;%/Dk첶;lG z& Լx`hPhv2VҎo6%&ĞjfU--ܦªGN=a5hzrl:[HRb SP9c7fWWW .fޖd֨RAA!+'|6`,ċ71u,H׿EcPքR`bmK4*5AR鄡Ÿo7OeqlubށdZ[ay?4% æ0+&is4*氵 U09)~_ǫҔr||/Ÿ;(aW \|u?F>Ɩ5?$4z (U@eov@׋`ѩi ?|u#m]QZfbA)᷐6Ru`:XĿUSQH0 Œ9K˭.#:F lnl [uK]^/⺅v}jۊvMUk ?Xu6) ގ^׀"3 %BR2|%gS4aul&jע\0sO'>u=:* iݲ1S0fǷC6J:PI]:Fx5{oii9HA\ßūٸ/ω)\d&x?=BVNp;|5oB;Qjիi~Oc-l!`&f[1 cl ya }+wdvzsZXa}j㙮3lzE"yӛYHPl)w>zmոuMsp.O]sN6+JϤ]stvo]hsJ*m^*U3*@[;9jd/3c;dC.>v->EdZ|XMUtPOྋ]y[C}֡[Ɋi|BO 0doEk?No;ϯwIf8HD YDDּF*' \h C!fGjnZ{fe[ beG5@:2m5$rG@F٤-9gp9nbq]Ƕsq%Sk'w-%^Ja&z0RT:BsC:%X"TbK1 ;7X#QHC!x0k5f,`ZFL&Z&%"3a&NxV>" I{^%ýS)̵KViAXl8VrL2NHVĦFݐ= % 6N0+D{C lb"*3bgg7bȷLcAlq,Ψ}f4 H 4DX0شJX"cs_@b"}Oi7gkgwD3}I,.8s3Lp69G#fK=`&1gR|UCg8" !Vs= -G8"II9p+ !(,y;Xml.TJFT(B O>J:f tO{э;;fb?}(ѡ(8؃ŧ {iDQRUDHe"gqA-!Y-z%-JbX{`ZF"bA醷)q* qDwD@4:&UaQiƝVc 3"G spB%<|->5YKO>/|1.媈3TjNb>=MKY_DS\KSy}C# !BSx+[LS&y"HxQ"ϣA**AaiD0@ QD:){{l{,_N;Wָ/ɚO:^mؙaڳKgJ{:_ozE@>(/"^ !!-+p\"$ʢe,k9.ߠwm*~1i̒#T>UKeŃY98ه A/{Ջ7޼m›}/q9[]:&BχUx4?ݽijmдkߡ]HnTl-Gjy- m:pmu3w߂|MȏH+(s$SVJ65GVU^*%68OQѝGܓ 2翏qz׋Ja&3hJ*{S-:/5&-V)\{zZәdMgOl^;~Aݫ}Wvk ;`./ːg?ѶiF^h#\;LsMT)aR(f"F6|ڣ$h#~3K˵Z¹C!S%]!ckFC`&)?)+ǚH&SuGcKժBP ԙU ǦN=1L;\ӫÃn@ҩ" *E Phn*j08( MC)S|o* (+3{)7aUc +h>|7UTU`JS=Ikg/ܪ &RԹjr >Ǻ[1"% 82GD.[pK \4I9[cNj"?r7|;vpdttx5:vZ:_m"׏1 Ou^2AYn|_Y fGN.׾Y43'\ת7x'3ޏoPܯFף_cv\ߵ!bt}4oLR?ri 44{3|N;Dچ{oCCwӒ_/_#/D]; oxuk]:8r:ol>']p⺬[v5K\r5P)^ T>A!m;^b/͎|j~產TܨS'Q; iVM߯'nEY7O2u9>!τiE{*[1ԒaϬğ;U; j~3><.Ϙ#ӏYw|:89prFR]s spK:Ivp5m"k+wFw*vדf'[nfo08V B,Ft82~!͊tx4Y`ǒ 揹s`>\#b0OJrlP$k ޴ &dr$gUuN݅]q ?[dqkn'TVZF,j}#.%g:RQw#jgɫ[S}rwf["]9t퇚}49eҡYUc*ԩXRUJ)jJ[|jK1\*moRKgLٚE'YʾihOݺR+UA(eP0OзdRWY3ZDP9M){,r{F*FZ0Qkn軶s6a; 6FKk|D5fK3uZysȚm)rӔR=F$LSmwDnӰf@EbzhPQ-^rJꝼnh cH㵾HtYW1Z; r+KšLu(ιca<.]EBȥaUlS=^bC7*4)^+}tBbnѺB"3h_a*jU!m [/13d̉]șbq⧔G@H|jŻQxU^uϬTC*)IW6՝CQd.DM}6:)aR6 ƹ$ZB##$]Ϸ)=*I,S ;1ŸvҼXl)6>]C]S>Ec=uk, ;d1KuTX% TZnB v ٥fݑ#"L#mTȗh혰EBRXE)^ PTl@t[ x4sXRs#~R3@(f.*DcA}Pytp FWpN w=Ҵ6 k9`!ThS`EnT#{*%*zeԓYU%,R=!nAs6jBJ5)*L l0`W@,F>I:N)KP;6@;--V !hYw\֕ՀB+"0!7 c:o9 )! EX&P4g@;iש;".FI`_9:*It&u0 Blr L!ߺDIp`R0 3@ƅe<AN:2];ℬ @ ufV )n7*,I5 I5K#xGX)(`~G`$Ѽ<+Wa4t 5 (bu+%٪32rFƛF/eJ[Ԡ(Dd(5S eq6 o iD񮀵Ck>8a4iȀg9bun-Vv2c)"9Y`>a !Pl^vp'A"/rBI06s7TY+OҧMPdYlJrdJVذ CN²Iΰ ՜$rDNk k Pk0tq\Xi "AC}$@V)loЭ-xcWj9`GM‚I,TG7`ZX52qf&k)^ HlDž56hN^j Ll` Q)_rt@ 1l;w@ @L)ڽAX1z,D-hg[R+y 9z6^Bkm X0Z{& `8(+( d8.,L$&3", °*) sM2B@:QbTh^ An7etRBM0MRR6 `f% ֝AmJVe魪#X_SX@Hݦ:2 pR2^њ VK , qm%AhN. 
1ֹ;qT}vp2Y!x;'8j]=#F77!9hT\Z?ਵ{ iAEܳ?i 'FZp$uuJJρ}T96߈HɔMZ~{mߞLK^BN?Hhn| <9> 8yg9zw`?{n1OEz).\^{jkGIO}-jmSd5U]&Ud}_jH /~#tF4Bh@#!F4Bh@#!F4Bh@#!F4Bh@#!F4Bh@#!F4Bh@T pRhPXӁ@[ .@A[ ZJDY}%tsz;$ϣ5i phg,D4Ɔ_NyyD86/OʧOg+G3޲ǶAJHq,+"DE*Q"5>i(ax,YL]x"R:ܻPv#P<(^W4gunfaz[ݓwb./Wm5!߯] ۽3B4\h0dVi)5Qx "(M.Ѣ ^]qWSg gyL8MܼIbbIx*'B%I_4^J.Yͧ-fa~} w$5ZL_~<*xk\pC!D WVjMJiF " 29#Ih#iH$"G3/q+s Y/e2c2\`^qۥZרОu7P\÷|>pʆ- ?zQ@ICl<7%2\2g)'&1" Q<8Pp O3/vt>6MZfy q{YSCd`8@DFY܁T<]wa\&9%m-zo5:NfqCDGcG+GM֙Ei^pWwo'}עg?j8ڧԓڑWpM>\JR #^&p׿s3\4SYp,*)ΓspQ fpFŔqCm]|W[{@ՑO(wnFϞ/)SXNg~M?Wc,~sӫRS@%&y:Jyv7%[Jɽ,Y[VY6Rs{v&qJz4iF˽pQRlc4OJb*™mnjz=7Qz_<"^Cn/nap6L|tA)] UwF̯g.L\e{` W ^j-O*9D3KJGl1`LXzHIYNg$Ҧu;ѭ$mk;AO*E=ؗV3Bc7ThQ#ioevMTmWPKd kn&Oz#fK^R6񋺨/#s%GITFb/m2+;ły ˼d_F9߽9m1Hf^#`UhiM \ݢ"B8K6`N !\%P%yKʝco\a/6۲_/Q8Og- Xoirvf=oC'կ&_&obm%7Vg#F{!oc.[>9-_5~=jͽunnqj Hn EQdZuuW/N1+zxP0 ~qXc9_cT7ƺ~#pGm~O}PnSv>|#RiuA5 !phaV`6C3,-5.SCw QXCEE0(ccۜxlOWp+Y\ٗ9GaRn]9.e}xl*ϷΟ;ԶfoXN|eU+~ouKsjyv1{0ndӅ>RUrK?\M!kD1!&jF8l9}yOݾpmڻ!mAFہB~0\Y t(,nI~e _nZ_f9k#Œo2Kj)` s2R& Al5$Φ#X^NLVAN!KF B`Xmm|teΥ]7r .TE@aK0u);fӖ;vf,E4D7AlGj䂔fp}#Uu17%Cþw&r ~)E*>v#>?ň-o1`|4[h%1 .V҉3E|g ӫ@bŢ╈7o*DOVU *!#Lbg*Ga\)ATAj}xtBReK E6V,zi7 =%O"Y@_p~dzu/t%zytj:i n3i t56P0R :AWCQ9n8,K)aW2E㣤J'cxBk|PĒFԠx3l͆en{Bl \~}Wu_(V 6x_re$SJ 4DhF ; CTfo:'Xq6D}6lIPʁ0Z[w02^;t;{b6W`d6W_p|˙WAxn7+-9b䈑F: G(NبF)%1)+% <+G|USRR23C7Ԧ༷TDY&1Z)M];+Ҟ~g5uF+r~'Ə&{n$t~k=3!u'wקtio?m4U]5͛Zl4ߠ]]rK?k-!ӅiKB8;7P-hG;Ht1[בX@?mx ,dbmE0HpQxu}¥vC4w:pkPg$Sh@hI f̸lɝ4ή.|[YkglMȆVE"dZ5:L,󔛝#KK/Cªmyvɦ9]P3@C$a`җD.]J*\`mlտ:&-<+VAwE('l0RYI/fNqfpqxʥeLGbTաtHEt;9Ák j erJ.* <'_^k~),fG-Q`,)FɄǑ$HdT&9 1d-8a'!f .Ĝ1$("7J$ (w77I$#mݲVQ%P1fA L>3Դ$g(ٗT9KVP5; $; 9nSaH޿[_[%;a=QMC>.}2(ܳ!1%übp'&(%YC9ё2W 3nM{>EO/y6Ƀʏw{/=sڢ" < 0$ '$E{Ӊpa^r"8~fz*Nbr|DmPƒ8 -*8JӤJcro{`pZc|ܛ'w).5sڈvEeX&lyQ3IC֡ސw^ţN&%%:w Ao*Ҍ(MMGWSՔ))UNZ ZJT؝v. p"yo8/{F'Fy㈯+տs. '$Y)@P*!8BWOAv6] zz\rv ^V}n)J9 nTjLQa< 9Մd~a4^b%2e+Lܢ}bio#~ ,C'*>sN 2-F *Aun(IR:1H"3uQLjs 25;[劄 M3(=&i7 }~`}n=M\\>zL]*\<ϡ33~<<Sտ/_r/Kk;}sp(wOb<~R5;cɎW2[Av9znUπpYG_\ Q8q݋6 ?8P|Z(\gvhSt𵗭3 /s@|jylh{N헕-+?; y?-.d4 6PcQ_]ՉV|<X6_MͣWcq=,z9d\~ Jv=C~Z~|u+$ENUjueե̖@ SS42]ycs) VJ6 =I| ;8P҄@rFu/2u2DT<(Iq" nxu٬;V.7/jNrR)L3*q^1ÍDs,LA 䱿f6װw5oS-ٟ;7- ޶]֢ ӎo{s3ct=;opGL8l$RGeZ:!2ӄ){8"xBKKHE*'tҙ21ɜ̩{E)-\7GѶ.mHiӗv;}]v Tt\`BXC#w&qO(!$Rhj] j.73<.99"6T||G ## !V.RGEmJwo߷M}=7?Z|r~>lC)>*C;!ff lG֧ߣ [+ϻ![\-5]T@xͰ-c Nu ĆP.a~M).;hs(3\DVD])ܖ(1zׯ~5;3~-ѐ1vdM:f(y>3IɡAst_4P*,~[pІ,_jB;]{p|Ncxw'vQ/ڹWӲx?C|-?z H,hRKl)K i#Skx; XP\p<y=12TG)u\ѨTR O9)BJ `H6I@ZZh.P鍧:T2XグB*XaHFb:v;$W@1w{}RpzudȮP93UN8;JO&v6 c,})uBXK!,]Զt 2"e=>JnaăD"H< #ӈYZ\KՠDZSpAc"&  V̹WCժ! Y]Pln ąwW34l4W-{-Z6}"4(_DKPp28]kv]u>aiv/Ku-,=.lr}9&iݟ",qQkjxns֩T :a,PJP>EJBNAp|@`4 3t*A9CA8e(zW*$ƠS6͆E#jDղFTE#r 4u8kB9@ٷ JA\ԃ& 4J(*J.x0"'̈́NS2R{ԋiCu%ESbы/)iDQLKBIBB3HhD#ׄ!"'/zqzPakܱ>dm5=rm4(Z8} d?)Qq <`IȞ <\[@IԪ]sCSQ)aO( #Jm*(9"u hU&WcQWZs*SiUQWP]1MR=uGc]ej%;tuTר 1Q5ѼL0k[~ n:o޾3D:to=^ UTe r37TF^C;C~ug\ӱ^OY\~ !l>wF "25Iwu$dӞU>{s&|0)L;em(w 2uH*ORHJxs"(-x<^ x/xg9a<+|łX bW,_[G\+N b|łX bW,_+|łX|łX bW,_+|Ţ bW,_hĂX bW,z+|łX bW,G([ nCNW}ZwtNQ!]3*z2T4cDs*Ka;.ƍ#qQVLdhISpk53JĤIД'4AkA90\59f5(O#J@M*Qϸ h&ٙ$St[lpAf(qSf[}p n&[3ۤ/maگ]c>\ug }f|gш)UɄc.hr>t,\P;/ !"kWwOҹƏ0\8hj(ڢН]Յ9ZڕpUA*)P9UTۛ^WmВUbUPY-E0'e#YWp4ux| "S{3 Z1@Kxqg\/ e\@D;No|B'o$zАiZ޿[{Ϟ4 i}Kx—^˱֙=:csPkn|/l.}J)'SR  fB y&JK~9tLe)-*Ϲ֌=xHGtˠ /Ts_ gUe]HUut)Bp2[?pS튏GOmq b[ڴ3(sذr=x[^_^`b-41M,+.q[@ޟEC6i NUMb*Kv!e &%jIRV2*Qh %gw!h~b1W;lBY=Fd{\yBa׃]S?=Z6C!Tnt9~fbJz_?? ^(V=2`1(p1AGQь; oURΪh)U:"H3&OI w&{!UZ#g/9 Fm׫_{tO1pWrO%=KApp0Wqx%F}+oF#Ԋb\QsXwtef;YST͛]5Z?Ņ Z^xr5?HJs{OWGHcބ7'4<?]\͚O]t^UvZvKz%s+= ||eYD/j|칞Tn1J0+㒩*6C;? 
s~*?"%?5STC g+Z% IIй( m4ԛDS L&oQ4LjMѩƉ ;3ΉFMPJsuNڀb| zwÜ`۞ZΡ Lkm6T8єjP2Kj}ATy*c6>n:$aM_#+tT5$2P\J(/h8GXIQ E!+Ԯ mW0N[LFmvW(D1h(o5aDMplp

AYoqggw{9u8[PpI=[M>(nzꀐ]EM\F [.&XbɧT}/>Y }SW[J~^&ZlY NnTC #&? Bz6st kFn= mAp,]|}Npg;Z18eTcu:Ȧ^jQ?:8ܱDrwNVvF5Z])Y?t~v+C~ֱ9<%cè:az &j*mW U!>b?ݛK?xwwo.o?]y2sq?zp]Fu>I V>?K3xݼkv͍ؠk'A\9~z_Kl/\Z!@ gz>^n ]>ڼCoz+G;M!Gb"D <S ĉb,E7J $L^:0U.U-=bxqoyIvhFf}Fb$MV>("5S:Z]`!:( Y+g: PJB[Dm۶';3;PnLy0蜰Ĭ*i;Iۑy4x~K

.Hhk `*C#x<0 `7Fd\gGrrm%)h p:=1BIt106A[7ׅ͠o&_yPQ ^^kaJFi EQtRPNh%&HÑ2e B2XRI)508'6$G.h: $gϖXCUF8q3/zۘ(&[$#ftr.YK2P-^ʭPx^icX#m)K٢Ix@:I{/2v@2KaZ~q+ao5.ph sD3Mq@$z,3ㄔC.|!O/ ު-?v)N׎*I*$TFT\$Ԃ:o1eO.8R;::9*}NT. i*, a^s7%ܶ]޹3b~OfR^C>L]+7sXk{kĦ6 4=9 ?=h5^"%>g> r@9Z!.fgm;Pyn>A-EmX* *fd= mEg.!z5Zc}6v|Tõk_nShպGwݽp=n-ݯ˶7yJm;EnQjףr01o;b>jf :rz6[ d)kXQ$JX NVm"-4xfxrM=2%pAZ!p ڿ! L#֊CIyFQ\wS.0w_@ A-R_L%Uh%{upVzYllZ@fh啖h8 m|dxLS/D 'D1TRM 5//jѰ`ݚu61g4Jљ/]_47<;8V]v59{؉,J՝mJ3ss9 6qG?K`cbkjn{=ƃ98gmIaaWjrW@IYRX]- |-> iՄSj]aırZ s+ j7?߸WdܰW{Sq>4\^mJpq|XaXx嵳= b49)tl7ddjJWFz!r[]•,`) #"K$A09v1}l=$mlyzwxW:JhiM \pR+eB8K#DP_O{95=wy73*ӻ8R$k(0.s/ Q_X[w\c~w=Rk뛽y|5|wp8ݤN!NvIPf7]?ϳYczuze钤WLRy.۪ݡ 2CB֔E2u=(m۳qF8)-|.q™Kr7чܝ[ c@p3qA]@e^pA[69ݭVlOQ[ͥ_ͬL* Df!#zj1!15jby]AϠ\ljs6fUA> y<\1$F>Nr r}Lmuqb桷]1x[<[And_`[]e( ',;GWy:\Y\}p0|M}H$d?W[|bgu3rXiaA+FC5ϯ\ϐ^-U.Q_ C"wQX$U]UR^ ;46Z{r^h^ ÓF!MqY:0 P-@$8H_.6(۝UhrY/imbnj0(^,[߽-OK;ZڷAi7YvxY-=%;Tl:%]?ys +67g61@Nk}o.K;f>ZM|f ;_;X u%յt_/gFvG؅Rz:2g-6!͢I+5RN(N}̐j3D=5] z_wZܽ%Ϛȣn.]WJ4\U9?|a[ fsؚ}.loʶ'=iZyk|sr>{5pKv>wS,vV腧SK9vޱ{s5nsl+W:"ս9K#G}w5KlbSR!1EԿfEYXrzzSTDhhB9uZtˉa*BACe)|NWgIvz:ǓtBґ\# B *&-Oc B˳=z%YjWG!76o>j/*AV.FWi8?)q_fb)#}$Ɩ*۟_F{R&H)^"W b"HxL@Vh L$|YC 2b 9 e24ѱ@CD@ pɍTy ^kUj:3jքZtV~8v\f׀bJɶJ遴*ϧn&.(к}HZ9qGFۨݼ~,EQGzfL3 (%N+Pv!EJ G7s%8^O0OEFetbH2 PEE0) SHTid,6ǑR QƶX( c!;a^p9nY2i.Q6 ;4 onh489bKʨL`@" GSVI2y9`8˰UJftBݎԉªb"AՌJMq1ŨPP[P$>XMQR3!VR% !TW'2*J,R #dY>I(Q2̵bQ#@$A1T(B1 ?p8C OstZ_f`<D,F"*ꄈ'D|r u\pHׄyrϢj&( OpS4PD i"5 U\6)M9hR>UTThB@M ͝"tz{q=jlŨd[\4'\a˨ǚ}u}F>j < 7W3n(PAp` 6(W-X%Ucoi\1uU 9EQtUheJ)e5cDs*O{wH5ҐHCDc ?s4%b4%rAk\ 9ѸnUbMY&4RTV%D\$OR!K3Yl+sW8\pw9uɚY&}n 1ݜj2^t!`nr,B*z%`̔wY5f;VVB_TXLeU3kZ'ұH(ڎ:ԙLIJ EE(|LL$Q2h$V e"4lQ>piԚ+#ƅw=γ82i5k ,fbUDӃ6c͈GKsS*1ir8IѨ AV l0Qi*8a?LޮpEJ7gLnِL׃|KgigMe#k%@/ZrfJ{lxk\Ǽwn Wqq][#{-EA'jP᪊ccf3˥V95d aP\r~s; llx}Gj-~E-`=u])RMo&w'\axV6{)(Mr?{WF$x#}0ǻ/5 {Cb}J)R&)rT Oq5(y֚驾U],30BvkˮLDŴ)FzcA5*UNt--|ֵQv EaTGG~ɖTI*OŋדE`1i=k̥~[c׉ n? 0jj .!^_b ևO0i)>=ٻM6tN(Q[+Akvֺj󾎓?LHm8*\C6YbgRͿ(U3Ub#*R%ہ7vnj%T聓yKJxHaT~ GeL8sU*7tUJboP\6߀Iߟߞ?ߟ&;-yGI]4t>=G4^UcVPߴjXIATz[W#TE,D/b<#x}AήvܦoU(-H|}2V `\ЎĽ!Q] 6r,R$ )J@*! 
T'#Z0e.y{~z0QlɕiDV iC0(xfa30iNg&Wbpښl~FonBbgbg vl"~vjiӚ-(cU<>Uv!OkF!>"C.ؔ9n]J5r IcQk@a+Pz*Ź'VACQ{0rHRIpxaR[ERcib A_ki[;R3+ 6mKZAjN0FF3cy@ (3zAr<;%Oy3}2`i RpeߦiVAJ!#ȇG UUH 3aM:ȔUExlH%1QMKc5oAvVՌgcRif0vi5mL&pĄ0eR0ltQ(/{렟]o ښi+:uC- qύg9kjr.HX>]ȣ<@rFϵ,wzMBR<*CyvWn8M.AqUu+&_/Y[#ͧb(GKu73%lyyF7ې%d(lx ߇$˖A.|zM}97 u&EmwP#m~q|kz%XיwbZh9lf j%1qyQA'T A y߽&m\R+R3E9AZaLv8eਵ iUCYڊ6/t.=3еxvH뉐Z!w5I+l>K?-@,/aQB6ƒEmCb=b- nNj{~Pń =eȈ E6"gʤF8븴K2(!t8M(rneZGEd$`JREb"D4XZkU.bvic1̫@B1K~DPڊ`y"裄G߼4N"k|V{.Rl# F`˙&ι/S-)A ";|9*"ohr+E5>H)XȐ¡D:/5DD #z!5#B &R2hA #(-eLDeZsZqh"" NӫQjnͫ|M֍Kss=(TgޡC0JdTT 6or^wH jh@]*kc&P2`sglrD`KFܯ팻5g^^> Xg9{lOak2E^$',32}4g%L$ O G>@Ie>xU~NE޼OޛCуG`aG`Qx\s8~ǵvzV0}QL -Kd688}VK͙o^.EW*xPÃtm0owvhUtY/[^*y6,^û6oݼ_7mqV]X$|V0۬ LGq|_Cٻg%.g^eW+fAOO2Xz4k*^ Y!ϝ^#i_V$7Y󀶛y5w•eޣ!J7xZϖӢM=ד2rCNxUc ֩F B}`a7ŸŎ+o$E #G r 62%#NI}UR_&}ylV@@K.Dj Cu |4r}>A<9d.T:KUd0{T$nTQL4!Ůe?`~RjPQ~:mBtہhD}m R_Ok'$GHn^uyYKV4"f (J:Nl+Y5|X ȭz]G9GR!uE((p㼈{Ll~M&Ӵ@~-(l*rLjF6%g2^zN|~=dhMGR_/nod=.R{nR2uǨF/ fĠoMSYeRy %oLIT7▱@8oA10 f1grH8+:r_3j/]Kt93c (wp)BSϘ8P=t"igQ|~6fu+Ky"dap,}mt " KG- >6-'Y5 rUk/vPc$z04=Ft}3 @]' :43ϗ޷c33-#rlAxհ-b'| T`|ZLa)bNp2HPAH}RX,sGP`+i w90bه"g4G44ʨN mdXS᭥^sʈv!lUH9vHqTQ(Ȣ`;rX+H!m[kvu p|Co ?; <= ̏ RNu<b>TT YKtJZ%:kZ%:kZx zn-Qd;KyVB$H@pT!,U"H*~*y 7޵>q#e_ݑ~uumr[˹\a3H-I%V _HFF;R\xru= aL%Gzw܏Z.G]qFrPf$ʚFΚ9KS{Yk=ܞl"pCߡg$tIm{ Ni>]Oȹ5Tf;KϡU-}bPkehmdLH`3O*V[YY8^:]c*ߢ唐c=Gu$gu*_u,U'v-Dv%)Mn6ǣ.wdҊ;>ƓFopA9 S:YF]b)t~Vb.EO.Q9LMwA5n bhJ7;Zd}լzGL>nvY5?cU&o<-^S #."Kvj"9⣆壆[:bBj״ϳğT?KdW/Au5]iV:|GF~|Y?b剴i]qqF7?Uŧ_T59;y.gMخ QJlwV21Sl9|<9A]T{Zw.N ʵJ1]GtFi4ZEoe>L3/X2Z ^*3HByA#Khh.7Sy A:yb*#ʐhR s !ɜ!رQ9X( uT/E KG?5fk.[ud`<}<_Zeco>e5{_uvW:Ibȟg?EB\FU+ 4CmX=*2c;#KǸ"I9iSbhZ*3oVqnYr("䂈.`:k9B2B 1zr < Ҍt ;#gϣ̝*D,n؟|偂 ɕõJy#Yl4w>jJe"&,ɤD3:n-;,YDN#>NϚH ND̊{фEmJSB{Q`ɶclbQyfWWܭxk&hθXv`w^h5܉K2sknlZg"D檋L!|M) $ƠPNnm d:jN&dv$ђd2(!H:>D'CB3rևQ-X4bgqF4kDk^#`NHq {Ix2!Ge<0ND];Θ$5 r V\LB[.%2XR ױF쌜=GzՑⴞ 1:;CbS/r@IAX duM)e2%`&(Y`"K 1z!θ@}щ>Pa̹^Mgxwz?c8KF>G?>QNt6J BkrC!N2%VGko`Vr;>P>xG.w>܁(?mK*,G+Q-YRL01YlLTbǛ"Q|p#u;&(#4W]L)c4f[upxwffnÚ>6Ӌ>Cug[uD1=^}"j=3:!jeu~m0a-S<>T~" cYV`,; [²{ŔMєm698"iIdR +ƈΑr' +VbRMA"Jp\b=JUbt .. 1pkI$#JX1?soY9dz%x쳕>6E{"ݮ-',=_T}?`六kӖ4j=ҾDeJO(Ҿz|5yv8"8'QmSVZKKFt6;eP2GAG.*~;e!rWR+Г}&n4s6ܝ\Kz6}{~|޿-WD$mZg"mwץnzBd)t{ Kj]Sfuݶ=-hzG6/AܽMԲwkyoj|wsceZnj^7;:ZV.66n\LnL$6Zdnw5>Iucħq%r |Y$+u] }pX]7 KٱLcca#ʳ*;c}g09ȴ69l2d\w&bfJefudB0"iAvvB[ɨB0)O*\$9@,Qsdڗ_:i7hwʤ|rhi AW-i{k`kV<$aٸfbH5_xuz~}Vg~8a})-/tVWt?a1Y`8-ÜU?u>~jr3Oի`C2" *}qд|%BpG@q0%>`'!$LxW')zl3 n)'yHa Xku\uKrmY-N)7okq B6ީxU]F4-|z}յG57#.'iV4S+uZ/6]>雼]oksŵF6.Omrm (ĜQ޽oVڮ9GxeQw/;sؖ[GRzHۆ[1z0p4n`YŲGëBOWc3lUݣ.&mmsWs=.d }2q8'J촁TDMRb $Zi%i _hƓ6k0ҼަS5ĩ?SU\'!W/Uoݏ߾}[.ۗ~/h mU'H$>xA涫iҩ=z{+MS2_Z,MVotG%mɧ2'iMﴏOgP3X)$i0) ekdFQ !UUYFzlgBԍv#&YIx%lFEʖEk"0" >BN'g:S]^KVx⠬$;ˬ*?xΑ,ܘt RJ0a;+7K=t02j(_R$R* x2><<셔# } ϖ >aZ%arKl{d(YJqBB^D6,1+&&X/e@tih9#shNN%9{@W ]i,L@dB%8Yk#pyN{ Pro uvr<$RgJ{ҕFkmK)h f ifiKP? I]Xɭ䄳9B hIrua<_YUtY{=gj{F_!inhKpn.t;Xj#u\ZťFVE 5rRXqTQ( "xREbH2dg^: }m[4nB-4S#37&f3vZ@+|6/FD7}%.+RgaQPQFHwƆr6dt+w84?O9 98Y 5Jgq KH-1 xV/۔wSq7x>?RˣzmOqTua% \Mzo\ ł#$JN1@##Ţ@0RLXOG BF{` AGb"-"ഔ6(`03iHDMyCeXHS]sLITϒ8%w'Jh$*&n`YoO  !|窽NzS~*;vWj( jStJb2_蕁u}ZAo",)ʉј"bVjB(u0Ip<3^{o|,CO<_cL{YL'ݜXe`Fh"Ԁzɂ<\7nz-gqq* $4TZ9^(0^ Bxn GeDN`g}YGK>ɺjZ}C2v "(g7_LROypøRsɀƠ6׵L:.j}_orۚh&+f[ゼ篳YW^a[5DmM=gJF"B\k? 
.9]X%5"OUURay[ \v|/YS!/ZS5ތK O}SGփ1zfzp"*_K5F -4Yemd2pH5s x8ϲ9=DO%zfvȐ iW|I:@A/.IJg >*51^x]Tlѩ+?vO`lTO>I/߂O:T/=}E]t2"b?GyCQG*'&{ܻ^e,@/BԻpyR4]^ |(7Voy !KA%Xt2(X\}LgCرH/Q,Kԙzu*kTgIsvGf\*tj,i!^X2ı%{37nȡ^+k)&U6aIGM,+&9)g\;|=ތ$g'AǏo!l{6fW7}m".D+E\@`'%gqI2|$%#.bąrʔZ%]ДTX7\V]a +h{0[0[ ^m)_W5`;ĤDDNA\QEb-8Hl0:*u/f16Mתj /BS@Oqta`[ mn`sUkfb|'c]hW܋CWR^x#:2_Ue,ߊ:}YR𐩨bA s7 Œ9K˭ɉuXAoYsڿjQݧfw0b *ȓK+z6\&$8Ov䒔KkN};GsvFp+ \%iɧ$)+R ֔ \%q9J򓇫$=\}p%1%A` ߹*ƺJ zp7 W)4ҥJnwf2Mj +/Rwc?j;!O$zwv/qy8Oz@*e5"X:PcZhup)sYMF:Los \t);TOiѱ, lEICrׇbRjiYɼ%h)S}%cUƧ]I\%aeĎuT,+"$1*9Dґ ,,m)=R̔FKykH.)2LwZ D佖豉hj4BZ"2 dgϓ[!8VHǃ3'Bo;x*\JJ b 2f%c^(KHJOXf Rf@.(qCvݒPNbXXH58'jWg&_r*, + >p,܍mP . oH 76jb(Uos9 #{fQcl\ T FBj"P21PG ^L"93@*72fg72Uaa6芅,3 T'˸FXuymۛ~p> Ǐt#6 +0Mtiꂖ뉠Hx|d( -L !9{O#$ؤB:`^e)LBDN̈M݈m#b jgӎSAm֣vonx&"1PL H!ͱ`J;i>:pWE<"X,{By}ȅ4:r "  0rRA+~bT1~tEDE="nx# L 5ys_S5c; 7b$a@@e\eEDrY(ɥzHL8cu4(&LR#1sXfD&nD| -t^bٴ+.{\|M\Ĕy'[ᣌE0t1fCQ<yc$'qx*xM;:! lGѩVF6/s יJ'Ij)z0z7ΔN  6)SOGK(S>$Iyx|G~h;ؑ;ɒ?1)E1cA%XyÔfI+&GpXnjFC*meqc*h%{gq0_ͤL1K=jk\ >m˗Wcwօȼk[|-Oz;(zW wو7W1?zx!$r:t՘!$%~xbirk|=O?(v,fp4<0Km*CA ƙn;}}3{҃{vϒS4ANpNt$"&GF*p< )$k)X:C!fGjd~rVk1[,} *N.q}2r1eTgJjSXڶWEmhܫ:B+ 5IЯ7?lkLk.3F٭%=Ccײ5SP +QT&R -Q#yߙS-fr'ND"r{6DUEIQI#q7FyD8i*HXG"\JlB Oc*5(KiJk^jFٻ6,4e&mC@8II6̇èS"6i[Y%6IIM%vUuW]tFΑ)l`R< %CÑ+_ izî&Cl| Œ}eGED8nО d'Qbj\2Gg~-,3d,,0u˔PSkOhLwH4eۏok3&߇V\r}!U3|]YU<)}XR$OV$ "aX) SƎn/90yW  n.:mqkBĥCR]*F[oa݆]m||vPϫܭrnI~ އljJ!R(_7O˕{OQeGQqjC6oM-m |9Ycu g[2 /o9խy2Wuu`^sc> q4֖ڮGdb^~ o0!2 #ݴ CZ}ևOi)>n.Vco=JJQI6W y5"C#i`YxQl$6gS薂ҷOOaRmQځy%K* 2|%5n_]EI?R% ةWSǙNIBwWo߿{ݛwwݛ_9x/਺h#AGcH=;COeWCxT#&κ^#ƥ)q5B* _n|)By. iKY-H~|ֶ6W?Dґ }:VL*.\ЎĽ!Q] 6r,R$ )f@*! T#=.y=y̳RiDV )C0(xfGtrSbz.ӝEO.=(MCovՍK143;s 6&JfbBt2 :Z @MG]:^zmmI 9Zrt=˃&@ڡ<&7nn;.LQK_íoDmi:j2 Olp}*s*Yz~Sqi~]l-M#_ t e8Ct>~Cyxe߆E.|wo{ջ}~<8nLr):C p&4-ϠrAQipmYdDX;F7"=J{HLJJ*3@*˫0?yM\V0#)6gF\s6;˜q灣vn^.~^=QAk t?F ]m i)<}tvXKl4@,/aQB6ƒE ,=zZ'qҬ*\X:r*&T0)FF)i=SFpqiYdU1l;I9@qTQܦl:2,"sGPJFR.k%!RHD#؏3rnKϠCN}Rrv7])WWhsod.*`$ ilrK5"R{t`+Օ )PQ:`*$.rIg^Yϱ!C" :} Z-B$XHN׌H# b6Ha`b1h#ж1bg-69Dp08CMi=qڵB֝b~]ϼz>?ŖT VqYJ`!YE8ep,p$A6|D)9F1Zle4wFm6*GT K jC븞9;qgdzdۜ7^,}֒ޭSTť^|R#pu2⃺xOը͑MRڝTF:L1[EiPiVc9j2ނRYdL KHcЌrMQd;ŝ2(JeZ:$"﵌&(Fs+%c9Gjl|v[:󌇹M?5=|Pꎓyk{|~DB-%oyÚ>@]ק]OOX%M^ҊTĻ ʧ&]^߫Ȁ?.&~sh]?Ϋ:]9Pchf!̡e^ݷwz^5>y]ZIkyߨ1{4Wuf]V֢,nzOrk6o"ۦJSS?^IRU_zts4}?L>`<~ b1 " >5ǓP*iy'9EI@(Kd68 77Y1Rz}5MÇ}g)e<šuMmmH Z{VpܕJesQoJO~hq574/yMb6.*{ d!Ǣ*E{Nz6v-I&FWWO0!T(i #?>^waZ&pY΢E^GqЛOcZ$TQ'¤6Y)ۺ-?mԩjuy09tҲ't [T:bK7ڦFRܓn~֭Jke-e¤& <%a4UTsc%bDGJCE9"9A`m(I.x/7ATYָ7Nog[E*/j NE*1T5{EVXBDha鴶@BT})C霜BÜB mFݠ.m' z[kK>F%`DIp)WM ʕN(̣X 1)%I$Bf^0Ղ;\HăRڠS(`8i YTH$&nd =!&_֔!LKI hZ(bi1g;JaظSRtFV-9IrF3hwJCH]{7Jc;EA`ϤZ3jz_'<(Ĩ"N2ta1$)R fS3XzӍ[뢧<ǘd1\LeXe@!(HkP@DR`)ɓqﯶ҈"w EDU^bHתל0gafwbnePSFC\xgRR\]{R L\rOVpDY1F|Wir-cMW L78Ŝ9s˭#:F X/}7Sz YLk.PHx3jg>Oh.RFXp4?K_~g*;3k~&=A^¯cSEϻAK̸k*˶z\v+|g402vX\ik"KINbC-YN$"⯊U慠>0Jx:qXZ0jmGEr7\tU@(昑'%A 2-FUƌמW!8X SG&!scc IJ2$!=#:j>1(S׎D ]@SO3&c\r,WD"@g0CMRN4|$՚dT85P |>ʼn4@OfO.[JSD ZiOQF)D %AIIpKR7GŨGV-pbwcrsڃ7Ɨnc<\puJ>CloTb$2PH3DCm&y,qo޾EM^p`ߵ2xї~ *-37FPU;Mªvk%P!p &v`wjFnkF)+_u6HznN KÌK)Q;(]S@# @,%82XPA꘸)ՔQMd[}b'$Ƿ7]t&|) S aQ`Ȝ: !1Ak!/f`WkZ nQ1 iOQzb]+[<>Rl.mWv(7?`!Ǒ+48jyG&u<pДw7)oΞX5|4#a)d^A䣳j*qHF'_v\(yܭ.N9WBޛj)EYTMUCдY5tG?M^F1⠗++6VZ"k)SVl%3y3F9# cFP"h'B$F* +8ew51M4"TcpruP!"e`T<j[DicglyVeΩ*P}9GAozGA ^0/N-6Drj$J}s/hLQ*0uZv(2 8#IG4ǟ%?h>"4_8"-(8Rq-,|08]o`G9+;J4#mD%.\_Gv1\ {TIVw^j0;q]6mx+q?)jͱii(%V)["%Bdg Fp^9HJ!$HC┡^#R!N6*+Ha*5cgl֌atag.BVt^uᜢ&:+r6kl7%Xvgq,Z[]T)g$M hjp!y4W#2ⳄҀBvk' ʢ()Z!E1zʀQY۱>쌜1gaY1F?vՈбFF\,PG73h J?PK(jȂRwB)(QLEyPg\p^Q{4[E:ֈE΋ڡ^׭p3.U/^<^\ E90% aP!\\} `E!%=ZR1Ev;刏NMAck|3f 
|/4}"D?>S#H4N;~lzzfZxGk"uZ5Q︧xGyT]?yDQ;ICDΠhTLI(3&zr?F]SkQՎFxD2XグB"|%!љ$k9[$L1fry5lK /cša/Uz z,@LτWTKܻdYAGE5(i7P3y-,q$ooԾ4chZjk? </)v`#G\sBIROy9.IAEPhMeX.Cb[X%_3'|5}j?fK̪A,k@k[osM(TsJD4AE/D1c"<ީ{CerYP Xj#9@zI.Rg:EKB][oIΝYY߀IZlxbܷ.ab ڮ(Xbb%J;ff9#B V;%<4{+X,b9O!_@9r+rƟZ(%!\ iA%Q\saܘP[sj[Wٵ6.>y20{̉_z_Lז.9Gp篑ۏ!+FIȇG\5 [9 ,3[F? U̻pп-x1f7Y39аrTF֏:QWՠ~^uq&#y`Ñ]_c5LUVVéwNw@+[W \hfVgBLScv +pt97|EC?S̚r{.PHJÛ?&_>wm_&f"0ˢ(-(epVv`G{N;﫾ƊR/5JimKt 8-%L$)H'v=֣=6eoO%:舣4= m-ś>XX8ok5e'L@`7jN8m{ȋɏכb5^MX~=a4{3fWgߟihd^Ǟr54a_T ߒ«}Vw,-/UYNkKwa >3މxH],*nڡͣwyoԃRa~g돪_n曟zv,&jmVoYM럲:M@{<~ަ'8`5f4^BN>Э߻O\˽Ub 'WjFR %"O/0lВzE NBnGo-P2)u7h6w1~:>ﲛqEpߚo -vmE;V~*vՅ4RWPpXb#-Z0? h֥QJGupmGvq\?SZͿI'7:?\0^}zCq0xi5 rsudsf~<4|=%oh:X)ˌ,YQ5Cp3]/{36Nncw;\V=2lU~{}ξ ͟Cjoƣ1ar Q/WDuzlϾg>6lN_yB]o^X5ztkJYQD֋:uY.?xꇆ+-Ȝ,Pxip4B;b%1XeeYg ҉|3 yut5OK(P[B(Ai4R3k8wCTʲ]¼EPuBz}yOgkc>w̞^gΎsUos.c6F)J}ߞ4DC(i4%|XB"^M]v27F ;l^R $?_; C3<4xUD;'@$%[nHՠ` yA}~~ڭuq U`ͥ"&銇2Ǥqí,&A Up&tfėsW 1[(0LH_~M'Q @odNYNlF͹4焜Rn0)~B))Xk;M#\.SiD+eiQꁦ+77eˤlV$KfNaj GVOd7n&qIK}Q^YnH"W'O~mY)a+VCufCzu_Z4[|]C> Xec|ޏtZMbc4~6Jz z/Nd#e.<% t4Oe0]r͹d9rIȭ"ưBFD!4ɚa;wr%ɁL]. r[2tE<B~iQ i4@pA2 &jwQZ50`C tut'DWXCW$4hmŀw.]= ]ie 68M\NY*th;]!G$]!`m+ k*thOW>HWVsRrId dlWV4+ԛS[?e"@#WG٪|oR-T&<ɾy3}agu&olZ6)φ|Vizѝo?̈́e\E7e;&ϳ{99;y'faԥ} ˩.[|TZ--Ъm~Z[Ot (B[*r%\3U#4Q.ݒ TZG7a#;U+8sHNk6im.sysVR҃j+yhQ\u`CX;@Z;gv{BT3St]!`%++Dkd QZ13 ع`.m2tp֩f#+.%YIuҧ`jF: u* Zl vno%J|~j8ð5&=)lehX,;/H#5KInr܀h(-Qpr0F`Ey2tp]Z{#2JZK!,B4#2et(JYAJ0%y!\thy+D)@WGHWJ#RRMǣ>G> KWr>@WCWtBte%< COhP?[dtЕr)!kXf\y`vhZʶ+;ծSORV$DWX+ ]!\+R+@k;]!JE:Bbrt•*Bt(%]!]q[4lӡ++H2th;]!Jm:FRL}KHF9Jo /io%{]niRuѡ@J2RZ-SNҩ֪cbeZmEXVLXY6 `ٱ)JPaMHC&;9µ&MZF!JM59IA-65x! \R+@˵;]J!:FR>GV B+K+D;9v#+)'CW'shM PX]]U9SBe<Bc+ :!B2Uɘڡꫡ+NzNN Df`lj W6KK0tttU?;.SOF&DWXv3D+t QJ9Zs]`kh2tŸ!HWwBJ tutR'DW( ]\E*t% tut%'%sZՓKJSt]EMm^1*c=xW!複Z,H!1&!A4 mqwQ*sr spy2j+Dٳq]=])a4c V&wl Q|t%e6% s2pE2WVھԃtute 4%H,u2tpMm+Di;c++5I-SЕd +-]NWt˩D1c^` {hS{;@`;g%]сvz$,!{X%CWWT (5J;2؄ s ]!\S+DHB9HWHW\wq_r$d. wA͗8󬜬I:jf,km;j`fzzɪWMsBgY3NIr=}gϲgc,d;F"od {lp'-}=xs7gY&n(F񎻃ޛjL)2/H70ke)Aђ;tݠ(#﫾ǚje80=芎tCwDWL-.ۥЕM (Gzt,p+wVܡӕLHWOȱK+v.Е僧+E‘"]B|{Y3NwDM^Gcw*Dd03YvdA<l{^ћѻ= ^{w֬#|xz,\ Kqr6Cwr2ڣ{N.Y^;1+^ZC+E"]E1-I]0{^ ])ܸuhM#]=EbJ-i;ES RJӕϧHWi>Ri9KY ])Zwt(m#]}JX +d1tRJѺ_4(8$kIW!}K_5`G11߮r~q*nCU/v?wAm%Whw=?7GidmP𨻤3u 2Slz}?ź\.FWo?bKesxm'7+ʻ"}ۣ_nZv{֋[tʯr=y/|K$;ǿ7cAB;GR>L}nnbvyr}K۫mkVGq|~˷Gj'D!7G CyvFLda8)0d7o]o훛$wcFJIޕ[@,} BɌaEhƄR@A-V_M4jDn/fۿ׷7Z6dM8cliu!øl)C1481Ȫ@! ?Yz=pޜ`5eѵ\sϙRRmm3(1F%;cpu~p6ɜ|X SZNSY2reߡgs|QMB,kb`5H3Tg-"uZc$$XIt& c$uK;T=ךI!0΂"gD3 6T r 6P( _ kTAM*l*c(Q@HseԸIp57 NRX6[gAPS_`HQظXPLRW 2bmVl J*Ho{m12t)跲ϑɢ4:)3j! Peń(շAvIgFȐYb,<-d3%rHzt;X3f''SǘAQէΦC:Y #NHȿ2>`EÀv&^8u<ɯ1Ubzd>uB I`-d3x:p4!A[БMg%P!jmH&Zm*d`V`y$E  .b ) >@(&5yT^WeL^#`ҥi1Ft`™}eHVHnjǣVH܁m JrC;ϫȻ`w @vV/#H b!ҧa)t{NP'k79je V\A+tm2=r.)LA]>ATXBe/Td`|GQh_Ii0eiU"ۚ8g8Z]fcT;6E@1 ʭ@N*$Y5c(#VF/(>qF yIV"B;4XTЙ$ fZT%֔!6"xw؛Gcp 7g dV&R0>qV%flc'jjkZ2Z ?~X3,Ɇ=9 j3YT[L4&BZ&7퐄PྠG÷*0ZoAK@4X8Py[t>??/2rnTg> B=0x@hf ՞aj0=>g;H'G~K@ɟuW"@ EP8c$$_QjFR##LW=:H9-йgQSiccB8tmL*2? dǚAnMJu]t;T;PR#Jf;ݳ,sP2pNfZ Rw!*C6(=< @Pp5Z3 0hSyP/FWfׅiq0pdgF ֓'aPJ↑t O4 a4X"!R \ULJRH0IJRQa,AM*#)Est\H !B`|`DcTq[ˠw9fp6d_3=)rN¦U^9}w>A }@B> }@B> }@B> }@B> }@B> H >~+e> Y&%: xk`B> }@B> }@B> }@B> }@B> }@B> }> ͔}iUE|@-SJ.t> F> }@B> }@B> }@B> }@B> }@B> X}@F2Re7> @+h}@E1aV}@B> }@B> }@B> }@B> }@B> }@Bt,>> ~eRRMKL_?)2~ , ۃۖ3C?ض\*l[ -{mi7 ]'lKbے@ҾUO cu*hOW%3HWGHW=] \#BW]H,/)!GtURw ]L ZANW$HWHWRrVI=p~+ʫ\;hzZqY|5u*&D>25 fOs :GVV t(ՉŸ ;{r+]\uAIs7~{2;h yT`: ~nr Mv6<-f0ʞTq}6ޡA L­U긔^vd~3 WO jrQW]'TP9;$Xh"UymcV»B~& ĕGsz \j Z. 
JQ7n0X#TUK!-tUP6D:RTi-{DWNԟYWWA;OWYc+M$2֌ \BW]b0xteO7tU޼/he_1ҕ-V)Lџ|W&,h:] ϬzyB_ j V~3Zaj'Vnѕ܁$վUO91Zh\o}ص%*h:]"]!]1.=+I BW2ub,tutˢֶGtUk*pwZK:  ]#]K3 bp)=[%;[I&+Knm P8qnm2 ?;=(]'/qf( Koay̻5_F7t;'N8KS9Z^TH=TaHBd@ q~QWaWl+> _Y]WL*ι.|9E5k/f_fuu%ۨ% &V NbeU09l$bG\lUo"9ˉK$W2Hl$a$w<\r#UT_誠x*(%Γ#]AK]`CWd}Vw^]!]!]]3*pEo誠UB((5;F2Rk'GIS.MgAd骠 *"z55WfUtUPv͟tRϬzuBsW;~ ~}vZs%wCٵ+])}*D7tpvC{(n(9G:BbP#zDW*p- ]ڃ}n71fDP!1'e [4[+!,A VG>xNp } -5] e>mtt+ee5%Z9Z(偷V[Mi"`g9ݸ}ïFυ[|x>@=.ݕ߇QQ~,OW(Sx9si-YYwwW/hsev[4զ9PQ!-cwwOL`$۰:O6?hͦu^6g뜦պ;RPm+nƕդw6_ϻ<#: Ʌp޲Up՜}"'.gUGsw5Ip?y4gnrzꨬYnExB>f5A,uI1ɓJNl:-]}TetsoxC2h ~FҶPѷp&nPf)+x\¿UmM'/z\N8^b8i:_9e_:ZS_J4\|Oɼ\h17-fogI찹ip!J=W'AVsP7i1 %K 1mrm 9.YS1!CA}Ahk1}fyqE]0yI$QjNX\gI=pϭ &ͧ v:l_=rm5PB>!:2DEp&m1A4QKZy[6T=Gw+vme\Eͣh~7xc?K 0,c> 2 Y p\2aLe".œksqe24^OߐfAA}%$ !JC^;EM X?#wVEBJJ];E:DŽyH-0Jph?tIכ捦[={R0RO"DSq2?+`:T]`r :8bLX3+c]Lj2Q9 :ԧBYw] {L1?w|osum\5ǬWy%02iNT$%d6Y(Y`’e"!( 5&2jKrbTV]޹UPD߂6*b"AlT$ULy[vyZ}MHHpcQc6@A5,^K^7fс׆ʘ_rTv*LF!h/~jAyUV{QdRrvQGcVң*:|F:7ߡ|֍|BfMpI*++YLՂRM1rM=q̳|Qu$Pb@ڔ ($F39R܁JS3ղ`l 3.M _nYW)̽&{vL`{XDl$5KB$ Еi&ZG&tN QUBH IfwIo19hH  v ^kGvmy,Mʏ&nVL/}5M=xR

2ʼno98ĝvM g{.2wc䪤y%f6 1 U9L6mڂ4c$6B̮vk+-Q>jR԰8db-46D9@jˠX(JL!ۍTA()D!14pML$(eP `2Br :.8hׄ}ᯅLP4eO}Є:B ^H*#ZC QzJ:JS`>TRTdBH ͝"ՑZfZCW||1^"4:IpxG 1g?C GsDUQZwtNQА*?LaJ8,źq2`o';PEFC*]eqc*h%{gqHd3;/hl@E2\>Y{Lk̶4Upx{R?r̰:iš >5_.ȹ:g u,q ëτW9UyX4 (-X0`/Z^rÜ9Q [cI"!t]07ά}|0Shd^2egVCVcart] }@qJ['O`g^Onz_ g3A!RSJ@@kW6,x*IC,g\;ҿy;t( 2?ʂ‚gK"PtV$@92(% EAO~"AKE]K@wG.&{ :[ΦZMJ|nfӋNiۇz_n X$UgvZPeJbE,G28CV9ịUh4BkM,pglrD` KF%[ԶskqWh^,u,~l)Jgf,I BWaz󬄘Iz[pOo $bb6H_xU| {U6y=|L>",E>QKN0)lG.aO㨥\G%[)EA@(Kd688 >+r*-Wo^Uet m0lg8j5-3<_EX٢x*lb^ۯ O֬zkmWPiV:wpR S {30,՞ʠZ.ʅof$ݘݎf9o@. q=k3`"BI^j1gJ.rND˕:CבdѧÒ%46+& 4q4( j7֣KWmnj=T^OnQ꘽޽LDn%m|$m{שK592J|W sh]?ۻNtfeзWZвIݽm{E;z^j}?$[Us6Ŝw;%3ZaϪ{:L- !f]x޳5M7oK__yjD,af֙ɔ`gӯ+"hu鱚:x]B`9#*Dk)BfMpz,9e=;lyzD)iKzlL췧l8{":gk!b@FQҀHS18#QOM@u+¥ĠbBL1jve4w k ǙR5mQhU4=z.WY$(Xh6vM~<햘`0:5 y;%s= j 1zm ԫTE5ZB)TmFw_Y8a " HitL*""I 3âҌ;$8"` (O ({(?*Ea(o`2bV̺>RD81Px WCwAjXuF1PغĔwYv躣驎]Ȧ|EB5^ x`1`0(8Ma`eR<+B FAaiD614$' -i!K"lxY.6ڸi0]Y#gu*=X Ϳ55/& Ƀ9~Cvae*;Nj5\A)'kYgvA.0$|&ɱ`@2 %>F'RPwb&pe')0y[  n.:mqkB*åCRM*Fଛoa*yvPNyd@' :ջ ٵ)HѢxmߖK.26TXpPv<ZukSKD)9Om>Hg F̩j͓i ~-LTO̦obzN̕A\[kdɆ/$ćoawB #qGb|HWMÐaofy|a8*`Ztz'1W %G%h$Fm$C#i`K]&!,>٨o씪yﻹ *c˛?q)/mů'B+B2 | h4*=U]T.URoG9|0_@Hw?ޜߝ&o>{_L꤉Mo!@# Ϸ>4m CSv:jw60?VqN+_E> \4%EMy7L7P\צGhA:~5_D++\0s,kkHE>גܥY4a+H6ڡ,Z7Cxf}O/ٰ8ܪpK˟:/VE;9&/uNV1T)Y@  Z{jTknl#{"r H6"W6d=>_a6v| ߇$V \99wr8jt3\J(׃_ [Oh(Ri#G$gkCLrӶWf;orc58@L`\R)@'j@+b%芖/"ZbX: L!Dު3SWqVJǔ%w[ǠoZW uחcq ocK) yx~៰.0ਓ"H1pdrE4P+|JD$A$B6C'A0HX-HȅD<8- 9<@ED"jn"n`,,Rg2T#PRDJ8Ed;Ja8Ҳ`hz{LɧWz3$.vK m[&5i&OO,xPQ2ta1$)R fSjJ u3O^{>EO'y6ʏ1acF ܽXe@!(HkP@DR`)tɓq:tGpYrx`% D" EW: WB)&PpQEGD;o0f3}Тgwm#Amͮo'2;-6xb.AU+J, 'JI- ol *i sAΐ%ZcdA$18>Z{Oni;0_.t'W I/|)}tߓ^4n:ߖ)&&,u 1}dգ?n\_|= Z/RdE8]e$&MPUCRpLpL~LN~%b+:KM$͋)j1(/:MK[D~wϚ<w~Lc3y4(S\W |6EOS80 z,X)ںiJl֩&*^ghXJ.3hwcުtNm&EI}]_ Z&ϕ/rΣ%J'.xh>S@2E{'Yel-EIuy=`?a\$` 2堾ɋ꛺]]ke-e¤"&& <%ٻ޶$W]7!d7Y,&3X_e'ٓx߷dٱdYlrLHbb?ͮzStFXp\ ouhBfs%h ژsCm¼ @ℳ%2O9<11?E?<u:ܨ=䠮4o999#uW]~1w9jz=?Zgi3j;^^T$ሬSr#WzVM>w,|уf̢~ꜻK1-6}SB/կ;7o:#dQ6fD9ث괧bz39:tj9i6lǙA9Ow'&(AsebI(Pz0b7]Jܯ+qPFPZ8|)¹ nd=]m;Ct+ ]ew2Z#NW++&((]!`ݱ2\IBWV2v(=]K ^>tp3h4JtJ2h~jV"9VU(x~teI{-6lTlddS0%ِ ِjv!47^ o*=K8qa<,|k堨=j ᛜNfn!YhՅ2 ݵ.ͻw}r`/o\Nk݂Y'ay9X*0i 6QWK-h 2ޗ|ri>gOj#z cٮJaƥ SAB)Xe$:xɂ BMV:4Q)ޫo8Q^A5xƷy>8Ns>\0_w'ӕCuْ8UaPCRΰ\|JEMԃD؜6 !1͈Rp BFy#qC 4c:@3CPp2IaꈣQ1xΒEs18+^Oї*gV]SK6%2G+xjG=-ڦL;zNl[ W7+[_Rro髺ꪟX E93O\ahi{Q!}5F9CPm>'J(B]l\YRi%ږN-qlcf4S j5LbRfaQ HD<CRx/Trsd)2*Q?4Rg-ךRO"&Θ &n8c(,maƤ- =k:I5KSH"|6}Z-]HU툈 u_@hַOB*) )kB$^2t(D9 ?P'چ18a^31mQ6̈gĞIq@SzrM( f@W8u^j E6,8QF4"j"<(`) k2Jxޣ'@q(\8aFl sġvȋr\mu6&%yE! Lʁ)dH* )p1rEH !XIz^/-p9>4(l37ӫxQ͑F; ]mĬVQT$ُ/|c5tq6MtTyJ6-sO ȣARJ#S;)Q;ߑBzc{w 6DP9E FŴ O9 DQN m9x?Ś,3o@7hdqq,4_FLi)8lyEIۮ\]\m5/90y^)|6s[pzٟЀ]l ٫s ۫sEJײލ*d_%0Km.42*6Zgұcm6ķ-,E[XxC&]c&VVL$)@Ҁ&.&t$Ѭ҉( b@y!Mp"y.4JQqe$pP;m<8<G&yeh ôpW͂M[cl.xeMG8]d8xtruQ"oώV ˝'kt( ˤ 1f 1*yP ' Ps4}Rǎé_u`qܽi{ֱ2xddJ <(T [St6~V*ԈhQ"k|T}*[qk槎 ۥ楕u!@ R8ȫ1w[nu@_5HRg*E//"H?5=Դ"F]^Ҫ` R*Id6$H$jѲFP4M1DE.\3NSHFG(DehHg`H`y [ R8YBfG-nΡgy泬N/9.qaK~7zhe(!Wx 0o8v9.~cَy,fq|6Ž9T ^?p׃aU"@ʟq\iLAtpH5:9 C%_%9:{x8:BV$l -]jlF, GTG㊟ 淘ghx365Zl;U}5Ϸ}(~HnXu̳8p?rI90LUW |GŢX=BoQ)%Q}Ϩ$B<}݇oNN߼xJ>}oO?~xp\ǩp#v. 
;鿽qjilo4װAӬ^^rdW׻B`vE(DOU٥`cUy}Us{kē|$ځ3;#ƊEF瑈`$ M<$N$,IZ*p#1'E.H"*mԇhI PtYәƫ˶$7&[%ܣ&dmWvZXP^s9cOynI۹ MD|;a:.!P& >f(<+&x] ^86N{ ZM2FFthSM 9ըI2%(bZ82ЗKnw~?`x$/WId{gI1)+2S]Uz8m0RS-yA)ıH1WOpà1rj䖫̫u_M@CM$FRVZ8L`8At0Hc@[9㑔2Oy3}2`i Rpeߦi Hxg+QH =aMTUE"JFc&=j9!|\[143;s 6&JfbBt2 ):Z @϶2vD26 e"}j@[SՇAq rN [ER5}ϊؕr.HX> {M&^ymmI 9Zrt=˃&̴))v*L*מCU:3;nVEK.v4#r!R)E+>Yz~SqvH]졶|mȒ{2( lx߇< \9>n˵ؿkn3\=?BN`ӖGP@kL}a36c9pV.ԖsB Ӎ[i ^R PpzW@HG츑O+)!}.MMޝ}OfƿF.|SD|x\ʅݎnvwR;QYƱf@\ DآBΊb|l]`0 H=ov ?*xJBl}rUNlZ: G*m0CQw_TO~.2&N_0]ӬLߙ܎R^ ~L3eoAevdb_s7f_@HwRq^x0?{MV0#)6gF\s6;˜l;adٷN l~\Q[lڈK6NFl[ CҚ^SvIqnz0/a^_59VD)  KrY[ĚϘ,སfYz*E###)*.5:C l1MAPn&x92J#"(㎠;E+\$JKFҰ]clw]XB !mx>NX8|_^- 5ƀV4 zl@`' 1 vg m]N;B5|MH"yu]ntF&dĶ۴ԟE^-'L!1~ tBpRs;TV&Rw+b%%RDK Suz)7;}i1QQL5 Uqn3j^wr؉G]Wʦy߿#g&!Qu $82B y4+|*<F؋FH\qZAGb"-"ഔ6( Ipda7Y7,Yydf%k3),-Nrgq[΂wDðqMH4Fvз G;GyiIƀUenuoS+lTnxwyJ9ßLX.3S˔"HMŰzDrb4HMN:A8UOy^ǘtacF 1;ʀB9P6B30Rh5O#'uz/Z*輰8y`% `D" EW: WB)&0pQEdž޶a;l5..B52[4Fԓ@QӖ]n'pSȆRJ< pq;{j\sN8P*:g@}SecTIM#Zp,# %qƟƟSuZV/ t pw`tFN\0 C'7$Gޕs@ ,>O?M0֞̓.,<]U9B{wR_blW{Ap/X$ .W2z5xj3ܕ8}Ap1G}'W=^בPOun4?$7 +F#cCJCwb0g՛NLD~XM'o1w@cDsΧ1OO~O:@lא ϛQy!Xzd-ylWco@=RqN )=2b8N$ $>X,s"ȩV nK`UEShhiQiȰ[K!C`CI-!$xq !QmGs+Ew9lVJHH!M6F@~I3~Jwyr=i+ RغDG 1_bJ? q2)\*O%>%Qɱǧ$*b~ZH}B rSQW@7+RtH]Kϑ`3z/F/D@ԕ~ҭzcg'`ɨD.V6]wWWJ[uQTuB J2v**Q+ȱDdzRN1-xnps3&@'?|6U/mSkyB\.Dy qάQ6;BV웏"+/ܸ[L]w.O-Lb.R 0/Rt5!Xh-yF:pP[nYiblT۲fWtMljeUhЛ|bO 82#H?::=:kaE>ג)E 05XZ]wAMs.`9OF^[:ܸxk. ]iw[;vS7v &uD9Ty3;˜׼cag.wZ%To̰kfsV P}w$ƨd36J~gS~VnNCVwmy(CX2`ҿO?*_%',%>(<LγmG0Ά)a&ޜH1bgA'_OO\_!;({P9Q=5L>.s:'KڔVz@RRk *tD|>I$?:ȼ!^Vٻ6r%Wc?엙>ǚ(G8-vK,ɲD[R8`ɪSH5bktbe{\ ct7\л6#Ꭓ&KʿI^V-ic[? uy][c.%B>)jvņWN&W)*R+^+,#'h&HX0a!:dt6'2s:b+Ðƾ6jhZ8s[y=9+J]٪'WxS Y+7TkQ8 Դjѹ\Wgu~ͥ(oǃ~%XQv)"G(:Z(k+ݑhx{{|&"z~9ʁPsL,՘2}~|v)"Qv@Dk.z.SVxWw>KssG#:#(EaswǕQ٪J,2Cd xI3MGӱw0UzԔi.狷7i0\9ә/5: ը[F٧4댆xi3D^r|P/m2'L Vsr?t]3o:OklYlw@L. Yi{Yzvc5䝢\Rjogй4Լ*F'(.{>)wIZy9,IJզܽEi_T, \%q䰀O-xZz=pu9#*W\qWIZN@JZvJh>#  @\Ap%Ƙ:\pz l*G}6pe\*IũURpJ#9+ A\FHIJَ]}7p|ga\r$z[?I\qiRqv4)ycz}c&?3D%=J*}.pJ$Oͳ[z=pŖ 9II> "~ [zl]>VEbV% ˽UNh{Ȳm8!n&,Ut ~>k-`K&x.\?%S⌸p$>MZܳs$e^%W)@`MUs$-?yJR k+AEcQe;c)c㒧S6d;?Lv~P ̭M;$nsv PH{AMHYv$o< Ao/.:焓!HƠ6W| p,#'18_jF('IʳHisz8IRR޲Wv8B3+Z=Q\v6\:\%)%j•T\ƒ \%i9uJRv5j))Lky6p*$$%m5•T?~/Qq;fwq@eAf<)C7vVtn].v6ʀΟn;0Swx+k(p1 ,*l6MgZΉf ! ז,\R/=ܑFԆ]t&pSTb_p< 0UAXf H+;uSү/SmU՜Vi7>M:WnXIQAVh}wvO; ]]?&H~WnO[рm],ZXJL!0"~ {~V%/J314nV_˧n|7+_iAk) ts񅭏\?; T7w>5^[Vΐ%]WCF?7aMfZn͚5٨$7^+kĎb{&[=0DmD2rH "/k9D1if n#w]ܲKT@ =2,Rg2DOk3E4^JJ)rJ7Dd%0lYQ#>K>UNwL {SP٥I*;h u,Vz3'_\JZ^soVy;rax[bpHu6MqM)k&\s֗ϻ}Z>鮮/Ϙ0'+_ @`yݼ~#5əیc*Fr%ZzQN@L|D6#46ANh7{5}lh߬6e>Iw7nb-d߾}$~xBvk-|S:xhUD9e|o`¿YoGӗk ŁJe/s*e^Dn4BAW yihPno1I4И~o.Uup{K+Qel>L;@weM!-Cu\zDhj1*RacB[NxcTftv9w t t? ,EP? K_6S^׍ tpM(!^j4q{ j|*8ʝ7iV!gh՞6#pHJ NVFmLeS.Ny-|Z)LenMf0XۯBcK%𝻫0L2'4;oC#{v}774 ߠfIokF[ >W8+jrMn̟|2{LjN Pb靃xқmvRLq~tPV>:ӻe./MÛ5[[w$'n9[vhT\3b 4`M,6xABQ-[ޥޚC̻sg=[]lu$XgFviA k*b0kQN9$BY)G EέRkGDcZ)bZј{808BK}B3Mk` d=i*oi٦_$/SKQkrD\|WRlv9 Ҁ`c|wn;AF j]1χ䣥GR Dicw7(?NBE f75cmY'|:eR|Jʨ:R,#\}&ө ^<1FuD^"֦8Xda$7GQ6*h@,Ok-.e p) C)pZ`Ru µf8z<8O\Fӓ!5E$}Ws)B:|}}.|8OT;->ݪe]gBA%AYO n[V::J"0q%t$(Ă%r{O%3`NKkxQt`(eZ30{-#cKҒKpV>AAIy|=CZyc9 "6`r+m9X%68ϑy2HJOX:j1D(jxT~8!ԀParoDGvyPR񀖆qaCS.?WdWˬWPwz% ZWosDtrZ^l9HI)M檽abs̊;XgE@N N)~z}|0C1G?i3}p?/>%ثix?~釧Q+!FZR EI@(Kd6&g.|%2{sR7A<(3Sƒ m0w8.z86 IA^ _2#{lݲ_'QUo>y󉚿*|3J@?eUI=R ޏ0#{v|5]:A.Oכ=a l-hDhb"IZ1[EiPV} uaU i)V% bjXRHc7AQd[m"<&hn<,yO:^o[wݫnHlIFҊd(UZ/T2I%m4s(]?Ϋқ[*]la,+9^}PkeeWJn^xC}M_>wwq=ʷT ݣp$ I!Ksy]O4MC,xcqG{W;ϰ]YN 6G!d"gqA-!Y-zGK(E#FᶅLJ_Y`޶=ڸѠ»HitL*"x 3âҌ;$8"` (O ,({,?V =,Bu%+\Ϭ,#Ht)Fe jHN:C5H]2Uc #1=|;6]HBHP oaQPSc4\I#=FE\GCm̐h%uL;$`nj"Cp. 
4cH|Zĸaokag&׉`a׸7/5y'Ƀ9~܌Er`=@HD00Ľ IC]E^1x C*E-6`MHtH\u r-QO Y"ce12#tSZ6R%i{;i5ҹ~cA'v*M׵zJk)t R_;}KgUdZK/՛wWU`^siFq88;ƖخGxU⣳; 8Í-1˦fHc377X>0 F0bǣ|elCDU6:dSM} }->y$5amnjyS {R$Äv#OkN!rD}* JmlJEz*^@Hm <U^sXec/b*ם0.g-cЗ݆,'CCye3(>,5 r)|Wsb45f&׹m!Z?@t3݁My/%:fm7CAͰ$ XP+ଚ0fbvMHwTa% Y_&mR+R3E9AZaLv8Qk W:em^vͶCZ>”!ݿ? +r3q Ӝ G%7"JQXX[eXOXٌr;hڤƞY1aqOYb DF)#}F8븴K2*%U?9ʍ"VFudXDA掠@+\$JCFҲKkl? /C]񼀕9!057;BЇ0Me1;]D@/uae(U0X46vΥjh 2gt35s",k@}YWs,dHPFieNO~"AKEKxd!R )4LVL ` Xyږ2&"2BlŦָ|2g\ ӟL5]v6b~A*E9kl mŒ*ڊ3N;kSQ)`tGH@#{ !Iw8PC)8F1Zle4wFm6*GT K jC{=moS伹xj۰)ozJЬWt i\_Vǒ#9?=ݷGH"΍06$:=#zfhdž9,p U & Rԫw+b%%RDK CXCtL1؆,u `JJ"$)MS"Y %0l\N]0$Iߜ5MuHޝ [_[);<4u:5n:OgYQ˔"HMŰzDrb4HM{:`A8O;g<ݛ|N|kcF8p2Pq;b s lf Y)BI IV$O Ӻ?wqAG%ǫzV2N$Pi]Hzz%b G*U9ql# . BL_2]h6CCۈz(j[B4 %ԆO \zXȋo\WP9Lc1 RSm-D3mRI-YN?^CB9d@)m*JhBy؍ g1؁w J膼EJfE?||E?7Oo<31K,<%8 2?'_ϳdRe_eyprnp0CwcS*2+gEB\9;rԤhsOpSO~tOz~ؚΊlB* b`J$w0d$U|@`M_wd gRMWZ8AjR[w=Iuj4f}o ժJCQλex-'T<0In4`40@u -A5-**j:4ue^;)VDDVg.ٺq\3f=hFZJ`#]ioG+} #}0^9K OE2$e[Yg!ir(4]}TuUuSrLXfdo=8Gv}{7n:vjNl.8utx ^H-$ \(Jg >^Plgh^#B/@J*bɝw%hP/+rAد['PTxPƒ( m0$w8*u[_`Be mqy fy$h<.@.X<.8̰C=.#I\%DV"ZB衋D%kKWS"y:qRBE\CW@bW/Q\ 6NsMFj)Kwu][rŅ a\MRe 17C{W 'j.VsJݸ-՞LwA36q]ϟ(gs]wfkԅs&%cwO/Q&}%=y+nK D:7+:ÎqfL]6+)".\̫lt)<|KFE܎̕>ǷOLrJ͙"<]4;˜if mgv3['iYGH(򹖜N)bxЄ$SInbY10 f1gra9+~a\!zL\"Xʣ1\`c1!5^!FDJ;%rZCWJN[qŕ]9?qXP+}$*IŕB՝) XW\E\%jD%oO^^s诈Lb쳁Ҳ') c(ԗd={wFN:K;w,r7_XW$n~(dnޝ<W(}IN9,q $\2rŜ3L$.O`8l^|(4,m[36xN7q?r0~@T?XEIUT<[Lᝏf,V!KTO]'~q7y"Mޏ1#hUQ9IŤ׏-g@cn/~ _NN6bsHMHH%4 r-;h޵Q stbP0vY4Ұ /i/RgRC+5X¶k_Ýe]:)5f(钠>_^6rg|(NlkBꦾz7^ X}(uuۏh0?lʉ6].JAU1 *MnzԷG-]HӯWj_jYkˮU(]nwxdJfjZϤi :k xWB@.x3<6o ݟ6f׭5&`a-[K,5MOc].Pi@WȜzdejQH b99SߪA{OHUBj-k*$.rA c!R4hV6y O$hIT`۟&MA!qܫF͡u#B|ne6}OKX+8`8Ŋj!YC8ep,p$A`@ 6NcF1Zle'h2:ڀmT dSEL99x ǚwG_G?⌊56EaT+Zy,Kj_I8&Q ј3Hn%9KJ[˱rF2c[0}`j(X1-匛AbFamT!D!e!x0k1s$"﵌&RHVHKDzgclh9Vx7낮mgɸܯ?H(x&T8='_^~4y"wԙj]Su o^u0.QD$mw7a&)BNި)\L:^ΡtT:/K7ZSjK۱GJP2VծG5 As%Wd7yک3&k,~=T\.3oOz1[9z?Yi_/evvz_Z)lYvy˭͟6'D̡6eO+"X?]-u.TrFTȕ*B *?[c9SlnI~smñcީԜTg+ZDLq6D(JUI#q7FyD8)iԴF:`bBL1jveZvf8zyCg~Ms=?Ϧ_c>p%H SXYj<QVdX!2`aOJ Nhnmt4x{:MUN[lB\:$UXFb)ƭ.Ti޸DcS߀MV"L&gSD1m'?URl|;8ctmT kiq*Ep ]tm}0Q%ڪ-*œk ~,\To/܍bD$ |bsu]-ճsd~qpo:{i3\[k6#x}3=0~!`(]Oz4ksPUV:d]u} g}%}HjgXލB֏Yggһ2)[Qi<DE~~vF/zRy&KNm(- ƨ?&uUUyL+*`Vv+_g3& /޾>~/1Q| y[H|g~݅njys<˽s)vE.7 ."Oru)1$NGE";o&V)=1H6a䐤2iㅑJmc}J! E N&xnJļw w l"1 6jN 0FF3%`8u0HcÀ%Rs -XGl1ѧ[L#AV ιtzJ Na2|}DC}R 9I_UG((d4&jci ўʭPtJ3cN+9 mc2@o4BaNg[; @5o8Gm[mM˝A;܁<&wFj wn2voyt8~ɭzTcdt& w+b%%(Zb: L!Dި?EpWM鐀v?[ T(3LʭV7vaǕҢǒs?!-.0S@`\<JA]+|TV"E"\-V0Ղ;\HăRڠSȃRpHDM͂c"{&CL)I45%KJR,N,xGI4 tY/zC$|sVȕ4ywTBv@x%б}OSÎz|'<(ĨCjjrT 6_HQN@meO^dl0l7-.] yHG,zI{83e[eD((3琇!tA%O&7^|,By6ǹ66]/2*ոw6>̵-YQ`UO胟l φ 6-7&MsGjF|f}>{G8MOa_aэjjGS-=˗坱rJOfjWĜV?syUͺ*- ZSڇ?\W%Ղ<)PH$ȸ!ei.ů2!xǧ_=?=^h|l?٦{J^9V>_UhnQ6:9L.v_ aM-v\F*5`WOG,cHW77ŎtVgۅj֌ cBi?]Q-!?F޻k8jk];9A(rt?~ח[SZ)@}M1,xg?]]=b&r{MgkhI\痶c7^c<0Ck#\逮(=7zL˦C?)SI~aI/~!Cw9zeIޤɹgyd^j>]B.9nv^Nz,m<7Uo_Kzu'SW‚ &?LVBqU]^+W{-Mw/z3޲Mh1FlB7ȩvp1}-ԕH[@kR\0fx 3\nҕ3}wf-w΢/f#JL\'7ZnU}v=^.H+)mnY#3UV&'> f+u`/N:ޱP%ʊ( n+(tPBd)J[yYÙJ2HI[2[^!;4[ol4{}f7Ot) w^pRVĸ-x&! 
M%es!nܘf:)hQUv"pyrЊKNZn4aSq+u ?ChOGqlwojAe+δ g%[r2E8Z֬%g͐}͜F_qpSJFDᇈ(k-mOiFxҟ9ߙLGq ovbe^ԓVtt6~bGz iqݞiOrf[v!X ?_@ǻ 6HA՞$%2W vrOy&Ja3I,Z`ٚTݐxr>/]6iU/MMlzsVJwFqM͎k(?e$ >.7`\(]$LC3(#n  @~I^xwy5[{%<]v?L3.9SaG돣MLjej=gV0h4k+G.S(0UOV^R+Dih#az'>$PI5RrZ4N޸~oRrV'\]|Ԍ|>|O)qRȺ3AY4 xB},%XbZx0wO+V/yto NWF4KYg 7%f| [o#~E6fv!6 Ŵl}Y@G(tTZbk/f<&ZtvdR\P^:?!Nj QAgiނ>rKhgRzY|{X]e\G(էqPMN[S>z-aE w\me CBIc?6\vD!HP9 8>B&Q01&HyUa)fCyrβK⒳7PyB7~`"h<` [1K0D4v,pysz.VpA_춂NtPʭ`LqNVgS()d7%F$Cӿ"/׍3  :2:}pp$܍czf?1Ls#U|b~ E b`0]jv  T~evbeﱣ = -e'`ZD-T P"N;b>)F۠h&&^6Fۚ3SB]օldzG8~-)ܔqcZ7_g%JilN[p谉bltȿIh #-7BU,n˺ښpuvə7Tڋ762pӻ}L\noptt@-!ZʳIfB:cj!}4JHadԜ}x^ (%r=hYTR$\b KHBPj 0jKRdT.R7@EBMG-~ ZP B2 DJ9o|abW̜5f^iE /|ny ʐq qߟwŭ+=;z&vb[_vTvZ;?ZV OYm k CRYѷыߡ^l76z*4sHo%90A&FjzN2'gFQ#tDdb =罥BD:QYҞN*fΚwB.M nXKh J? < z^)$ <P&+D&tld'(*O1r$Ɇ(Y c@ paKZJsƳQQ.ȋ]ɸ*O6-\Fwp`2VAIW!y]t){rsxֵW,>R3[,1 c08\Aم)\ɕx(`4 ko=)'YLSA!Y$N.II\S)KR9#c9R MPB­bܢlif+֭ۓͲƗ:.oOx#* D  ,KA C)Nr<8Ɲ O!{_ce*y%@3 :cba*HЦH)sGl?B+]w j=jjK"D *al()`j<MX(}"FȲ}J; %HQ"3;FHbP1 8PƓD\a<,fx< Ǧ ##  5<@@߷ J\M# 2Bh %DiB%:MJSZqƃFD<0fBsY0"3g#j8;ڥbdS\4{\S"H ]^AI4H@>F rRb1`$6pqWX3>[|'CX~xϛ1 E겮~|GސćrcCHiuS 7?fc_8h<9hUD%'on4aS?pt8ݺo,(>\I}߽q֪-YYJ Ec"-˿R?k5s'5}m~. 4~@!?DC]xnv^O<\}m;045xԒ{փ-4Yub-͐D٦zazazbj\hZZ rR$.jsEɆjbc v ʓĚ90,#I|-2*m(H0f j2duwe'/Oϗs~nrnI>ϕfqM|^ dfs bzut^Wz]Ezz)NZ kU5l`5l`J'W/կSطdfѕEћh$wQ)G´^z?_~~r~*9/cE9pwZ!Ƶ Qv,Q~\WgtdZ|MYCYoN&_ݾ#u9mnp\[)U ZL.;"_OvyA;ׯb D2 zPm *bH9Gk77TsKPqC% hGps7pUuz_uWJzpe{W`Z%lZW/{Wls,uW\iZzp{W`ܟ5j}+jWi]}z_ Yak𫚳~zkpRͧM(mbj@M#>K'Ppsq`Z?Ols@$]*h-+S Ak2ʀh`_Ц)*U6)[l_uq: W7I.£˓ћ0;a:=[H#>;ʹ"Q6[^mמlamq^ڦGt=t/{S>.d~BjRrt($mtmxt2`ehߓn؟m;ٲyPW֧6lIiH)|U}V!cDB(B&iB)CXPTǢ;)"lLڠ2ɐTJtD d+%2Ie#9/ yA]W̜- д7FVHݧj851a~E.RjBSM݈P2(=Ԇy0 z.ݩz' =^ԆqH|,\t@)\D*rd0-GS(!ٜXJC[ 2Y" P8̜-9E '.?FһC6˪`uw?Ns6={u.zR"ZOH&VqP9z`n+i>k7\.$)=@ʚGtQȔ!3svU5UaET'gޱS7)<,Tf{/R3֛eX-wJ9w]>r.4#$W8lywY: Jb L3&Z0&;-@\ dݥGGT4 D5/{"l(*)"%+#JDR5RJVBz2L"![ (-E $M RAKpHEYk٢r-@QD':Yd\tQ9آ=qI*ANKg1n|hpyڣ8l0h7;q7eUdn("wx'|;eLEg ub "ԅd(1xTZU㽐ݧ5  ľfRº tdHѰW0H:YLʲHA1[XG/QZRum j堒LC`|F#PtN^|DTWbI(UwG)2Zt('*pxB&,dAJ0Hu02vt]:B5`b ˫oHfmgbǻڄW6b-AyXFe$iOEF  _rjvdu:ʜ-qGI\/c|uKu\,7Ś5&}· D~!z71CUe_zh)oّVa ( ޒX=QX=xeڒD$\$aE9"\pH')J)BIwU[V[*G9\|Ǻ8/*.0 FЇvv}4:-gJiGG3‹CɎUZioZ\J'5:jݸ٣ujl5o.Oެfwc1LNVg692:xUA~;gA -[swKQfs3'vZg ֧XGAϮ/9vcJ&7^=,~^HmpՙѴ*l9ΧHEV@^V뷣>~sFߎgsEeP;\d>#0#Ո_3^k:gDkտ?{nrOe~ǣ4>?a'woO?w?KoqO^.s` <ڄ=/|?}u4lZ#lѴ|v7z!Qv)m]pds|.©Vn{h8܃{̑q8 9etB+J"A*j͕,Hu3$Yܓ KpiZ>N5~"sFU*NDg#Ya.{mD& |5Ntf/ qPmו?M׌;ª!mDi;,K ;J䩲B!5 6@k94R4$t=JC"φ>X9!8&w1p!63-Q{4% UD4T'Q<" 6@68e") \Ud5ЄLJvuf|:O=Mp/|R PTAwem$IuyE1fۘa)qEl *%HRDVeeE^_DƑҵ$0FF3cy@ (3zAr<+%Oy3}2`i RpeߦifAJ!#ȇgKꢪJq$\@Þ簊&FdJߪ"\B6xјI&rנG;$^WjטTsZq͹n@b%31!L sDi-g5vDk#E~8m[&ZʠFD@N{K[]rIZ &Rjvy ^[GH(򹖜. 3MH(˕z"㕛<3Zël)6j:gL">W4Δ$r!P<.Q'l~"r DI8_luGm_0td /B'B?8|<˲ >.󫆾ON43Ѿ̤h ,jߙp6j[1.y ߵk6k 5ʁڐnyf˸v+m A*!O`]^akR*IQN)93ZdZ:&O.] 
jPF~5kt?ͶAZO ioZaaܴ_B?gby sR 5(o+L7k5Qv{u ,ZٲC*&T0)FF)2=S&e4YǥU\jduDQ0fxQmGs+:2," sGP FR.k%)$lR93Ik`fկN/3ڀF4 |lA%gC`(Ҹ.h7o:ȫ`$ :ilrK@L+<: UrcrT@G䨀"",'#H)XȐ¡D:/5DDՋ #z!5#B^Lbb1h#1bmli-ַ~cas[]46o6ZCs8,Syzny$Ugv&ERx3,G28CN0PCTYE 6;6`#*^5*~Eugܭ+x)z*ɽ]_/:ce2j,HԊuHk̯%L$ 46[ =}NE.^}+"tE .ce #0v#vVR;>cߍJIڐ0uQP&[2l:+69r&"_^*PÅ/` Ъ,lyX Qv[^HÂ7TnڮCyx8-ϫ;j|7+A3Q3(9 {}VQ<2(X1I,٘}RW߶R qՏ2_V$|SˌÞjנ"B%,%J9SVrs!:4F(T%#y- R"*l$VLz1AhF9&hQToq AHRV2LwZ bZF tcAFs+%f16rTtUi3/Wmc w-twQ[^f.L|nڅTS"ג6Hi* ѼgܶTbIWUCJO;; NlHQԲ{JP2fծ{5O oVytU]y9h{;).Pqj*h5֓9n˫2|CηFxUc ֨T%-esz0 s+?|&)(RE9qpo(I.2%GѷWI}Vw`د_Ԟ4'{0Ii kL*Џzǽ"VR+,!^"EuZ['rC 1ه N)?NE1^ ,&TJqw*{`'\ W^nK]ФQ`DIp3c P ps\)QT4a/!j7a O$LHYEB"B@C0DD\.,Rg2tK0))$-N┷(a窍͌c=&@3oVҐɻ-4T %6WSԕ 4s=<5f<(Ĩ2da1l8)R f n5<ύcL{ULᘝXe!0HkP@DR`)4Γq4/b4,xĜWU+Ha'i.r$Q`DJ1a܀#*ʈ8WƩV틾XbŚDԃ@Qݒ]n&pl=5:xZUU왳q9J='i8Q@jJ8Mh pmmG7ll1k[&Ve }`yҌ%WnpK6NѰn)^Z3BN=_~8|d":|vqS)IΘV9Q+$X-H@§ΰ>ލiaxM'YqW`+n}y,d?9S %)f8tN:/OV{8.yƅ7ܥ>#4!Q{x}2~ٯ?SITVl Ser .T<^ҴqVWeeG4NW 2aLݛ3ONH3!Lυ/D#{߁7ݏmCYklURUU_ΉuO(bVT2~5~r Bea׃]X73Z7+66}UM}U}9IOb7d wz@> W Lq~N}:wh_ *k.nVָ# ޔxvD-"m oWvxD*Ku4>S왦#XK疎'HFItƮ%+:h+_-|&AgP iRy@%mo/4s^ǔB Qa6#46ANCsTsX 0W)dG2Ew|s(vNYo.{o`E 0l`Ɨ2mOe,95:<y,![nu '1beP#u\?6fޓe+Cq*L}]flݴ *t!K[frji9˲ .Zcrl>u;ݻ #D3aizWI˘m} 2|~=F?ɔ1)Wu,boA= ΠpVS T{\i$5J=e5Mؾ?#2*ӂzTxk!`2rH8%/kUR΁Rn&x92JYqGk))$nu`mlV83dS:_0;S߃پ%M~ô.}@̧!n;;?$a7r99`~|n7j/;QfR!2(PRJYș Du^W .7v_>L޲Z|˖'z{SlyoG4+F"2R .$Ŝ zl̾wGiT~r?_xD'Į`ɰD.ѧ®UR]@vE)X [4Þ L VvϠf .vB';4>~ќIyJz'aW'#0n"cJEy#=Am9,)ӳ zbv0rF <(z5C%'î5DDP z;!v>~]*+N]%j%9vvTaW/]Qı:%v.O]%r܉]cgWJv CO0S,p#JȎWyݵkYH먨|E>hh G,L _ݍ$@EgfGUz)L)N=;޻&9]ҙW E>ݚ@yq߽@Jݞ]!xww,ܲەN;OnQ5˕aOaƸ֏W1wh}A{ߜ\"w>S@_L'g7#T÷z% LִwiJF9(EwmWOJHtVTF\U)'Lmnؽi7j>ק+sX4xg>֥/6{Gb:0JHڹ Rڎ(9ٜPE6%[>Xb*fu@d urWj9в'C'Lf 0kD3$ƖjhjzL-x~h`vən}oo3M- m*F)]hm61e&dK*0&?!Kh*èiG=U7*)$(g@xhJ} ?7YsզEKG:SAd*|,ťſ!IՅܽOΛ4Ϫ|Ni>DTRnm5ljxIl13 *J] Ţˋu8ǖژ[Ck+.tlNc[pf4giF}/+**/( )-ٰQC *TD@5Rhٻޠ\}Ms` Ò` fWH;i\2gR䍉TeBGklKZ, |FJH]鬇-P*ӫl51n2,Xm#d*zSJ4/9fH57]Ґ?ȁ;1m6R0K]b |E' sG!#4eESY'8TR 3r52D5U r 6lC@WJokLEwR )tiiv!^B!E{֩NRl eByсmu Rq*RYFU]%׽6u3 2+=W>XԠD$J(k! e 1P =(aL! #A I<_nbA\_1.!9Yh>3(ԃ,0F!NH2>`d¢JrC#QhzBbޑ¶l:e:B4Dz= R DŽR/^8][ϯl(9=][u5_EVø]{ӷ_<2&2)M$S!MPEhc}w;+_o]ק̟['w{'?Sa3iM> ^ "w> V|@E/> H|@$> H|@$> H|@$> H|@$> H|@$> H|@бQv=> 9:{> ZtD> AH|@$> H|@$> H|@$> H|@$> H|@$> H|@t> P<\~nW> ":_ޓNf_[e>k6ߝs8C -M7(\_[xq߾?E|Yz9 :=}t'\//+xtY|;<;wJS?y ԤƩJܨ~*j:o 睤J6>m?=N| j . o*ŸjVhI3j+^DxR ybCĐ'<1!O ybCĐ'<1!O ybCĐ'<1!O ybCĐ'<1!O ybCĐ'<1!O ybCĐ'c5\vM<^Sc\VcސĐwsE.$> H|@$> H|@$> H|@$> H|@$> H|@$> H|@:G9Wϛx)z]Z_N{ܭgWϰuXm)ؖjlK@ۖRlKGy5~EtjMttI ]#]Qr6;oVCW 7صQ&/tutSPAX] ]1\uh=tbBWGIW!%rtŀzZBWV|1(:BJ _[y.]z'VCW טQ:+t}Klt8Vez$uqhWv81]#芄:ZDWK%n:2*E ++WLa-th-:]:BڐY`j~+FҡLNʻ`nSwþgsvB!Y{986ߜ!P^ܚQ$9;:_?W}"޲7qŻY czy~گۄ)|Դ)RqnTGiz~oѰ[âe s?_U?* whdX.^^HRaUmJTc4zWwK5+o%SI{%" Ls4ݘ\Ui*T"}ΔFHFҟf.! 
5YkrSmL\Ѫq.<Ꮉ@[n"+؍Woׅ~/qI&zdF(ټm.ίvomS34i3*R`뵹4*GN֮cR1pTrGYɑN&Z ]1\oBWk8 ]!]y++s[\'g:]Ux9&u(DY|ը+oCxB*8z]1\BW6)FE]#]{֭p\ϒtʯ'=_;+~n7]==0+/tԡVDWU\ ]1\BW<]1J 1ѕQ+uF6|n{.޵47R1eUو9lLte@dY]dc%[/˔-WL&d D~@u@Jߐ"BW\ފԾiTՏ 8yĸ!2w*Oǃ4aHu|>yh~@>LC>2$,'%9zV!Tߍg^iJWz5 vVx fZsut浭5h(ƫn8b7a./]6b@o%nEd Ţ_n]=-?+"cڋ2p׵?_c'Y^R9NfoyFae֤p7lFt.38KݲG w\ֿ{DT:͛x8%MFiƗ9KG`^录i<`..jȭHJ$u1K!*<Wp[Gc@=/qIDt5n-e"\8 #hmѵ4F!cA$׉Y+S}.jKI'+eÔXl~i2cgZ;|u9[o;[hDC5˄Ψ^^ez0\sGwQn:1UQ]_wj+WeJڭ)̀Q+3l"?}kϘnǓ]LZq$t^(ER#ɍQzd%-ʀ:9J ,'=q3EFjM/Mw^{]Ѱ9/6j"+F*mϢybyRۓ7ǜlC#4zC9jt㭠X,<^ukٜ>V\6s{ie^wHL@rRV8nP!@"0LB(t0Zy*mLj0 ]_Ė8k%-`r7pw2%iYbc~7|GԮ.iMݻi;GmoOϣϿ*n[9n!z!?I*6ƢcP HQgsѻq >[\5',]7Ulw CWzRVPG>$H4q.p :Zk hU6ฟ1/R`8|}A3 d-}@hNz]T`*K &L7"F)eF&utHʇhm=@1:ID9?i媴v6#g;44[gsQKZ'ASwmvM d[V3j0潸Znv<@WVc#x j<kⷢPb| >zJ4Mh|=vb8Y*3[ŒGa2^qeseGwRPhn!f8bKI Gc.b"G& V\,*%G/)N&&֊ cX8PP, n`[Z"Z/ z'6#g?N\צgz(Y)$FKj-qq`ߏu} o8Ƒ e$b̈/OYKFVBakT)BMDGD6x2dN>:΍Qq$.iHxMciFS¬/rNJ( }noU!1[$$Si2X 3)B6o d᙭0#_ |@.%8oyp.mւcq) Z {LpV9t尿a ~i{vX xQ}uorߟninu=?n{w̽cJxCϕ`kLs%׽eZR\\)yݧ?8Ϡ6>݈ g?H02h4ŎB^A,XX*0N @H8@)m;ўQ")Qi/},(M,4Pt`wX}*d$Z.֥$AFUJK -AP~ Kz#!7 -fRǐ-6_QTn> .ܸX=b_gN>TfL._.95k|06zÄ@[Dh\[mE,InM}rj%Nг9;'ay5't/v狾LꙊ6[l6vһ^fQdt L'i e487\#^Ƞ^d>{M(2I<%#ΡN-6d) 4vYsB$Q-HhC Rq&HYbc`׌BNYY|0jح/la2. DZd_#UyFlzűGGSIU9 Y yĐOk  0irc~̼TΝFy12ooPL:Q@!9-L 5O*HE֢ClkdezYuq~kQa2 ah-|ˮ95wG4YB}\DQ?`*nVq'va?q0]דkoy|lf6dI BIkBE9 (GW JQ )k{}M5q y=v8E I''GbBFW S  )֚95c;L6cu!4օׅ'ՅKئoٷ춭-MNyg X?w .{tȰ$$ ^쭏94$U1tt)(;UNi^@mRɪ6mZіz܅#|oJʅ'ZԊ;^p<݌;EkC{p8#,Y %y{- 8Ȧu&bIdk^HoL@66Y 1'-<Ie2IGх@j${ҹlCh_vF"*sшXhkDk^#<NH 0{U+ڑiB$K?ELi,1vM5wscI M$\+ZꙈ19&aUv)%-`ZeFlFy>u$8as:'_g3.9V/^/z%PGMhآS!6 sVX̉,cOE6#jG}xQD'Pa|}MN[#z6> yF>G?NяRB0ѬG srSx Xa@zc[%txxG}ή>:,s!BPӝV:XV$T`bh=S"?n> = W⃋"B'|D rG-LѨ@"^>GLnmeۂ^}U+U ܸ2 |dL*}bR}A/[Pр3Tda>`aP0I#":#-ze->Y:u_IwP -fL +ԻЮG?^l{JH Y9@d n&, i^p/@,:@"`%Sxd1Dx$kvңbQ1[GʀF%".XBM &Ťmtz}8"C-Ķiل8f?v3˷CJl^{JGKMA;LmJo6E֫sMQM֦ІS?*KۃHyؕD"`evxO)_c䨰N;crvNB;YvV'#զhjXD-2JpՄK2dVm_*S} -*xp7'?[tToJp;9|;f_0*gOHxSXQNhs$\5.M&k"d5/]/1X\ %] fdy/qw iVxyV7)`M1nWWrYͩF.7l`9JepN|djôh ͕r-r9\-Wf0,_Q8 S6 1G|Qţ|3ΉA%`a^{RY`RsP0C+c.,S_,!&i֒ aLGip"+dFSA"AHK-~< Q-%vºW1Hω`9qZՄ[`]kG#b"9G e7w2_GkjD#C*tTE5$1A\J(ǠH2fg iuT =,aHS̎&mvz(&1(o54H'lM $&AAZf КDZ:=w!_3],Y0:׾J~.5=\r1e=aQ@}~ũEOq"K{U8_Di )Le_P.F &>8''Zpt_@gih{սoHr0.KP@I5\B q4GP] |JѲߏdRN~J_sWHDmVK=$;qaU ׁ1J$>ysAL/(Rb8.rfՅ׫-`v 1t޶^H1~]^[60^ B֔$][t%:X>4-x4heSF-K]Tr]j/:,Xxճi*ƹ@8Wo})7˗* ,|-Gŏ?ӀH.%|%` Jm4 E^QMTQU#㼊v $<~oݯo{2s~ߠ_.R- Xh<%n?n^4u -FlP4UoP.WuVyMZ%DJ,7ŒB ; GK%<_b @m5}}tUFOCFbtt)$x"2:g1d+@&AdNS O'QZ$t6s/6,ȥWJ5qy9Fk̃H, YH͔NViL_ IQxu{:L)GjJ=L=|߃=tg: 嵎:*;UNv%?2Kz592a&'Cb eCSSHi#yL!A:6As:lJ`Ǵ2]#edzflƘEzai䘒1DJhKѺf~oa^nzu9x`ƠugQZ{0 :T&b)a \TgВ;>*H')͑2#L=:G2X \3YrNl$RI icҙEӣnP/UITHI5c:ߓ̵㪗%Չs6Ap(w~0|'1^ µKsiu'کnUi`&>W;}|zѶ2:9Ϙ<@@f,kFE#2s)$1*2%5!E FIr5;baĠag歒D`Ʉ`H@`A6DVL D\NCN3;bXZ3Z ȝG8NPNt[ao1|p}SSfaux@aת;*5>= l}f!)g;^U`(OaDK,$FldA$ꈧO.u{|RO<{1JR20o0Q"(+@rBa;PR<0. 
qܟ\(q9Z/0jtb ̴ڇ,fE F#W*J643 \szg>.~l3'Ys=[?_uQvdxB`xxX(;.:')+4)ߗ=kի /oGR_7bU(%jGG+SwR(ܪ ֭FOE+xHHc3duO-Fʈ9&vϋ«/PxO!D_<,RRUԡ`QOaDE_6c|a\enծ|!UJ*w"6[½۳Z@k}+~˷͏淖i"ixP#6M<#L԰h .^ةw9feAAԱҭJA_ VS*+x/`X#>ni<ķYTZ .{@<͎!BmŘNQi4OR3k8Pu.ŜK' HIQQk}@?|Ǵf\ %) l~&Qpؾ?QhaOv*> m (ELȰed 嬜|Yv,ze…Re 8MfBwV^?;y,/j<ܻ P eOK_3f1ypv2- Xtnq^ކ\&|P@w|u6m@{<{=\X|&~/fz׃^Kˌl]`.dk JB4e]!]8e-+9k ]!\!BWV5}+mҮ"Ѯ.m+DL Qr7CWzîDij;+nj;Cif -JwtЮ O=@k+,n ]!\BB4T]!]1,W-+Yk BV5愈T]*+ۋE?cCr6PAQRGimPtҕ26m!`-[CW׶f[rFNWRttt-3ܶ5YL4؎UZI%rBR>ҕV rD kt(jJQEI  \)BWV7~ϫD-"#]ij-+ڳEpuk+Dk(j0*ZE`+I[ *xB3,ϱ]]`ICWt(;n`<vpc lvWCivnAWv=5 v o ]!\+BWStBvtt8Ү+kJšNWiw;zc߶yr2B: JE;GRHҚ|RTLxM'e7}BF0&k%_o\9rſa!Fb؇<Ƌޑ-IS-Y֌ݶt_ Ɩ[-!Y)Kv,Tr*7Lj+9UuS;3]փ ~\8e=kǕdq<{3`=U͂+ԯ>Rwpu:D`1q\\;M1jiŠ.SUd]W*8s䦯zSڣpji7MJ7\ D`+egg42m7+8{ =gWx>gqLn-M%*1.\?߻!Cos&ӱPoI_xR^6\ \x N>N+aro:A\Q`rZ>ׂ>jUв( #h1%g}2bdڋڝLr4oP~UUVTle)/VL2*Xh\4MfR7\ 8:;ժ ;?"=C.JUFqPXNِL+e\`׎+U)NWZ`O~\\?RaJUpuz&• fW*7L+Uq*ӆSUJy&\Ap2ܯ;͕AUq*6\}3z̤׮ j\{p ґ,SW.pСwpN+%óJպJU9m'PpL4fuv\JJNWVxɒƈ3)QĤsέjEM< }[BYN\~a췭C>fQȊ<4"V,̒|6?Qڻ7]%/3zg䠖}Z{%wO*өr. Ƹipr|2eJi *L,>L+a\Z kǕq J9 4R4]6] W'؉WL_ဦCJ4x]`6\ 3+;O1r4KֻJUoWCW 1GwW#Lm8=B-pСh,E\OgrWW*MT-W W'+d;o/Y&w\Zo֎+UE+}Aq_DCJ4 T-qU dF3ڕuwW;i7+3.j`>2K#]\rmw\цeW*,e\A~XqS6\ .4r? TskǕ\i cJ38]\g wWrm]m:%as]MsҢɝ5Ftݘ;s4Ϲ,9&yCϚG:Oi@kuɹIpn[WL됇 WxJj+9LUrX%?8 w! W*YpjfʵpUp LhYǵJUl:A\IH.p4R4M1j׏+U$qŒt'{ʵܒjox\JpuJ⭓p9M+S*Ӷs3#!7-erӑEjB+Aoz[bDReWVq*purф&• N2 W4R6W҅ W'+ >Z=oZ mq%bﬣW #_]~T~;s,cldK]>1[q&Oy:s*.P-xHzT;A82w7?Oۖ>ϟ^Z./.vw/*}opxv9g"v~A]YQo?D y!9A) 6;g9u>3exa߯лG<"Ww6O,?731sۻU`A#EɲխƲ\GXVUW_aJͲe18KJ~GJ끅Tyʆ  ~"\Ap$? Tr+Ԧ;y=WSUH6Z3TsU4 6^;Tl)Jʹv'•={-hR7\~sH/rOoZ]<~wOB[ ћyO7A_O.+дoM{F5E_>~]{f޲7/..k~EЈh+bv/޾B|x_gwWݾ&ynQM/H.* &| W|b~Ǔ~?w3n|O^`yׯoKmOm4⌱YACH1ʆ#Dk㲳 „&AKh*ƈ2j9p5~gb1$D!@,^7'*_߽ȵ&Jպʄ֩ (JKN[ODCH^ VgUbz>kC-8g:RIͶa͠ɖ쌭QS<$k<л9Ed;{cGZGwH}f>e Ȩ&SPYćkBR@$@L#// к$օ&%^O6X:h^T+Eoh+;3LFrryT-*3ED k`]YkpH! sIRZGvitoɂ|Y]8VC !z)-ϰ`R< ^ 4sPxiGd!9hD8Q}JhڕȠ-./d0-#1j'qgYJu\Z}tgO6nhJ.SQ! |kXٰQGc5hW5w˽A,U!])!U^:puTAYT̻A6p(:\K"m:X <ѢEnl We5XQc.#k) tXGO=58!خDl1wB#7ê0pƺcۛo9( ! (!2ᢹ@aLy>:t3w+N ݄cb00U:T߽Ԛ4B`ɨ`~_ 2a p ZCQ{+Ttgac( Ls$e丽bQ g;!JP/uVjBTBA*1q/QvuT@o{m!3 f^$*{XH/>!`^P)6 ˒ 1P`׺] ҰK.vPQ'q%P4yApB+]ܩQ{ۋqiIb6|Br"x>3U}bZ;L'kQ /q}?'LOp?aOoݖn?ׂ`Ph3_F] hmFLf3x@\8F<'ætdUɬ7ڈڴLUcX}LC]@NbYP|E ]!x +  I"9yd^W1P>X.ctXzOG3HP$ m&pTx+,@f60Ģ:Yة׏LG}^ yǦ{'¶lzezA4D~.vy}]xvy'k7%jU'XqOXk3|lDtHc.x, A Pw$80>n057Pٗ9%*`!m,Ba},gfp`Y -g+B}Sptdp;=zJ8KLC@gI%nEK .D6jgC1IXU\< |٥`vJ`xXj "!2.dד4\NF 4`MQ~5 ߮HTv"DoaL`@ՀL47:٤dHlsXP)X˛Kjj4Bl$. 8=}nUh}NҸd}E.ozMy03/("n0I| dN0s&EV{/ sχu@`4o,fUd6>gc.~4ci7Ͱh]ܧ2֋=oN_4'~6%r2| !Աx>hsoJ -ՆF=D>BS?B7B7+-sz: HD: t@"HD: t@"HD: t@"HD: t@"HD: t@"HD: t@/Ud>退v:?Yζ6+9"u@sID: t@"HD: t@"HD: t@"HD: t@"HD: t@"HD: t@z: }^G սt@/Q t@"HD: t@"HD: t@"HD: t@"HD: t@"HD: t@"^1i`k{Bk?tZ^-t@"HD: t@"HD: t@"HD: t@"HD: t@"HD: t@"HD:X/V63_*F T3l̛/ًt|w2vk|3cK. !y=}o˷+YG_ztHtXһ7VcSE,7j$DX뾖B&jG͢R5)y(uƑ>^Bb5RبMC jN>g YGF亗u- M6*1D=f)mq&L[ Z,UU-Ni`,ѷd J>'orڼEB/8h0er t2_ܚGc'w:J=oz+zɡ?}Z=kH%FrZrUz W`7pJBk/<\@2\?\\_  $v"J\? U/pjupVnDpr }B@UR\ݛ+Uk%WV^i /p`4VR1m?;\`g{fvu/ksJ~`){!zhser=+4ةE_a> Z z9p%pC͛)8Q_@pNKIoOMFeeJ묟Q0bjwoG'šYIb1Xҍͪ&D}Ss5*TR-͕&m\_qoϽ#k郧#p} ZOKMtNGy2_^\3ANkI:~u7^-n[%?J[Ye0AnN-Hu~QE[:rж|ŷ_-w;}n7,PV%-P'µ-=2up&r;S^xG ? 
[`H;ߝvnGەX[#m54ΠWO3y[`{C\BZ}%ZiH֤Jlo ՌZnʭd3WF~bdPn iKuw V[q:p07DΎQ,hvV@\v1~Erwݩ\v|F؞Sy]ga>ӫvS·yx5J1BZJޞ MIwr Sp<$ gNr;~.jO7c(0*y-m.n693nn]̵PwC9/b'k b~>-?-& [y&km~|4:$fau&LJ0~e8^x#Ťzc;ZijB^a^2OhWlTJ f)=HVv:eM cJ:Owswue=-F9Mjg梘xܲtGv{悈"Y h: 0*sP})9*.cN!Fϕ*\T˚y1x0vf턱% {!(r`ߛB>2uƫ^8웷{Gܸ#.G-ڮlweܾ}@ՠxMF*^|0*%e͒L*`(ʬBUbbLo"ȑUN¿5D5P/RCإ:GkK]Gv]s@-吝--&azѤ#gzc8z~Rep}۬db5{ʲ&6`rzq6+ݷ;">9oפ[;\^$'k\ ׿N }`V¹U‰9 քrHOg6ikɂWx>N8[s12ܫ8$<8]*ؒ 72pUR]#cg܎qΰ3x(P>)-ºϑl"Y; 0CMVy>d2ƚ|$ĝN[:Y4{! OR lJcQѨl϶BlMMd]Ljݙ9#vVy(ݙw j+Bmb<^D-K*skn bZgb U.Y50)B966@9D!ر$Y,`M.,e2@ sņ1vfxOe:x(ؙ<MLjh y|0AHT'~͸L9~fPٙd#k+s Y%5>-lٶsn_ fIyk"' hyug;ePlW:u~FTa d@SM4Zlt5J4h*;l  hfyˠ:wsm\WR+$ ,,kY!QՃI˝ږLx9Rr{ݭ1ok 9Zל$BR?ornxw{М-qk ܹz}b5ﵒ77^r;u~N&{[]Kn|S",--|z׻Wjإykumbs G/?+6rbsc±Q_ŖԵ1"KUjpqouVwGONJ9}}O<%dGql}fS;pZX5Z3w&ʳxU+ 2 Y&-s!W_V|LLR;r!XxvP2:T .(%v,fy͞S:3玡M&^#7H;FK)Z~69]qunG졺xnIIrpκ; \LFF!GFkRgZ6B˹(}?Ӟ7-cu,K(%qwE[)[N$L˝C¢bFkb|eÅ fJ.H<;"E Rr2A̰4N+!H&ȭݕN°48P<bV̺>RD81Px WCwAj~UIR-'Y)v5=ִ #ymUHBHP oaQPSc4:FjޭE\GTTPh%uL;$7 PE!)|k7uyiwVq៽a F '0[a)`OwKiS7733&CVqRBz\HgYdA< }yT$O@HD00ctA LQ ;<( /@J``wsiC(XR:..*l"W1{݂\ wuӴ°Cbߌ du]$?e66)[hjZ^oU_ʱBtPZA --|1I8},E'ON0,qIveLujLkkilzV_8^fL1KgRH6 Β~8巘A1z 7.!byLpT'4i .}/'h애{צj˱x<:|~M "d%qp6J%ﻅ *KIv/\Gvd~^ʒeE $U_`FE W?TI}Pߪ!g$!yW??N)&}^VuD&ᗇz׺ƲyT-&-!7~5BEWVI:(r/eM1weֵhN:~ Dy0Y)82ViVAJ!#ȇ6 uSH =aMH ̬"\B6xјI&r`G{L*WTsZq͹i@b%31!BTsDi-g;#wO} GmiۀVƠT@N2#d+}&1$$${ Oo |Jn|2H_xQ\z.FL19F/ϩlF.A8?~0j-F>0*+ʄ%eQ,@|VnWYX~*e/z%wa .Bڤ_6R6½MZ7jkn[N&y1-ce vqwեGI^V=WL;`w*3/|g: JJ!0Qb/2=A1xAx2On5r׋`=͇!}Wo(U^75\0|:b gKy$"kY7 0etJu-h6/yg1~ v-Y6,Z/^ǚIYAs>U/W+%5*j3b+t@Hm'g)O=hQ b]c;C`*79D,Δg![$n4r!KQ, )Kp3HpJ_,mRwՒW_߆aE}.KוkJ.OhM9 i7LAAEbAՋj6lU.mN|" P O/?ik҆/IQN)93Zdq!-|aUPVnEY7/t.q0<3еvH뉐B!1a+92MQDAKM?%Q"Dˆ^#ˆd<)n: p+&ZH0<mKn!FΖέk\>\~ ~.*;Oj"lͶT4CWk~8jR1ޗOOԻݙKh+8MNEyS졐,G28C PpD C)F1Zle4wFm6*GT K jC0Ķȹ7⬯Skl+3sS:EUwY4˧,9V [,G+Eג#w_=#QPGH@FmdKbmozf{gdž9,v̰#Y攺& Rw+b%%RDK SX]妴Otcз Emyʼ_jO9j{|~8󏨇3( :yg} \f@@#5\\Tta'!zw O $,HYEB"B@C No ̢F"57kbǒE*L<'LI5$%iqJ;Sm4$k 5rn C>IGs\Q:O yw]?x槕Z^ֽT5A ?5g)OB&}YQ˔"HMŰzDrb4HM{:`A8O;g4|N|mcF8p2|r;b s lf Y)BI IV$O Һ8Ӊk%dzzV2N$Pi]Hzz%b G*U9ql=̅w֙d֫Тgmn#Nmͮoo'-6xbG08qe 4V( 5% & rN4&A)DFwbb^zOWc.cp pŒ`y֌N>~}.zdIˏUQ˯NzfQ7CY+f}eaT ;*gV\1*s%oՂ $6lE'iAí~{W|LZ Ta^$eFYXJb:/$aL-U Ȣp8 \ä4go~VUG>D3L{'^*ET%6['+S w%(Xd[njSlSCf<02^Ya%J}czhQΈ_aw?$o[^%z,(>Q$ezg&~nZxJݵ*u/Y73Cʻ;uBӶ[ٳo_zSn~bX 0m Z #B\{-y@nwbjIMy{uK_Mޓ?}&8{3/=2nPȽ!twnS\}f6 zc VQW]yoVYWZth6,ny6|~$&nu^$aoǼ^PTD# >07t4WGykY1$"ڰtZxꙛgZH$ZQwTc,RQE'Zzi!/t6~kTjomm\1M⫡j.eӻP|E\X)J)=~j*of?c?~|6h!ښ@r c UDAի;[UVւe˦(V3{pʅ-|єW .2pvwmc e & 2qc`^w-9m [6xlD^?㝨SqTRk^h$T3ϝBPGՌA8 R;[wAaGS G2MQb(8m<~1_V]OdV2)90Pr~ n4 26V# R;gނN;|nglD./D;^ ޵+ٿ~vL3M`a %x[-ɲ9%[Q7GVJrt\ K),-8KS_=sҭ̣gs{ .yxf"i_MV9HߪlLi7LPrbҲ^գG02S^gEG-RQ Lg".UVMiq)K@{Rb >d0l| q6û^i. qሁxWY"|P{摊 Ol^IVx Mr`5T ˢ` -KJB8K#/l!L&\KZy~9dki_~ǮYP֧wՂ501=^{Ͷjn0÷,SmYT7izqDqX1sc=W-4oGy>A=ey 昕'[C\^ƘJʘuc,^ށ[ ݳwCdmOSuy~|*TL˩&{W1|+ h|.!<㻪*)B Oͅ ܥ\@U$rrOX)e+L{T(e²BW5ɃĔ|- V .*:j5O &%үLx]`#[1KD4y1WZh}NtPm!ADZ:YM1NQ 6$ XbkM2DVۋk9yyhxg m<`ct}\w'<\넻c3isq́CjPT21QRҔdD[ݣPdDF%OTRu;Ǎ9>? ZؠßXQ"%ZBs]N@qFP@?,C<cKl;ha<"(d N;b<T^41U caч)GUzg7M9ݪo>떑MQJc1L2<6Q/ E#/IBaDknv/l }>$<.ثu\{y-06P r!`-jCV4h-5uw!J.[Wڌ%4$+hS 4#+)gAtxn ]\e]Zj(MЕްvpJ~X:vt+CfSNe 7J)th;]!=-] ]1#% +Xs誄Bםd-]!]q\ ŸM{}wNn&n*b A@CUˣ%? 
[binary gzip data: compressed kubelet.log contents, not representable as text]
6T m|\ x鶖Qnk~.Q$6= m-}s v_t/G^mZ 0~m]9!OXD&8tqlϢ@# \欙=G0[7l<ZOY6^t|7|gaO1j 4^h@6-)ɹ3Ez|wv43o[;O+O)xjN{g/߼_pix3{;w޶wmzzg 홃ww^ٞtw0q_y~ysROO{g?G^1ۧ,+ixؔ>v/z]/ӓvP#_t-|΄NN mk*;񛞟ZO;Vh<uNv,sv0:=w17̸ W2K3:"zZaL} DDFı QP@/Z.T8o/㭰(-Aꨖ<lm 0dW:rjأĄ0Kq92;$l:dͭfh/ļ?5+vd )Pd,@`.(FZ¨=UJe-o%`6N?JZkse +-L m)RV,D;e)$+m1`"S9%d>@P p&"&R$^hD$ :g$K^@S x:::\_ ڦkA!ed"$f(qE'@r(UI șd/\q^D%s%8bJ旸v4P cx m&Gr!08Fёvʰ%`UA2UĨJwW?)O * ,[37B`^U.^NJFtk" 0[XVzB:&e)TUγWZSAEЩ@a0-^POK: 2 @/ pAKpPliB3%x>,KQI`J+<;\a\t8J`f]H/0Q 4b9q$fXQ)Y .F*IBaԫȂd)*tr=$\`Lߟ"L}X+8j>V/\^Ь)k]h]koǒ+?GwW q" ~t$Q&)?}H5C")6 c83uTuћ#W)2JFVNgI匬":0YIa%|טEu| 3 МG?E$47߯\c>LP`fZW$/q#wCǩX2 )ק].QYÁ啕xШ糈8V O8*H[scb\PeS^Գ DGiwP+U'{jpԛ~t:|勑<'?Ԩu菷O>ht 7W^>!xы_yףc?:7<ѳ'>9zGZƵFX>wzVbKr]_G^rʾlω; S R}4ONl4?ߨʇK=nj& *{ީl-B^}zk zavG{{4MO@gğG<|}F?w=9>诲x1^҅CNRlRَۭNv2Gsk+`Yë=#%gE؏y 䧻3d84Ʒ'ܺ >:&_}; 8)?˧^ۇX:z9<)[_ GՋ٫nw0F֗A>5t''OW'8{{2?o[^_~NmO+,M:#M5y^y_8;^_k`yŧ1{[On{pZz SI'9^}#%ձdp2X"3Re4z?*“H*i1R?#x [D:z\\7_K-MM?#0ؔ\^olUp6l%˲\n(lKY(!D9%$ɂTY*9BD c SBI1Q'a2dWbBe %*{-TPBe -TvyTZt2_]x&P 4{gZ[ӺdL*e[ōLqqlkǼ 4l>=XI#cX`r&6n>:_)i(r2BfWhTsN1@4Bn5a:3dY&ۭѮ8RK{R[Hm!.N(s; İ:U\^# _i/leRqg1ZfM(DӮk@~|ǀ 7P7;a1B 4ݜ;UAh$aAsDX6+A/ VPC,*[.n -Bp ]PKg Bh/~x.e< V!YW )Y*9@Nඉ z "ȏ9P1h#Oj@oUweY]m= nD4O0D@ɂ e 0\*  Dy-Cq ,f wU)]"މ+Bp -wyw:@>$'S:j׸`P콨S9e8D nQR-v@C ` (5Y/f BOd/{[ӓ1NdFw[5|Ы`1"=NM+6[{gOqf7/twד=8 =/`Żѣ b |7azpE?IVn6FYy)ɽ*eP9KL>[?2ُ?P,BE(P[}˘H<#)syt8] #yp\ǂfXY HE2 L ɖ܋,x-1ŠsR'F u?h/ܞrEO{ݳ!FM(=7~ D?;wE sgNeI'WDKNK@D-Qrs25SJ(Vu ddMh"h+Il^"Hg So-o*DL2<؊fɮV4&BOt^j 0mEf: pcz 6վdF=L)m1BF#iH,1_έP*ii$)J8F-hU먒fVq&Rd1MaTWw[&2۟)!DZxy e ^j08=sIn $B%IT:Mf[ŏL([1jCݢ=\gp'z Jq=uy@ Rz(`WVX6IZ<ƔLU\fC6r3}50_ ސD`׾u`4< {rQDgqw)Kd)o^z׿>8 9:$ৗ=ru{TY? JryjxR{z"B-cgvL0,'O:靼"s&)P⮹ ٻvi[\cR;ʿ;Yl*iD5p)'NPQxTb^8 BzAߊAxk ?˛= A \3ŴXtk XC:֒]9øRKAח#aSY ( 9p5޶.)g8!"ս ڌ3O)/N-wXʋXЖR!-bTewsށgiDrZ@93h$bMZE $Űs&Oа}Jr{ЧܺhMpiI7 (@^c'61jCIIoeAm6CeU)<]AB|ƒ *<ƒǃ x@5(4Umժ*ZcYP? Zf ^E$[=wf Jt&HG&H (^hD )52'W![P:Tx.c* *<ƒ Z Q. J(Fҝ3!T2Jz`JW'R)jƶ )&:[&c.iLC>DAoڸl!]uNwu{-%ڐrh (2@.j%>=mBwF9yYx߬5ʓz0Y`w{;y]gg>͛'?J!<;_g)vٴ3}M۰`̇ݞ?^~(B_wwBNoSu5ke-}r\o9$ULuTFg(ڷV"I-0Z\VDȠC6c*=\EsDi5)?I*XN.Ӧ`/oݚ+&T$Iw>O,'ȫ'VHPo*l_g^[ge?hᎃJKO/~ 3Xag-= >u203*ҧb3RuM"9i'^}x2Uēbt2I;jO#mXT[:~{ShH.rW1ԚXVt)5~D>u K܋`~F}(YPZ ] ] ] e$.P-FY⮇Љyg[n|M̧xx)]a20`\I|gmoUd$DSj\$彙 OCF *  5+}FfȲ8EQ3I+F;e.b}T>Yu{C#y_mx R:*û|01  Iw6Ulo-yR&kչ ?I`ǞO6!] P> `.dR$Mö-%7 BF $bTR Y/L0@^`|˫qU//V>O^}5sKD>|x.byiP_#^au%7ϟoZ6 /m }П,|ozA;S({dONOgpTNr/>yAWI`NOggk,uꏌrӓ7?\\nHsHT5͹W^~r׾i4GFHׇƸ nD\~ªơ8 1Y@-9o )q4c[/)T亘oUib-G%qtIco?|[%ctUDHmŕ姲ڜ/(rrHgRhv.yAlƧT3!( ¨WIqMU], 1 3{9;v 3Z:x w^ćGA# )3y+Jf$> O9yƷ#m }/$9$\^} l%5%#OD`-&(M$f( ecJ~/ P֍H:9=};&wg028?skH5lexBzY1F1VY} ^NⳮԄ%of*p1V1%T%#g8)ԴS3)Nhؤ l[lLR!T[k^sfyKnnS$u;&{ 3F']/ 7Od$8x{!{QM^'٪D^O%b`u>W 5kz)l6ȈtUմO/# މIX\}zvdbcc"`/dߑd-tB7#!}7{eTvM㔧9d5ꦛՑe;2`B_9 ZgVp3{mӜvl9 aoݬ/U"zddk掶nGi5t$HIS9NL2:|'шު_W*YWO$wPS>G}ьj-%:#lVrۑ6oj3;Oٿ=y}֜l.Iq 16n@UISC6hsmS)U6QZ(C?z4o} %2C:q\U D3:ۻlv9}Llݐ?\nPLY-vX6 /zPmUfǓRJH(5V$8tDP:hL?%Yf9>.ՑH>Wygd>ϯEҧ.=l#զpj`wdr6?x4W-)|mi`#!CXmbD\;)F'Ϳb딛i\%Zq_w~TIwe$$T5M^tR\>!]$ wSL.odBU5|ߙ-=k1 G|V<^RR\28?{㶕/;3Ҁ 6f ᡭD-jIUԍx1 &_ԽB짒j(p2}be*l)ΰه2 O;hI\rejZ|L2p=б#̀)~L x''2v51 ͶLHsk1T\Y20M(-U8 Z< m\Rߔ4?>Pg~ GMrቢJ(d=اD7`á;?n}sw`ל%|xy<{}󕟁~6{{2}K_zv&O7wt38Ry<侸0o 0ʟLuW?. 
Źwm2lb2&\yXg1 pRob3⨿_xkG~_M-" h4W<Pglk!g+rkvC[;p9n3܌KK7g>(-6׊"/4 ~K1Y|:t>}5U|jͅl䰝JyI_.Eo#2ndpswcYTQ_ѝX'w>c pJhdL#*’r'¢j4SU m?[J2g֥Yl}{%,5 \c"]*OlAil>E͓ ah?eVo >կ>d~3x-2y Vkvnif%30,Մ3cAHA1)Es xZPa&ro;Ƞ9a87(9p m(·(nyWJ#!=#T#j6#TscRѝ|#-CL DPI- =eJprl\I˥b)RY`Ւ8>wJ!3 MEł0X)GLREcTL=xOwI[3>iĬ +(N4!J׿08NCgט)ipН PG9Kd^C^c tJq!KOkbk,Jgb[:Ut"v^ئR;$XQ5 :̥2IJf[jc s+w;Z|e#SRBRhP{l3JCL珦Ng*2NH5D1yk9!F-]´B&TЂq ~8E{SK\wX,G(8)1N:ٜz\]bS=x"|J!$Tj'%)RK5VT(%;pբ T~$])윾} w9FaSL!^`p  r-:Ʊ(~"6Zא!B`}kh^{$H(Vl5S 6pxw4}4͵T`:#7VNvxCŏÜ~W!~X?EJLƿ85|%]ϳ(O ߓSrʐ ޤ\@eL }ώb|}\.?* J$M'Yy~+)9÷o߷R`uFO=D7x&¹zJ̼{F/W: O3=?\(~gólXk3&ο?ܩ4<$0 ֤$F#{~9U~{\/ڔBQ+rYab gŷƧa9pn3I6iP]4x4mKPGW: /+p$$߀ zQڰ*^WsRFY{׼ ݢx^(gAx 2b{s.7pp)U)>QΨ$akb%N0LH[>Fm>E>Uf KZ#l$v\5In5auzÝJѐm<1%~՜c*م+BUv-'nAzBiBb)E&*7y7Q͆ts@@ 0Tf8r_{=`|v~qK)^&)!TŰxEQޣG-z_ԢE)$Ky9ID6)8q˵Hޛ8vXύ^Ě+aDQ)).{`<2ޚ .OAOckBo[lb .)x;uciŁA ۯVl@QJ%J 鈹wKѨeaO1m&2GaEԂrLF"M\d9uIsq/xh\]Lf].>~.jkܣĴ7ęavUbe΍\Bi(;ҫpR\6?rUzH dH oåt`TKY^χrCkeH=Q´T )dkYkiXF`aIez U"ӲA m# “H%#ͭM{v\oUP!QqT(3Zb5lN6Ɋ03n9 ́{tQ Ɇ\ݞ+\-k:CyFTK Wt`( pƹP2\fLz  M- v)nTȫ'-&K)YUlg'p!)E`N]ϧ|E/ RXawKK" '2DW^B;-?!WTh)B5iprƉ  cwVH>>a᱂ GwXT[*OW4X%"}6zEؼt`와'lS2+E3P1%s[hAr1T.(  +4SQCnBDTRvЖ&@+QX! c!,]E]UuhsBkp . Xf/A/lC})^kS}Jz0քW=#7m䵴cS A7#W|좬L![(?ºatN>1}0Z#ПnjeӬyrd{;sM<Vf~i%*Y}j:{f2ܴw!x"$ "©H71YR@s ?%*g%noi}TfHN^mu^vlާ_&{4}A8YGЄx)r7Ch{Gz7g T'Oz;';I;Ժxk^ouAM&0?PI0IK emb8s]Ձ+ݾ11VA %e<jpVti< ,mWu$Me QUSKt \v5C^GR"mOyXjd]UneO炮Kv]]I媁za}8XNuvT_;KEOW 43\MjAM+: %fIɋc'obY"_on\~OX޻3eĒL.ESV4^g%)I"Ϭ$)!T=l#Do-nvp[T\^ehUOi,<.9v]s"HBZ2aRmn9VgMa8mBX]H46&9BiwǒDzqle ~Mt8{*C'Bv:NUBu-wxA0{4%hfZ 7W4ZިCf8J@cj0DUL˗+< ?ǑIFhX/m]̘k5cT F/owXbnC䦭ųSS`08RJTjӧܜ]AbM6Fhջ _s';elEΧ2'[K{t)nZ;p7JK8ײw niĶƥ@mMe xJd]o-\z0$qwGaz60ajOu,ZWѫ=8kEtJwLn[0^:>plg\AӚ\AG"oԵ _]Rcx} $bK]祸NO?EdsB7\(}-~=deE9GJK|?=(S@ " a IHHJk$ 8SQ>&uG:IIdN= 7Q,!yH<Ocmd0A®0Ȫ&'_X-iC|W7 = ADH #lM+> 9syְ_/dEeZjďrxB%ӪW}|Ά֕ _Y<Ф%v0'*ASL"u%9a9haNuTV\+ 8kUBpҚ+3$-2KUɠXg~Bhۺ,C%g]W*(.xKJm|֒Zږk=Fk@t_ E8F \`2[Upy2TLɺ[T1Qd5BOUZFή ] ûe)-e )ޭ>Ji>lkpv5?Z VrO/'iiඇ IJ&:pQ8={Jٰ&T {/r):aY[{rCh{Vٛk)XaNZUoDڶLM]kmX_E.z´? nNc$m@ wYREɩkYR)[e[bf83;OM'_`NiSdcw"YABډOLb=joonwN24]7-(i:eދȰ|'XHIJ!rRT)88@qb&MɜFM/+PIn\:O*Э(!y%\&,Ӥ_^1m TQ$HdFFSd%t5m* L+[N4h[{ﹰ kpæ}ƢХ}83֓H2;$D h{/p-9;vU_+G H3M.Ps(p׏I"U2c&b,Aq -H #ll iN"IP#v2Q2q#JrǕf^O{wMhLcV$T+`. @+(DºH]Fj&UJՙY/RD%jF /`rQúqo-O6L4ZJ-`]`<׏ I 8HHkp;@;Ic%RB?fQ"D}Q蹔tuxzWo|a@^cVYt;ַxݍw3(ō<5F7G.?Ukx|aēb7S>~lsrolaSAqqm "CH80̒H!RHk*.e*.À:56λY_8rW9a@#X^j)B?٦Ls7-s#q-js/7ndLk)Ej;^BZjhiV:.?>,DԷ=lAbM袺X)(o[\Fj]ݟZ {2;+Q/jr|Ø*4Jds ͻ^e7 U7ArR& fkXQ^֒s p!8D&(`S0! 
IyU# o Xߺt/.8end\>h)䗫~<+eo(t<Vhx"){cKq۳;m,'U[GwjHf{{݇o5w\dQkx<ᖽF*?BT%kx QBBaɚ[Qwoa-uX(޵U|O܉ c*hg$}'|K5{Y;1pnaĸuHY9MZ>=Lnp]¯${cOO)| HcD]IveS-*(o؏$P T0.})WgC, \" }C95H?I'n֎AUϏgoO~[!<*G~2:`[ @Nv /}i7$k3I2AoK ׳k//x?u6E'qz̏޹]\odrs ϵ޽&مkNn2 N|VBvпLSw-M|~~e%x8 3s8{&nPz 4e >Cu?^Oe8>-:hӣ~nfW*Ă ~=x@</,_z\~~_$@19]MnyՒoo$φ\u^ǿT:*P^/sCօ(~xv‚xpo7o>»J{ld0S6H;?cW O8K TrG^/NSr~~M?ʊ&cPvKZ ^#(C߮F.'.y?shb|1/_}?vvrڲNfG2t4Ӏ_W;e&Xϯ}MGEJ5Jz2nV.{×;UdO:T k $Sw^- q0y␃"bef?uoN >potbl@5٢/hW㉃2s2aNC 6@?{f|I&[Y+bXB]ߏ.JB~ 1޿W= .>C\>EsguO~UsB.dw5/QPڻſJK99wޕݕ,2Ql'#_ZS#lDiiqK, d"|Z/dmV8e8~k0Q wa+.T}(RݸE j(F[_‰Idj2Oa [|u.Oq"uD!v#P7br-bB5ay, E!ac\05DE:D!xcl"NqdLZvrG2 KP>Z ,E݋&YI>Vn|^fw7a*-i~iPwӯcʐg姺ˌ?kژO͖0][O*sە1KwG5*)0aY;8[Qqm#WOvpZ/?uf  'pgۯOڝr_AOZ@ԉ :h.G6>爌BbNJk)CT X) q)JHȨX_@۹ ^@cN@%֗2Qd[(q8J*fΩ04' c$ \UaƂ\&s$x}/Nq.sz>KY~0(szj-)|&\Z)1,>_&[y]ŕԁPڣG@qIs9R=dHc8jbiƅ6dQTV^VKGT=B%* ^pgӿ ?~?Յt)TDh K*f0룒R!xp Swnq>랗}5zٱ76-79nga$:5#R.Ѭvׂz25Ty9>>-"uwXGO{Sͤh& dq҄1uh ]'DCۡy~o>Uߪ 8PHp$c8B1:s_r(VJoU ṘH-\v(t]գ6ued1B-ɇ䑈$ bc t!0!qCc{QjQ n&]Ift#BM,,ܤʰP \0] ɜ$6<2JGՆ*Gƿ)%%nίɳ&P#Ű!']`lv 6"6Q#N9,8:faL#uX"\J"\[!K'} a O$URȨ .6݈Ȟ> bT>ғu)=ϒ@]}Lg3|'XΠ1>C 2F .m W kd/0n@ nӼߝMߟGMV3xޟuòJ YQN)^xE=L6phL[|@y9G Y8+yu~߭J;|s jQ>}uL m^Tk|@KyvF\iչߧA8bD15xKjykO/lri?mϮZ]DrԤQ\Q=YCJm9uDjFuU5ݶ&zT$?J$i*Q:]Tu}L pHu^MC,L68G/yqf !yrDZEBhq`h *Yʹt 2vu&Qj#h oSTNj,CU52yB }ErIIq_sA̠;+vMe'uLkn[SʋU>"@AmB_ i0̹Rse'T$*J1HR=(V3²:Zgʀ\-UQARŢUQB[|<:^Ry -IkNrGQ%cJv$p-PQtEF(bq 0g=SSzF˜Ɋn4B0>CNT*`(ԛUm[GtPJ3$@jthضė3fgJ^F?T4ߪ7²Y`'(a_3l ;fSҍϊwzۻճ}W={Q9TL4vWԁaؕGwL=Q5%;e|D0eŕ(b3_l](xw> ΚMV[MkW\5?9f wB_ ^_\:-e@ !)e!BXEk)@qZXy蒄́Zi E9,G(}HF%Xͬx=nK)SToeȟwa8aH8)Ӿāz9dEsϠ PGdr!%e:-pY?gBRs,iud1UJ gxpJ\FeJYL2|*5/.j]LS:كj \ Tbz!G۷9|ޚ V.@Y6gFP6W_Mߑxs\n f^{= 8l= 8͹Wd 4 Gvv. =vqm1: T2>ujp\.L^R`̨oy)Z@ё&>G$LP=޽{S@r^t9@6>PpE<Di,pRiPJkQH" ,B& ZVwk] H%7 *EAv6 m@0!%ALj"1XݬDV:o,iK 9Esm3 ) Z(&vÝ,p0J>P{nN)1m1fIiUhIT{\U206I1QBBz Jf#[o3(l64Te2ΑK?ObuQdmӣ~a1r.Yu_,턒)K=SCI~>%&nx q.C/ءZXV#BeSM"G[F]݉) _DTxB$sh =a u/t,c#roVD0Ԃks4;F])5d 3Rp/`$^FBt.MHK$H ;) $|?ASECׄqf$#5ez RPvg˴/gv%hiJ(.g*yƏ-5{l iPֵ*GMB7}XNb1BvX/勃_z`[xKn<2zW$vkIjrȡtfLs TCN\.GhcAH >(JtJI)jR97{ǔf#pPW,^4{^y\_/z,f4@N\bz$l3ȃk*`]QCO:ތ>i'W bFtЩ ^Qh9O)9\ odr.~LZWV-Ma+lurio.}J^&)`mҎ&7U|كdAj5y$S#NpíM_ o|d+aFwѐi*{ G]Hք#&oF *y($XFGdk"'3GGsaUNʦ6FGUh#aJ[PjKJy9%0^{TeBR@oXC.|v\O$T 9-sf[_z$Iu (w]z)h~FE76j7g'BYXF>bUJȟޕŝ-7s }. w|72A;㨙$gϮaf:`ϰ-p.3o|<"^ᬂZ, ~J -{28R xG#'uM*<<qt2O#oXu\Sm]o0[:pO ƁupD\Q9̟˝h#h,.5jz(ߢE5 9rg)]YoIr+^l(M0,1CCbShݑͫ;BTݙQŕGD!>ʹxY{0Wj\! WBmzܖXCϮ?D}6VdaHJ {BDkt kڣnY<8>( <)"ZӦd I7F Եb΄t ;]@uAaxoğТx"9bU OwH S-*yL D-=xM5BgU; R@iyGMMJ?'BɆBR(c{nQʼn[jqyLJgp{ȁ NԏR-TԬq'mA)DV`dc[Ǥ@/RuVv>ZP? 
3dYjG)楂TEު(RAHTS q9IOHQ0]#Rm4(m#Ъ V2[: ͖Ts-@]nڵhK+)QfK͕4# [qs%TعT~ĶMAkgeu˖"㪭ƴ`rGS/5U(%kaBU!u*Eȶ'Si<Ę $tXk & - -l~hVv=lw,VLĢC3U3f'fR\vRP9(3BsbW(3J -ugż1)RQWLޕ]3Z0Ԭ3:⸢Ks=N4/U^!fZd3;j4HbxiUeGs/bvUcf!bF@ _[iQF%i-iOE8kNx|X}_ٷLd ćBߞJ/ٞggrn;݅"3VR m9Xfolp֋>ve(ޞle#€tݭD(o"bi*Wu7NFjyb5j%L,q>3J/3R~^ޚ9-=p=dʽM9TIz  N|T :zAdV&_qWWTIJI]c 7پ%R0ӈ $d.¦ ,ӨsS4u@zm:Cc8DoN1BoPÉC4>b@{*#=R`Kv6r[)݈\BX,Lpx9;r3Y (H ܫ-~C/f&vl|_{+`3GCCQwdC4 h +/lU=aF-S2J3B{n=\B:k8Zc,fF 67?݌RvWkIK𠣸<5ueSQAERIEƴ*sRŇ nrFEvf "Ju8; /YV0y8Pf /}=u4oۼy/g?])qlz{4 <>>Iءj]S-0& y'jG)"1DēF-CFɱM4RSBG?XJ0ƪʮa헏?{04Ѩ-@jMc1>ĵ} ;)4zAgA5="~L*jHA KU0TV^Gw~eK&YiV]|.ä#*A`tx2Ҫfx:whis6{*V6m T PP:- FUcA56t7:5J!|g|r] +tguD\{WLSd˵1VN)h!䔶&dH3vdaQ)w5iCZu J3Z-·b/LBh*RQ[sUhtɈOtc[qU*euYBwdekޜW_\\^߄H5$K;ywuCի37Gv-wZ2"qƄ25g<07o4a7c>\xujI^\^_3xu~z9jyzvTXӜ2w[(VY~Qx?O_^ߠdސ_TVNԞ^ ,vzDxu}Nch(l]M|% Nw7@YY,G XK(ΛӋ*V,m; t;u~#WcѨgs{?qaj*SDRuSQcwb'4ߧ#z (S4A蝹tdP 9^48a@X,сL/hNBfmƊWN|wZn`OaS='$ax2BD7r;=ۙ`bBjk8 ki vWB%z\.dtF%NZv=Z+# δ\#&_ `]GLc%o ٮ.6m=zirrJ7^s j{0Bx[I>ξ|=$s1rK#)L$vxYs2|mRfj"EPp R=>1Pp߾M;C0zRdjr] Vj=Hv fϩSsemI9[T__'82ķ uj}e˖K[vڮkٛ؞uN8ji\,YR TТQXgiA16i^x$dRd(t(~Q馱IFh#FAZ(A5cA:Py4"(+\^PRoBlKUZ "xR&ܓeaj  묐zfK.HQhƠǢmb-2llR8 UˋJHUc6U J%Cƚ xxYDl-7m->7zS["z>gA+m̱psQ[xT77R~2|V7o,̑vZ+2%W\Qp8 l{()ioۉxȐ?uw 8l,?8~٘Q9D-(*ɬc9>- ΂zNA c=$Je '@%MRtB.M֨HC,ͫkAd}pfNF[Vz%OYa:bʨs^؞vKo!V*F8VʲR1 +q"8y7 4FrɊFy7Ł'MR~s y9GLݖGnlJ^S"n~1,wf8g׿?.䘑p}7}~'`&Ӄym f\-`zy?^_Q}U4g wa'|^_&/72r$^G@rJ py$5L$z41|:f=Bn49lvS$` !UuMCl4  uoLX}B_'w/h:e,(R2:i1NP ,^ŬObdb`#g. Ƭ9'UPf<[TTB9vX8Uۯ_°NVIit,g~"߉+V{\(ߎdj)uvrLI"`pA&<~,k 2aAת:~\Q*Sgҳb\?N;cIԑtkҵV|(cg[Wuf>Nrz\me0 9EP5-韚4xm#j0 I#¹(?W1HDѣu=w0[6%awk-+dZ(goq臫C2Z%3sڟ-UthS{_ E؂2n=}+Mj^"k8eM'˜7k^4?4Cޞ_ݴEo?b(ߌw4}B.8A$o|t^B%ov$)) v$ۧ(D9Eԑ@&:Asd?tכICӲ?Yq]ܲo7q\d}8+{Ι܍WLY3mSGL0t{$䬼Z>,=iNf: n._~0w '0{D#D[Gi`|^djg5>%;/GvI"aVS&Dnݑ(5 "W$j|$ ;u֚yV$j|$T>܅6D"u0U#p=Ϧs0TNu5x ryVw=.փkA MU]@i0ʤ:CB3N!''r8k} hpYht!Oe'C:J<ld3M(G 2JG O!BBBrFΥʠ^CgPxH֧y?WhVOېiņ;M8?M[N4S (dK0cӲ^2NUO@$Di;zxhWp:М{N;Ѝ|?H?H9QgEHh Nj5`ˤZCΛ;z*s' !`x?| b$R0L*a@|Q9=GU 84XZcV{'#Q2E |rRI]>l3ѲA7ֈB\'blRݜU1K.'C= gR_QD THafNfj;>ܺS\?#w|ͪbwnVXN}ற&JjN# 9/k} , VX2:[U\-&WpaQIp`Q-*f4د-(RYvq`+V8ki`3jd:#R@,wNE@^-\l$X~&U剗*",&%)*0`+@eG t\qPPl tvESᷘ @H@ʶZEia+He rr"1 G`[@1 N`%ʆ]]t#a>b]sU4uJG+D$؊ZJx,znݏ|1X[l5Wa~ẁ7wۻ/WNC.'.!]/ct7_~ *a%zO' 'S- bm'*((]llYg5΍YؘHZ̘hqE "dZ0 GZI1ɉ鱒?q#RInGh i`W8(.+ot*8q)1R3.;a9]``"iTp5FAa3x" #_+\Go8fa-?IH샸4>.R1.40dtǼZ"'x`2j$#DCDs-sQ!) ܨF' lZt 652@W`RL7=`Wa xňgW Y:sbHt ,+AaL#9 38x0'A-/ 8nϷJ`׆q4u{bʙ/]el֕zoװ_"~n]a3:aȪȪȪȪ"k3PPjuFěXRSێF%4 Fr:j9!a\d ="ڔ=!d*G~ Uҏ~) U֞pdqr @:(K+`i#@㬬*YUd5VMqqJ na&Terp:2 ˨-38@VWZXAQSr5sPI"f¨1@'8 J5V =#w6<,W8 ;츀0#@戧1yLnV *NЅgE<#"-Byږv5do+bZ%u$7/x#ւⰓÛUx5)ZjP l'6\fb>u\+B 7LvO&AcEx(䛡SGbDomGM3ɘf[QSq ف]1,S:JTV]%ʤ@)?L%(vp,zϻ0G&bn4he)O82<v1>`ƂE_ p4:TNr Wղ?[ p,c?(A{ HTxwaXmPç7獵O]:p oFw-'[cۯoXj;s;ڹF |t1o>? YCd,)OpkR ZZ7]=ͬ1f*i 06O,gTp7_n*,d˂z)KdOF[„)eD4k!?^?y$^ŏ0,7Gk`L׾9owtsVp.8yoW7oUAun>*}p:XšݺK.mq>|vI.NR?c\-6~b_%U69O/7ӟodDS&6_,Z$9o`.Ӥ-!x=F4__7o΢yWR}'ţu@y`6aZ\jS{ZB0dQwBCQ%M,xT.-xep 9ȕVu@rעs)N (ua?m L9aIF]rK/:ݼPp"&N iR.q%/,5E~bƁj8rIс2*mAT2 eɌXЊy% I'ܼ䯲.XG 0v*RBfV^P%.fe BOyiOOh0޷u'0eٕ$7g 001V ,T#7$ K=l8.tZęN{ @Ej}푟7!9QC5 s%Zڑ3 ~mRO 9xB(^?P#dV aLrt}Ucl{ZӔ3kǜ3Q4Y[,Zv<¸3&O1u,J*u5**yF6Hz4FNHY)x8m2e4)*u53FQba2g= cҎzYRiѶjjɡ_ЉcÐ%}+nt]6*):9W?p1[<jJ]/HN<'_/TV o?/7h?~ Y\]RGgADQR}|5z)UMI&2+. 
^QS(R=u$VD$QZ|`/yTj}tt7_^3JPP )7K]DGeA\{^Rftt ~R)~产_, cXCo*Y4Ui Ga>2s,l*k7W'5;ߌ:y;v8NN80 iys ߒm.|AڠP^IJoK".ul#?1jMMG*ߜyOM<qӯ?_#X ޤ'|*0 k)7_#OcxHWy7WԄ=gakX˵Ρe32HY)Ydޣ,Yd[XMPUb xkuGaK$8˓H<{N%IJ/N0CA)hk _ {?_0‡[|q^=!%sm>3 e.ːgqףH!e4XhpD,H_/I"/E\S.:W7j൴3!"#l%% ez"uŷ {˻<FK 9k u-t5*wx{u+h~BG"q&$uVufBj JB{;Nr͹: ̞ |/tݬVaym<-Vw[}??NBgͯ7͟gs_ѓ=Dh8TrICrl~zra6DPT/|$&P]tƨ-8juW,e-7wݙZ<<]Fb zveo8h-u/ -{|.W죣!9M68K=Cu򬮔sܪɼ6&tl̳U]'* 6[ h!}"~̊ص$ox6**oG}\iX@0Vi$LIgϏk/a[ʓz/m;)j}iնC2Ի}L2&##rثkj2IwzBk+}i=HQ1^ RP2eU5&) FTg5jg7zf_F'z'SyIߢsI鍥ycЅ.KazXRwzmm.h%Vik21@1-myl[s&d4wROED`lK+RXGd(GJwtbc.V>B4A|gUiA785*TufT,m?ps RF bAn&Od^F:\%[vXExaBx21M2?Q{\=hB05\tHEU'U {0?`|SBҚ"L X&l{°xk>#G,߾O?H՞w|3 cc׈1KڻD1UaץV:HN~])daT[MkJ|#u6j쯊6mbΘ ̉yK ZBmv& B<êR kΩ2{_~ZȔ< q!<& ne=$[k Yϒc$gm34Ĝ"3@rB`CCojZ<8?7$EY:ʕ,\4:cBx-H17޾_5gcZ0*)} R]Wv-A40\k!;7n'IL)cHA <$f5SJtl1'xy0> '#(ҸP85 /53$\ 7Px2K=`}胠jF1mJ[21'm%f-ѧ<[1v[Ncnҁ$& Ea0x,"~D>D.T`l%#ncϔ`VA_z%LT 9ad΅͢w)YFJ!obi|mm0v1gi+8mc]{ Y,Z%)V2^s$5օ׾Ɯ h9g Xn+^ycs0XkG"D#A[h,m/] 9,%JھJeʥnnnos'{Pm-φٽ Sfqv?}$)KhRw[ 8o>zʰ„UJ~K& H̴GUM@,ސqc^gUm53c쳹I#MW \Lyz[ -pj% CUh!Q,r)).P3Q@LNu[:>xr3=ʶ)g5Ϙ*P1k!-*oXh2Rփ`#ÅFy4H7M跜&57]r$0eD}-&AvamӘ`O{. 0 Ͻ C.FJLyjԤw] e6]o4mijWjgw!ij U(Hcs'뻷\ Q l}pZmZ/rV+9)=;Vӱ֕Uzc~qc9gEp2oMX"^.fPSعz{~id+--_.x|Ep-j vIēx9x)}3 ؿ(?<dILlEEPLjijiO%JяB݇oJk$͈@ nj j[ Z3IاwZHy;NrE6x{[%9Eю~RU%5wZiѭś>j5e%\sRl&cs*a;~y4[^) fhĤxm?1 ]%jMLCm,MhExgDSx ) \rɳ\2惰xgw`а<߹@]P,f/NlF1-օN(D_IHYr`$B+<>/ML= *E&HbMA,JrGTB{U -a wtRئ>iF qް}>HD-/շ`B!gB`;eX/ hӅ/qӶd5>S7>zg__>Ɗ؅< 3%%dxȴ$Ucjt!twWx;[Qws[}X,=}O⚄Gp[h6_ξYnOϳm|_oЂtV)~w}nǃcxL |zPi37F 4UEI2%ʬw>cliUE@>Er>a .`)ˠJ:|˥Hb%{8d(l| .u@ `"!;pI._24J1Q{q4|0555ٶ.P>2¾t54~uSB{$r9a#o_lo=+ɹp 6 .Ie,>9lEV> iLݡmQLk!ZmP*U%R3[KəhKEK,u]K(h!,4DY))EDb1I{DbaHt='+-즠]+aoY}RầTP*BV{ Gc,ҽ{<JV]oNNLpkXaG;B0ͻ4foBD|QU@Z`gs@N s޵!Jv4j^s0خT ~) %⬤Q̋N쫻NbO57wrq2dx3%&{1A@H5OZ Hj)!ahBJ tI @,AG!}Mr\uW.ZhLX^YT^RT~U G귭¥ PCe[̲6Й [Jq%}˻HDyc!?DxV>g<"˽w ˊG f=3e޸Ͻro}Ĕ4~MU ٶ BZQRZq@rrVZY E5W^0r{B};ˆFTX)KR(s.jI6.YYI) - aL0>%|,g;PŦ!51*p\ކ75p`7ʊ_[k; (w})y1[0bԂ-6bFYg`3\Y S b·v- fw.SRy?☥iŽZs;4}+M:j2%y#rytnQ+~-4Q6;!f5[lQI1>\V YӖ`Ql=E,F/1Fˍg"(dimrBOb /.ӡ#f y @{\PmB9rV)[eJ}Q[٣ҘPzgfB }'-ܥ&|y`@c_'T=j"" e \tWD_Ԉ &-{}rZfj&%[gMwg!M75aJi&73g/vu&vkΑn{[5V'6 CZ›`ܺ:rCې|J22DڔlVAxaat_B֚49tSY/+>@H& !yIG1"s,B-8b?(m808Q)FBݮ %l\b;S[yk,;Z}\?n[ c㇙`m]$0'g<ᏤӍͰ3eWTƁ߹K֌ih>AI2;0\I#5ѭ,OO(L8$1ӚD1OϪ,B2>W{ GR1E'-V{}IcX s MBbu#%d;S-滞F)^ A5Y@=QH1MaPCLTcTOs"u۳VʿB2~h~WRFFE$"0LT B=3w& 5xYIB^A ͜.-SR]ZWT%,?UvcR\;(O&,rw2; Quc.[^\Ę-S;UΚ&%\\{;a,6z Q} NF9;/ivW2q`k!~#X:@I zD 8ٝ&~;8u/;W0s<>EQgCCsLZcLT M=kq9y3:u.veDU 8hƏ ?Nϭ&`o>r8^H|$j;i|G%^ z% 7#[ܬEf8 ߥ 608c%&{1A@ 5gh<;oII>ɁBX#wʓ^f'K~?O!Wr0(/ct'2Í= m<Y6vK7woX-@4)E~dL[;?,R9r5^L>%Z&q6ga?AP_t> a??{az4< Bz~8? cp O| .}?UOxI7Os|O{׶;^/vcǿթ{ѫ_u͇[xǷ}.;~ݽ/7MC2zrΕIn}Nj3q5"w8if}9hПJyFE.0W! ?Ƈކ~h2/ `>s{Fz~v Dgَ&7pYqHN қ8װ~{oq?)7 Th*7oزT:Cv2|YW'/Rci ^ֆ(Ҋ$|Fʣ~jq<|ENA :W)iǏMUQR-a*nL?t|@_-> l[ k0#+VaVf!e׃+˼ ~?w M4j,]TXY2rpikW }VjGq%J{ {6CT{-6.4p]mki& ^[+mr> {_n50eabɐ.Ng8~^e"tNzgq?NWIF:{uX$Zbq] ^쎍6+!5n-jo ]f<6Vho^Yv Hc/ KB5t4}:I"|傮 <(BRںl"R+Ԛ޸].CtJTsEی& ymΕs2fiȶ{TDs5)\K ۦG,kɭn j 6Q8{iћ3ڋ ܸG2ç7[f{$ (\. (w{7pkopq/}aBGM.%ˏ-)kGy'@[ F`i6&UnjcCr#,*l=u^ͥ*gwdqV\"xVSp.C]zY\.w1r,y6\ Uu}kŁ냳ȽM0}Mح(B o(Jje=ʽL* eO F/B2>YlyݑYC' ]=F=hӯ^`GcA&O΀?ug3O37>DR LREK0 C/}k*dhsB2Wt[}; D݂wI 16kP &]Ufuk]-K.rvĚ,a_n2+)+; i7?Jj%-*Ptz9hRw>Y^RDZxDacDD1 jK4gaka(j*tڸ5r% *FH1 sB0Rbbp$m֒* ͞p+hP 8ds* }bmO"Qh"C8Oj">[M0dPZr)em} 'sss˛2@58uHcqlC0 Sg h<:[c5{*C%Aխؖ,ߒ`jom45# $p?EM $-B8WX?[Fice|L"Ne- 6(ꦱ[#^nT;RqniSV۶µ+\ՄᖔIuIJeB

qZ Kf44׉|k@0 O\Y_b">cDK;2G?_^ňCFÀO1^A9eޑSŹ9 H`&yɄmVX|RNB%xF>ώ(f#n*l)N48x!c6I0(L !sF | .13#S+ RS2G޵Fr#EЗ w \`gK.\)'ieֻ {3z!{ xWN7WbUHt2xoy7<ߛxMᒫ`HtkcUthYv8GD<'{Fg'#/:}yo( \浐m-.5d4Sqft4JSQty'kieȲՌq@l<x Ε=6-T໫hy? *(pDpƘ n>!&pns<}#9k#UaվT3!!,%ΰ_kb|Ui%zPRRr/"*Ihd@iT zV"L<"K2KyiU\`PA`rf49l ¹H>{blrcشw?&^fI1;G) i" GX*G扣' vge×R{sJ Κ>AVϹ5)$טNcA %RЖ6[kRP,0fJX&ub<@%Nx֣9l5)wǗS[+*CW_/ LZG8G: h4.wTUhp% BI Ӗ1 0hàM60z#Y"8͛( 6jrG'ha ^rh'cf)h+xVZՈVp}8v 0`6pekѲq1Ztrpɢ\U)%*f6r#BeO!R@RѣҊ70̪"AsTz \o-j$Gmg͡@Ir : <,]:CswBir\aY/ל`c$si@>堗4 Y%N'fX bh咫ƯBp4oU NIðcX2蟡6Q Q>B1Z$Aϛ?nS'9=hθ:Fk/EmR3=*xZ ʉW!ZG Bxn>ϧ'ߝV;!hߊVnc/:*:$(/i,#xPV_}:A.Aqp^"w`9zk.{wTngC\P+W4P  cfb 3zrfPPQ8#RPZ ZcX7H$}{hPDnF*awmvX؟)uĘĔn*=)c箩6tZ{5ؤR}{(:so&'"$%ڻ[ŧ}ߔq>:!%j9!ewn5*9}zeBO%}r$9GKтlx-'ܙ] 6xjZv+Тᵐt6{ݝ*%^d.(q> X! {)a%g.R͇^Vr:VuDmxǒ$025%. )AzYChhzԟ4pCi% ~Kw>*]{Y2xSn,X{_]vcfYv/gٵ:RoT<|B&L{ycY/Z=Z}s[ n'f4,O,3tr.tY7Y7ѓkӘ>&ci5G9=?985 }RӘ(eQP'*Cȥ@YI9)ni.Q',)dTJtB 8y&v4O.oi\GZK]qpI#/%ND (Kh!&<*Ye) s)Yoh`SNC"N}V'Qou Vq^M] q# TV;NCD9圫!PnGz ށLБo5u)92n?ƶhZ$40WSWA6Nw ml1 1R3~f8b ۩lR*g^T;2,V)%;s>K,^ L/G_I/z @j?Eb}J4C_甯UD/8 Y8eWǵw<<_4rpF&\I <;T<ݠhx%.q-BN`KZ/$HzJɔU";lVvݣF1Z 10I'1DGlҞVUp&N1*8o3SQa;> YcĨ&kTHz=#$K R6S;F1a\z@e_ʢT {t1^tWc[f9ajOg]:XW2x, &,X..ی|= pJThbnFO~pTLX?[͞ӘF*9aQycbnFO@9F2$@'=EL7*8lΰXQ!/IwFG4fTpԬFM.D18?/C;|;ߝ"ֹܫwթPMV˹\s&>ղEݾZCv!^VAuEeoV&ؐBNFKf$0\T&h$Pt=2jIqTDr@2 t[!3U>:;$c2rd∓bO(0RqkK ZQ RVegB[7D&"~%F=RQ56SXuhbKps4pף:>H #>y"`N!pm}Z3ď22o*pƎMTXg2*܊$68.\J`u@uJ&PGBbRYiQVA)FK" p7 0M|Y|{~e%6*=> 7ⵊ@/?~:3ATg̍bf~w ?{{}[$Ow'nu>_]]|y~f8AR>~'4G&9ӓ+˛kLN[2D?~Ge=n- |q*$qEH{//~#{&g-;BYgRJ:IuJU&RGJ1 3TLAp}JBtp8E!`Lfc,7@kOqD$WF0kWb`<5 )LR 7Xd` wJz)sV@$ ^yGaU֠ 2*J8\u ylh RHAqidM`4P. JwKY0o_T@;c~ Ж|TBhkN8[ A9єH,p謼4uG ԱƜKi>{hip F1V!J.U%`CUN *+%Xj]CHRELZ}d4ZZCm)ܤ\2y\B׍amY$񿣋FPz= 5$$H1k\D5 "?OF-AozgOMp.hwhdpmf]xY;w5k:a__M&@Q,Vˏ7׸xf#a%%eV[]pshZ)34+A{%eW7fCݶ0=LifRkk߯^3coZgx30B Ôe3**>YWDN3){XkEeAө6UJv P/DcG'*+1Ɉ6;|ds7KKsб9nPa'4 p:(KZOAcLa4YVl<x3){X$Đ,*x2#f%hYL`g(B a6g{8̘5ً!HJmaAR=m ;l@DjrnܪZ.[z8a|@1("\5b|A .eDQ3Qd6N6S9SiIIDL3~_-%j{n[..>J;<ſhNnS1Yi*F/Cc-3lDlРP\a!}r6 qoUAqGoEc6#X2J޵nA垡3r^aYя3bӶ][oH+L]IV4 *UU$v'_ʎm\KG#n_$"Y$痏 rO#c77C6| M`+cU9!0H2UhMflۜ\{5_d)#YO^)xq}{ohZ{ H7) NSgj}QGwS5΁oP G0 2Iud73z_`fbc1Y {}oziso²37R6#b0 uтZϱWI#Ƞ*J)^mrH!YB*nԠ5.|--juC\Wwp~Yҿ>goWput]><[X5(@*EZh"Ԫ A**yEI`5]E<.Bfhv!Vh)Y2dPSi* *RU(:a,}fknbk[*4B緱U6#@a-,'-dv[("kb LBv[fLWd-EvMl 4#b(~9ۋsCdz`:0?X&ULJ>RhIfȩs GTh{ 3^39!.7ˁk;Z4ڗ0N_+]6{VIQրKa܄w$J6}ڛ%nC7Kdi`H t$xpD8;-$Gg_4O>_~;RjAH[v3yvuse2Ssǽ3Bl)ZpwC|~Q=[H@Xro~}HvYO~MѴE+}NjDF#BtaŠZʃE6|xю7(q@; #hPHFk0ZmE5rK  ep`I;/P >]o1`CP굊(y%]|U]l2Vso 6ү!'w:IH' 36E.+(Uᄤ"DwEs7^ o98/׋`]nKS+R $'{2P#=O ]og7J(koyԷ򈍫"u'cn;D &Ʋ z)bL5uxKξeSH 'rcly-]+]U\O)>Ϯn,O3>|~Z :_@㹓_JBrϝ>VMWfnS4{Ӥt9?@`u%"3p;&XR͑ ;cPzVl"\#VFΙgo_n\sz\T$`/ KC(=޳2 4ҩJTB%! Z'6 FגW1acJ^ GkRw7 ;97# MiDkB54/į_BmP| V] a#5f(ra]ԓb5NM+03V4r=%z(9K//Iɝ_-rb?z_ @e;NDz[Z-Qx;+`InùV0.k2(-4ޖiPQIFCh}w~]{aI=_qQֽ zL:{K$yϭ5iftFJm*rԫv:yf2QzS 3+jp =SkƬhvDU=Y\3/2d01o=INqvvv>d߆6*wjIg}ةL7zut`$}k+C,L`8+/#!5*vCφZq[p _%i%f; 3U4rƙ/d͞bpO? 
I鏏:@IRlwC]B+(B"4_s4Oo]v v*["4(A!h}@T>nT [󂍶o MPK戹<}?d-aw.͸.-k=vt͒#?7$ƺT&f1mމ֍}/t#F ; _,q{&h?_<;!$e05˩^a/s\yO]Jo}/˙whTGAjd1Jxr'H +d٧ 6;0iѽgҜUXY>%CA8 1NEn Y2`xck_+K*.J0REZWZJ:w˪m,Oձ7 SyGp}%g Kkn)|bo󆍤]œumj+-'9V5u Yf4 6u]0neufz]+V8GYFagۣY∅v>nj34 & dʁkGޢffx ke f:f[.bZY{ o-wuAa-F>4F }ب1}D[OVܙBd, @*.1~u6Adpj]|=u$m|]nCM "՟U<\I*FW5[T"Q%%}i]U=^=ό jbvSgԶ4ENA?r6+WRa=Gt%AeM+N's6[/˙b^PG!c줭G<^vN(Rh4|⠇SJ1CIx8luWRC`r`U[0Rx_JyTTt@dlU{͉gYJuY&H $JRa.C^Y@{yyYQE)J!ȎVUSS"k@]^ɐuD@h<cX_`JZ|A=R pFBYkU!HU녕삜b]bHe{L[?;f Qsh(Mu`G+,Pɗ}AV l s>َQ"bgːِ wXϢe;KWcsb2^4ƺ(l(I4 | ֏ԏj*xOP#>U到 y']IO@[+mh_E>!~?Mc#vZM`XZjuEA;KR"MꪝÖx,g3;;3J<׈CV?<|~;e} 6=qp'[zl"#aj퇮L44 42lXoW るr<]\ir—\ O|3b71Ҙ 1b$ ; g[M<ǃ;2=7FV.w(XWׂq D0HWo/&w"%|=sA7q64v:AF4vf/}>Q:/ǝNw{R؉jthc:^Oqz?zKb&0 bvw{X1AHmN`\Ҧ !5)6:](\/rmfY#=nv<ØwB"teϙ‚B5_crNa)Ymeq"*Wıg.I)-ƸXD:&%"u aeoc ހPXMQRԇUOaw%1! $ ]q C̕B)WE)KY:«;OZSS*c!`,<*Lp]#IBqPw$uL>Unݐ!E3G)",ob @5aXqڞbLʽi\Dr8 ))(֢>aQ)l!]S{MQ,m㳎q{ l c^X ØgsR?sǝw>j\t^"*|>#iaUek"KShOqeΨu_IT.2]}k,)BυaM+.q=-%ksH-`|u{m2K20tC{TlGh :L4zV,Mt +:hWfTs}۳uK3 /u*( NYY)gKt]P.dJؖ2/'- ;OwPjgv"BcHv+~ÍtC-`=zgFihV/AΫVf'QsEOb=JcGqW"MK&2X&9OPJ]RZG.x2HS@:(܎Y&YlpŌ5~qpsq"OC@o/NX+*[2.WYXp\XY0@^*q.sWq%AE(ÿHT>&t?l1XU,omџRl ѽ#7-\[gf8f‹bS,xʗ#֥(1վl=?=&H r'3r t$_rM<.4XylPn>#+I.p}GmX3^ }=gX tvxL [p3[in#Q><W}> W)Bŷ&Z x8X{&[̷JL*Dk8siB Me;ԯ&9$YdNf5%뽉Ɲ4 ]<G'ɟQkfo JfMq[%=j{5x]srK7bEJOn:,+O o8g,BȖkhI&Yv2SJ !z ^S$n+q`z6l Q*]e$dхiz ʰa(}}% 0\`4i"\*TB0D B[WUܷ˸k'=iM˄lΛﷶ{#34B w] ,#<9j='XUUL&Y{۾3v$cg2^R2+fd'hb"⦒Z'9?֛vj#P{xEsQ`ƑU^P`  !V7PJt4Cp/$HjxFj퇮5p\r)ECj<<%M ͲUA&,I0J#܁s?{j.au.V>* ෧Æ%e%SpVM3Fx3Zғ uSy9 Ja~f0Ȁy\ϴ7jl8ƪ)x$T6|g$v) t@0:DTOb\ZS@X#/Ku.VUvotV,ƃy:92`#U(: A U_J+,beEqA}2͜lV2*{i <LPC;Og@ͩT#?vN0Xh,ՈZs9N56AkTH!ȊOZvs"wx \PSW2_rm,1Vmw$-zjL' ~[zvR{h=.D>|I\;# )aR*m/kQW~X+QS׺"cbQ#-t%[ IT*SD3&*r/Z`1oZgRj7Μ n*ʠYP BL2r5 t[hv1^; }oi5lG|]=iyK-pa`#}Z5=l]#hӴoG;8 ?u__q.OW?U=[BA%nP>oLI[ނ[xGdʖV8sVG}5 7g>'$kwbd;WWo^;>Oi7Q~:?.vǛޜwoޞ7o~狷oOoR6_]^zvqNz.O~ ,ce5'ޛh׷n{4;支c-?W$/d~~׹on'2Gi[ú9?^|"fqbm }g'8,KwFS I@x><1Д zM"&7>:pg޶C۵v۽Ieswf #lXOW$Q>q3LSdNϨٍ4 [j1No x9|>V`~2cJahi~I%|c%&mczDٛs'_EE]YsƲ+,=iEU~бuHeIb H[J3HXZ|Eg>0!R 揨eBCTH^C/bTk8MHEjRe4B+;g\PjaZGbLPKVXQ1Q~T.7hNTd핢W5RWJѼRĄPdBz)v-=h[aJg@M-,J6%>qeV8f{.JK݉1{+E  ܝT";oA7SAH'R+^5NkA*F>:3=5^^6cU=mvZY04 ev.X 1%8n7 cb6tW1%\b0dy #Qgȡ ADH OU[gl%. M[^G1ѵozQ]J {_}7(A.sY៎Y:qԸl߶2 PQzH\sC(2K.S9ҍbX颹h5#/lʹ6K@՜CZq.Qrl{5\50Df @rD]@)`fz@,171!Kߧy+7o4|c'щYɑlܽ95Z|8Zms}bh[< UE ±~w2[`ڤʹ%&ȡq욧b-~ԐiT<,h*`FWf|EHdk6j 952]X8("p]BϏ3mKNJԧպ]W;*h'>͇ƺmF'Sca gHbB󏈭 N526#ɴDXZaķOOzw,G[3$p4 6JA!M]S1}:!> xL}tǑA:(JnSY׿zݱ*t謭/V;ԙN%q*Ӂ꠶I!H"S`K6s񜠔I hm9 Twg|NLWfAӕQaӳ-KJR@Z7M-e梲r[#:qu<ܾ_P -\V-0?8۱ lϤ]&^pS*׾[JtօM,1D @2/MRS-մnN"BX%$4c$J !DIDS$m@]uY;j '{v[7TwL\80wrn9u`gw&/4j|vnz0mRC2E3?1%{3ԡMEo do&]Tćbft90V!Rybq3N "߶G??Hd#(YJ#%mJ/oAP4?x=~yv2$gv./i~s:9`R4FifjD!!_g tDQ9zt]ϕ@? j&"-xR sr1xaW!P6s1KZ%G6#1eE*:bȐ.; %yWljYù{kn7b˒6vAV8U;kd%:pri\gb!'~JE&n K~`-*E&qige-[YxiLx]9 ֆn`l +؅` !axKN/`ry:d1p\iւvL=7'+1•Ξ:µs'Yhd@]I# /$ Wh=gŵκ$;vqrX`DZpsZ֓Uإ6) 6pԇn<׺T}X0/F>+hQgW;-PmaD i a1شgA2pQ9z/eC:Xd|K]kF^)*r O\aVC%5|H}/ ?KF+'XR|&%-qٯ笔f/f@Ȓ!7%2Ð2!ݻ(Gr9Jߺ7'9I"sGտ͔mu=4: ?_ YqI,7p/I(N"FwJQԜ}촺ǎ6nÓ-q|=nGm&71Gƛ7KaN1589 a:O~NW$nӛ^\\_]:<(\|G_{w/7ӏu5WG>:OF>ܷ|luZo[0JopG^t;-@KsZm*Znd8r:ɱ*w*wuDϓ"^߾ɯ/R;[9R4Lx9$BTʩw9a" wmaOGn7Ӑ|-VSɟ#ڑ}EkJ½@m!a[m[$z2Ʋ:VEOXm[;x~x''>R`f(7nWïv10l=jH  I#TKIƴe ތe@,dקxNr,:x4aTۉT JITu`G%0A1Q &N9lw~_}K,V0M2;jF.H34㪰V} Eʋٮu xL+ܵH`#K:[q |_P!ߕL ]{$.h٩'6X~LM67aTxYU&]45=|}LXʡ3i*>Z0J6LRL3ReY>?"sh_ZR Y5kFI QFY(1.O0! 
Fyq_k8^hLb1ӆf`B.TC.`sAh ̘>/S%4`>SR@0@xn JaA' 8=y\rur1E'Q*=\PrF.X( w+90&kڔFooF5XIfLWKۃeܴv-Mj8N޼ ʊW8[W+*vR Dh3VQ 6_,/{ P$Ux}UD`ibޔMml*\1A27&[u- Dܩ>T,YSԋ"R{'א/QLc/4.ԦU0{~jex ^hƐ(#:.Ƃp\FY@AJ}wƙD KVy&q*/@ A_|)* #p#Km;S7Zz/$b72Ms.͛5NE~D`bqMm6܎{@&'Mn;9I3JD PGޖ UuW:nw4.?5zA!Q[86](tVD8@ȔBǃú-JђKDPēɏzm@㧆asQز=oDUKa8DSALQ&rq~T{Q7?"j8bz#k[Iz~vcBӞzdE1?G}עS hW7|m%P&ZRp8xTTIf:j8-UQBLH!KJH=y+:)ո# ioNO89e r>&9xߤzjzxz}oM1LIdFzgwLב` ,@ς6%0l1]oC0bZi &d!X@.іRZb,R vJ^wVQ q`J홀AJ[Ff @X-߇,n7FSO_O~MD#zM&e:ǛC9M5hR"B}R\U0p5MH^Sը*$kiQdL=ŨÚD="}!9_-ҩŧ-I90PYɉ_>E$Hт^|9&-b{ekAߛ*[o?/Fk$E[OO]d ^JOfq2dbQIH^__cUϽKt.CWYvoOW O0jsG?h񚽶wQdEZRwX0sfV(9B[zж|,%5ٱ@!>q/cT<'L2!\!3KڨH<7Nc4,f m‹uWP\t35zIg?ǍI^AV ه4;O KZR3A]SCZníL/2q`օ*/g]8C@5'Gt:_c_NxO{cUԟ{^w[Yp:O]ngoSU\p85xcmO3S*6^_O3C;biʬι4{ 6Pd3Z6c]ڮV|]%#gm#Ry|vLF[TɌeuO33r2|}|ΐ^eI[ĖWJ2ɢ=8ruNU*|7ת3>1dJ#w)Wdcv`6ہ^"Jζ+zzW9iMU^own8o&OU@}/Aݶ kiU8m|Vt8] tRW@#jLw@jvF˜yvu+[ӷ<{rUKwJFKKzUv}| J#>34cV)Wb;~sh @p4`T{tױҭW9rmNwK\J~K9%GD|EC> ApωaY惨#ؠag[©-\3~;D9͗lDETL6;y]L$S!afЬnO.X3%c;"WmVhi#,~~aZ$yF}=cQHgmMe="BÚ$AY )P)QКJєF8m0hMhų1a1iPL*NZ8u$Ԡ:|8SR>@s[M0K#b#2:+)ւ)_8j.:k"Q\Ft}dɁ(\IlhF@'|$@ZcngPvi_| 1SM(4~Chx*(ChiFFK[ 1@13hofy粱(Y s%~SsFh24-e4?마h44}!l`i%Tc(唩S)]b-tv[Gj1QU! gX.O9?s1x)Łtov~|(t7wĿ (swL}FVhNhp4\ǹ맾LM0]<'䪹퓦2%I][1_ M _+?$JdREH%VZo(R*Odt#3 UwXD Xg1мJZRYAuо6+lAPn=\< %1xJ|YCtr9ltL)EȚlvd!Bju\XA6՞:#đwIkB#]-}|PޭNpǿnyx+ux_5qP@N\mCg`ZMoA\)aį>М CW<55:nfM(iS ys=ڹo_)yNK×ދYSzsYx_)KTS&AU^]c8/}_#8BE,0hc̜@72*]Z AKYed%J7Tȉ ZbjJLYnbya$O1~Y-dZӗIF\0K V۫;]_4zas] ~cG M#c4ɊG 4= +DɌzzVfo.9ݗ}﯃BаLpF< cm6ph`N;-VF=D8F}koL3Z;8g6fʓ%%ȂR'}N nlHɳ]y_:ZŢ.PfE޲->2&8x0k[K="'f5!=׸vIP~)pz0PFd&:E9K4Zmt&}D3bJ*1j 8i}tDNrzv@^nP%n\ݒS^v|?Gۘz՘mi_^i>*ׄ0}G]q pNTˡWi+{o#)\+E*n5{>L>q^A)Cv=ݬ|bZݬ:װ55*-41\ ZJ]B bL~]!Jg"{6Nh@\N s D;pNPu Q K?yhI͜s=r VSsX "q9T)Ey 8s N%a|+876i|V /|JMp4ٸVE"Qɇ-M\wJBI htZZZIP;RJB4~>*fFs.ktOv3 r%d@;jE # ¡/p (zKβ52RqFk8i >1QztFp/qH.4g- G25EiľǁIВc횜i)|(Mܷ~ϹObs M/Lͦ۽6Ҫ/d|_n%1x!2D:=8g5OnabUR3.?`8o-%k){vßTTh`9Pk .&=?c[y\-ʵF֚)} MgU:~FPlęFģFZf"U8T.BGe,yU7F"@dw/T{x tf•/'Rrď{— ΊktW5“oWΖ>.ATS}&"ߙBp*J*pȳ 4؛̱``v,Ww:ZS8Wߺ74RXEIaWm_|F@Ki)9}ᕠ64UZ:/8A4>i03_7lpRDITq)ٖ2??g}{vYI :-:줓W47ͅa}({yw:$ !=QݷC4jK+ ג+ 1!zǬ)pi\KngF #"$DWvbyrra!N_8jI{z޷u Bs~IM>|^uvt\?OZ(50 \[rC*. "'RTX|JVj4 n8H&JI*#4hPw4>sN0:DϹB "\)G+Q<̢*3znkK8" 88JA ɝ z [i62|p2[ ࠥ*]qK֬H:q- hZ,2O9 =ՁOr¼w q@mffqVdQVU~$V*y[8HM|+th4̓ XT~f*OzX7GwS9W/pyø61ʉD xp Ѫ LŅb$2jKFHR5*\nfHdvM͂MڄÄ$\ܬNoN_|QgNu`lhEMa s8?'}qջڝ/ m.vFaюhQGx )F 򑳘(mD0je@ AxG< a.1*GݎjGk:pF1G9Q?##EBA8)& p)%{ $RnGnǎɇI>јD?"#σMeD y g,$pܔ)}Hb,DN':UITՒȇh" ))Ҳ]Yǻڼxjqz[e|]S+B6gWW6gg&LNlل)&L.̨z>\yYyYyYye]Tv͚k>2rU"q ,xQfф@C>FMZdP"P޵5m+鿢nD/CNf㊏-Ah4f'u6HDI EeJ9E4n4эnpS2DN^oE8-x;t ./%  k"rT\' AQ,F6g =`H fT0}xs(x# bPNحҸ|*yYgJ(D$ 7rm:eYnLÐr sԟ bLHg$LT}(Cha449Axl WfwpfpAHÓ+6!IeEj.a j2!ƕQNI%*=1# LMMx=Z( Fa'QjRN`. ,I _[>} a86xEbmJj8,k` ,& LP`i"~]*("IR<*`H܋XC3"LTcKf1>_3 U6&2K62U2D8@rYh*6T0MU(XͅN+YIR@ѥYX`a `[*#,'8̲,AAFR>Oc70%#7:IA+ +r eZM wG[)!r#lјNxrg5xntH#rd2a`"v9OXʜ25T``MHd8V$We!& ~.L] [xAUVys^2;)5Q U~rE Urr(ry(h1ĵEфIp:er XF`jaT\;@d+6go x֑JKLB5BTOR`Ei0׵Ei>NqΟcwÍ@Vo}Iʭbǿ0 YَP G3ܚƃۯ ACkT9_ =cr<+ͮhnZd Ә O B -rWi}ݒPr6-9U{%~jt}d<~OԠ2vPqK2y'C8 Ѡ[uí1o@x̛1o~fE"|D[Fz@Hw_o_g>C^>s[9R#<C(ӭ4,Zҁ#P EO+ɱDC e چ#p &nOTkRYة61Q`HIRLD܎Fc{1 Aw3n'6wƣ3%6SFU`HSPDr6b-冺6%&` H% ^N:L%,s UČ g#>L΄L!M[l0IR{O23wG&e[!/y> 8XX mVX8XAK=|N K;`.(!] 
3`R]T--vFq.7K;0xs`6BT <Չ%gZeB,Ia㓒RH ooSUߵiCu0dR3~&34%*T?wwt,'oF;1""v~30=v-8_zUίC}=yJf7Y}j5xUY0y58Wt\ a4Jh a ZIAtNzg:9B)մ%ֵ6,nXr/ .M 1u"`1U\/6I ;DHl?PבȆQYbpNωgt[.:hoڜ}WqNZ&)J6Έ|M$0P.# ar,>,k0s(f^\tvtgbCzv1H穋YFT᜙eM VTn+tϊ0CV%8'3w!+ Z`ߙT"D k/ \%IθR(}rVÏ5;bި<,h/Z+Tηqp?&Ať[fL>CQӯ[;GDV* #jw|6OWݝ9 p`gCq?sO<sz8;^!k0~~aoKXM;΋_EoTlvXQ)s{}x*Nʽk Ǚmܜӟ "EenIx}RzJQrk3!kVa$䅋hLiziX=mQ߿|m*VVĦUJƝ^΁GФ<'Cq]{NmGkz7-d- *θoJϢ ɾq"n-o!!/\DdRB,z\ӶxR-B6^ V3]6WV VB蕝өX7'd Ι$圐hrpPRb# u:J(}Z)sV 5FoBK^ &"M3pFGʹBugߴZ.A2xw-&悞Qb#:c4n糊t{j6$䅋hLqi^+RV\>Jp+u>#Xh6;tȫc pMԽDZS&m>-o6{gBNR/;A!mW|U=+F=do5}1;d˞;mtBl!-͡6^?98*a}Q^;S;+ _90?=`ϒ |XTA-p \'a-fcw 3oADU` ECgowAϴHg¼0Gn}W`): ҉MYU0ߐ^bDi<ϳhyn&csoEMrw8ׄWNʹ GxVܖzTqҧ]j=wU_ ]ocJuDp߸moA'LőG#+xkX61 *԰K><̝'=>y>vU{7YYΨH NùW%1ZcM*ц~7`=@aՕ˯|y,~dz?uVw,$A2N4T+1SeB`&ArsNPZo_c8/d*j:Pj\%:/&ԉHd&*M Ij9$5E*M`[~nS|=ܑ~Ow0\uo`a5S7x&ZeW9C9̚1Ŵ4ʈ&}-fjN)^V-6ՌXpՄ\͕fh)WB:VVeٻF$W%}4 ,? lwiF-%R&vK%(fi7ܲXȈ#"(d,%Mc*撫g7/X.Sh QA09~^9>qhw[G)ΐ&1b歄sdFw&e3dEa3aBϐ I[+.SƁj|wnh ;B w6 Ux㷁t]+F% @[!S5˚^LO}m)CT hPALnV3=l*ZJh?YS'|Y:m'#liq!Ky1ΛhЗ2]$Ak8_*K\#mo[IU™lD1;؟&'ay?"b'9#/#c+z.G&8B5k?HxTD d #{)LXq0LHR;GQwa+y9PɹN xAZx<<eMH;E,b{lAdN`e#0G2̉lRM^w.H&tPu8#h҆w)ef6^4~6LG&BZiǂ!>Z^1b-70dJoL#qL^(E3FRòQSG 2,S)0j<1U.r.l\14zkp,Fc80h YS"c̶IptsY1Cɚm.<趱?!>-(t-ۖWHbVK/Q`{IBs052"\J Q05 >[#րoNW |ӯ)`u>\+`џ,LWW_ HQ-0swSM8Vݔ#5 .5W) b0]jp^.Bej :5;!#1x#dxB Di4}Q6L;}^`"4UW&)Ye{sݥUZbQ-Tj]|S԰MZYiU"(^[{68VTKP~*LjD|N3Ƒ^LK :&:2jgu^9K*PCu㜩DŽ5D+to-5G%@t(]ob-Tx{`La\ZU7)FwKg& #؆IS~\bc+6{)}[G `1BGRQ=^y](Q S`F.{氀9fHK& u4?5`3ßd ;g0c[2 CPam\d(Ev٣^Q1d{8^ArW|/7ooWRyc?;;uTHWbNSRufX{0fh~e>BU}SAQ@g# ƃ/cQhlb(Y [o z7wy(EM(w`>  5sMNJ1xȂ^ϋWg" (%:Vl`MWðւwVWv0 Cv1vPdB'nuY8ig9Zx-qs]$37sb9,Ȼ|(G-l9C(ɶ!b*ѣ,\+ [!`ض=ع1K?X?Hꭁ xc)k{1\%ZtvC:uq/B0$%RCbht?au]m 7sRټ0W Vw-J%$IN<2QyVrWGQ_WAQ}rjE$kVCA׹wh0$N,SLh?LN!18,cw=wɩ&Ti7wZ+u9l{^ʹGi%+ݕ:S̺pN1c`'S)ˀ<^A-Tpq]@:dX nae旧ئض͇ 2Eg]<:Zv2ԴBtG ً˫W$ygV$Y.R.*I:kD$jE9Q^H Xˆ"𨬢 ܺ[)Y3!%ϔRZ(@w3$jtn`TPA>QabE*8ײBE32$XpIH@"hYC1 :b.Dp0epy*pbr_pJ̿!7"3 6&ö _& 0X9L74,6e~e^*04IH?>3fѧq*կN~J#JG{8w`U*j֯Zd#B& dgY7/,{ #D-7~P~@n^ćVUT}Z휁$Oib\/0UN!vMTQo_-g:5+ 9S/b#^=n"KLȦyU%#7uo! S')/Ip_FBKw!*SFuW(?ϗ15!2@+O3LG4UN+/wm ^ꂳF315oVxً^HC*e;ZЃ[dY̼S!?fw~g #GKv\NrW 9Rfg;5ϐAYvł<^Q=z#s_z\(b;cD6M֨PC̞g+E*aUI%]4&K%ͥ8K$E]p [+HuMf1 % }x&;}ÇĨ3 tYϥAqzrXp+~1Tx yOͽe_[+T12͇_K64Hs?ve=!zh~irgU \ e.URۡܬ!wjNRf"%ldt9+?}foޮ}Nee߮}3Tڢ0)_MxwYk;Ooq+)m"/`1MluƅFedb+~ uy J2y?=/~ s-F7$S[ofvbD$r<n";!LeWG7ʾZ1MrŞH3%7DHLHYhAvSvE[gjL[3i)o$XaJ:{eL#r0+YdRK!2h+Dz\!4jlFSt(*f/ɷ7J0!)ć) q`LdHA$ 7[w[ WaBSodQ-:{LwPG PUJ,DBhuEueY XBO;Uc|'2 ASi8N>#HKU/0Uݔ) mW#(O4)իX,~b&eq'L% 3|d$s {Q6:IvCp4iC"m@N6)TV9<1U.r.,8w NhtSwi[|˓fAuZc(aP2Wpѳy>7l"Q|X!!%Y)l_ )BOžyqIg-{ݏR?yt?͋!pcS훯wr\@ʻ|O=1qZ,hwvD)!wGh{+o]r*g]ߪpEmcSIY,W~b[_RoACޡS=Ro@"Ҍ;n@pHټu!`vu Hfe ^LwT VȃRŐX\Z?oS$ ~uL <1rމAɐŠȵ0)@I7ݾn$SNQνNG(!ٰ #eΰд6]O;YS>(:zІKbr2lh!EDEV%OYZ ]#FH وU kܝjو:`yz=n?FoRzigҳ&}/_ےi^Bk~@u5a[S,ƍ8N2ҢE7xSI7UI_MۖpwZR2DDR4t#Oq3z`]̩l sՒ+7Js˥oMD$iݱezK%q4f#|%HxgcD uԺ BCh4/4v3e&4%);ﯚ1&À-r4S󫣫]k \jxn5/7=m/k1JKO k=17A6':$;^ ͣ}À0i3NMN98:A,dk@}v'/Ds&=}ƈ&D+ݙ\ER= X+JK6X?ZTu:JRڥOg1]WƤH153ƣO_nB^hE R ~- 񺗤|ivvrBFܰMh+A~~`FUK;FM.OHnNPߺق"Xߧy rԄ~ !!Xɋ1 ƃ~N>Y/b"R&A*G3\{%mU+1K'.HNRZv T[a)<ż`BIIU 2gcڲȸKLG< 9Gw{.lyF p`B)٠;ßP"=?)᝭YA{"Z,9|Q$- `Ҧ@<.5sPDۋ;WN|H[>Ga^ugڥnي%XuS(iRtYy&R} .thUn 99_J$z6gdiHa7@G"k8^OFȌ](|~4+3،j:̩,lj 8C\toFm~zz RFO2(0 :U)dKDRx> !?[fNvpxٜVc"ڟV@YPoa9o UGiX[]%8-a>1̋f8 1{uquaދ5P#tX|.2,26X0x0=XH)Bc{%)3Ǜ]Es~l*\\v5Gi@Ww{PISr >yA<לyqHWM`N+LDe]t/ [8.1E# S,FAH=Т)Y;Zd f-r0SNc+X + {\ g$&[KV;Xdaa+M׬2>9Pou|2*46hiX8G3b$9fH4 '@Crw+U1"<섐[ 4}6sZV^gĜ)YWӂ'qr; ]kO_'q-䙳\Jtj ^*sQ*>lAqY|J5=SG'_B'_q߇Nll!W>ύ,#U! 
A6뿅?3b4R|Qp笊V[p¯,q)װeoB0QY'Y'!BK >W!l#(jQX `BEX,~R:^<ӹ .lƳ6knUB@'"DIE$V,`WCXjA y@IJǾ̲&MY&]Ez^I*ZIŕږgFxͧϣ,cm8&VLZfǬ_vm~L;IvEt2\3]vet<der-85 cH4ca] ^;RXމv!֣wtJt`꽚PZdػ0pYW㒘"x<~{e 4ZR lWtK8Isd$0s h7?޽zۗA%G볫Bڧ. ukO If),ɣw}ĄgBDבא>EJeÍY^1V.ҨV!SR /~l47quor3kh@~͞\G 7H~fmIpn=pRHp0DžBǞHi_Irsv} DW77ř-8}F3N3h֬;7?}ִY\~Y^܄6gNI>KϴIǶ?]:ٝ:3$BRfRIDWe@ BU#{w3L5[ws qiM9"q { nR3zt򔳺8D4^j]޽דBiҽs'ty\{9/⯈Z4GřИҸS:@SQw`TQX&z3MXHZҍN;QL7ۆ7\K_R-0GVY 2H2X{PF,Z4zAgzH8S y^r|pLxR:5bЖgjW,l)!:=F6Y\=9^=qjSU>(ɡ'!,ݩ} ?IFm:m-i ]kjE֤. +{[ mM]~6Hs|_gJքu}ZO5Wа .9'׈ii D7zA5#䓩1kd.Tg5NQ< JYP6cmQ4x#de;c}C3ltQ4JT.FĜ͝ q%kF 3 b!IF(W0ӡzB|Xs/ꁿ_8{WtNߓ(]0[`࡟@oo/q8z\] v;#3}7c>PcϿ=tgJQI/f>G7TQ*+("hS%peIڂK K2A5*iZ!âW@Z/$\(ӊ0uڹ#hxj%xKL%0x|/eR1꙱ wf8ZD#m!Ob[4ǃoC \NGC~,,k cj:Nf˭f+m#V1,81傌2$&Hj %KD5s'HDpl>Ev~2 ~F|`f3A? xǰ }, bq|9fHG_3)#q2)#q]HXF)c!JE€#=8[aA(r\Ac)l-!"!} t*ANj{ݰ,=A = 1ٖ ]/sui\% Us3J:& m/_{!>Y䄒.iTx6SDUaDtGMyTRK֩tsxnBu9Oؑ<kX6\loX؜C9kE$wA(t8wP*gkus_О^4#BۃILj+Gh "jhetyb\ֺ̮<}e=JAo>KК}x;~Bĕ"sVN^) 7@D!|L{ ?hvu躟Jv{?3揳4CKt >>}5>ok~]h0swqp5S#%xiy1<1mjMo݌<>C!dg++DK1*to!-#/c, Xk]?Z89b76 G0"; xJEſ'utXP('㥁6"_`gU'uxI7 L,+! 2 7ZazX9JlJ2"N #qa-Z"%$|*Z-A JJ@3*X=CL.&wnq>4ogߥd#ȃ>}2ITWΡ"?ަ0~NgI KJ牲kҬ mf}~~̕ǔ/ljMħ7`YDIB^VR8U~jZN,ԊiVEs[Ep"T0R_'vLu8 H.ZԙTK@'UAꕋk/ DGP$sQ)3JqQ+īw%%6EXF Bx4A R*4&KVQAuO=@j i0`Jq!KQLNOW0(XQy/4C O|cD!eVq≧h/ I 8.QBEdd4\U>MDSg` &B vfBڛ9\mۛ.8&]x(IQ]4;*읿uR(ʦP*xl@"|=l5} bh%2F]{Yѷ1,Ff>8_%\ 6+ǃT0}.srvXNx3>jIWtж#SjlQl,lb*Le?kH %v3V$uJX XX;Ȏhx3(ghBbayVer=Xwgkw(yŞw\:uW.?GN6g_6o2#(\/gI謧RyYz#AH=_]JeWq:W֩hGթaBbf(d(]PBۯ{A nh27^ԺLRҫuzvj1/cZ v /)fFSJYt)se6,|#4]wB wznsDQ ޸pM*3r%B> _:PG*J=oS&fFֈ DxRLca7M@ƌ/@Y^%<ǖ$A#VJvΥy^l'j5xQ@HxÍ|j[aP))fi"`2z.;2\<4I XOmHX`X4ŠΤjDXL?Z J3:ֵP[ oqIRpL2뇥%7<|v)}#h)_?ibo󞄇 @ ն|7'MU$FxZHi^DaӋ3Kl7i  RRV䥦jHiثz7Ճ@vz7_* cB|mhc{*{alQzK񩓙gF툋6[R1.0U p8]u2V|IH8T~4,Oz Sg]o) ݧO#PJbO;gBr$b QDIe%*0c07^YM\v fzzlu)3Jg:< E c"[qPVe lAGOCy ڥgK Tu/͎'31̐/ˌdBPuzps df~aF2JQ d+""8ٚQ8˞܍*dA T{QJ[Yp m n0,Ʉ\Q"3v\'Lc;9FQ\X53h lgK'1QGŃ(= LڂIh HA#3ؙrM@`kA{\Rs 0AӇKEB,.H͸B!uRAttv; i*  `YU&q.ƒw:lS?gﺑϑѪQ'SI O)e+Rg3W_ Bq:lc%Kk9 ջ<8<F{*7)gE3+,8KpQ"lX\[7΂ͱ.Ĩ%6rSS\@ؑ$ӽLc̲]T_!iOWN%08R:08%7Xt(A[;% *0 ѓ})qHS7r|[urVIs3\݌,]Q! Es3NLȂ:b(.s2!BG)Dm{NΕ@)*?azXuXMό[7/W&]:ȸ?|<9ʛF׭?F O9h嚣Ys9"b"5]sT[z_[ogy0ˑ魟6V+'8++ox f$»C\wx4Q?ދW ;^o,\{P"׹wjUr̗:FχU1]eS:tz6<޺,nWjdHicpV&qBATh$hf ʼҢ_qm卮-U/Y)k :2η4I%dx*d>ل*%x'8)'8ѽa2?!TSzsS?} ?"xOsQO7Or~ aHsǁ@]H?}Hṡpr"Vדr}2SGe(Bw"mzcL"=Cw!t,&x 8f$LjIӝN]FQ8Z*m/ffeKXyO3N>~Ļ-<ϏlpaG~ ^:_|=:9G^<_o?϶maß谁+$pO':Y<0wt.)&D!pmV3}s~SqsA [Un.g4)c=&lk5w.Ru_79;u$XZ49f 1k M*MvȊ B;=U F +Ʀι"O>P&uM$LB1iR6J"a\ g)G +&&UJ*Syag_0L̲ &2idA`#q !p|?DQG#DTcBld2 <ǩ|dÆ{4ZRt%$,|ۧNHܮv,7/`BX,25XB)糀WgĄҖ"}:+ iwppfWUn:R?h%RۻuwnJ⴫ƍ_OZj_wIgKbhHnE7w#lM0jzѽKݙ;>pЬ~% }&Q?7! n$yH\iK)ޗ.}˿Ēv3Ufysϻݙ!Z8Vh0ӱi:;o>stAls)@Ӟ%ѹ͍\p9jUɬ :x3 ]$^:y:*%F^`ՖXԳSϜNT)=2tD1.K-3M]#=6 N1QppMyNB1/ҮQV[p^?lXDb5( M(BZ`5$'=MXGV2<: 6 7i`L@Tqx[rşLY\9iHZHd\B"q10#l%NHzpʐ-8yX1`>RF*ĭj8| 0=Qʋާe_{1 F!9`V/$ "&Y K0_F.<"dsH#XYTD3k74`]+spG۷|v)Ø=5Ýa06V#},߬jLɧ@ټCi͛'M3n2ƥD뇻pQZo`bdۏ;Xg!+Ju͊eKf(5 <3N;ny;.qT4.A>bfw&t5~lRrnOA`Eq;:(XA9i`kuU ~6jȄ^?CQJ R3s]/J]{\ ΁92_(g̖5x:gudgfL݂}rs9M no&g5 E~ʓ& ܡhU좧&fr9NQB<6΁yG|ugӻԦ23o6K*)[N2߮Piq, As rGV&XZQ4:B\0J!dЈY mr,˕)O4{,n‡b'RU}FۣJ߳5?{ƑB.Ge@8,l~ b$/ ~Ze=b{_G|93(0bK3lVw< UǸ%y ޜYBtX0k-.) M@." 
V,hMbQg)X`}MZ*b"L(X풗Jg%f$R6ŘqeKTKLb^8,ӣRXu{{Tm kJL>(}ݽ锵-(v ApGD)":#i^7p@PP $^2HN[ qi3)gm4I48p1B~Хa'EX\朕(]OJ; 4Ƀ dc) ֭Ѓ^`h킷Dpcw`ɂ%5HRK$$K8."DOB_+lC3";0,8~x"h E1r -bάqtaH3*u+dnM@ILsa8B2i$<2O>Ҿü) Φ]V`}+RWZNޗb'sIhwL߱~03ҍBMbݙaPPwZyRX&Rpwn\,<`В^|HK`;MiN-~ٍΫ;SO.Zr~xSi~48t6t6/$Q-M2&t{~wzT/iWw􋿦/G% gǢ7d+7N=W&O˓L_Nk~9z3$-㔺]ɬ()lŃur]`/ntWrʩwGmcj8yp/o9=H+-, P+iS?yj>UebjkGreVڪV(ರV8$咵cRc*NޕƜXŽИ)dX )Y۲Skʑ/J 6Gri}m.{1 {Ԏ4fOk|lɮ4Ib˼*  ^y\QrMNzYDCUDS*@u2+ߊ:,_CZri$1ijCl<3bt`dK"Y&Hi?WJGa%]^rFus6 Mގct[nVPUkж Պ>DGo]ů(׶VuŇhC\ d׉ aeMڍ^n iXWڣғU83lqfG3Q^ٱBΦ[\g7t4zw5Z99`Nw߆qt ڸ d5mmܽGEah-n6Ut0Uж]K[Oߘ4nM7zyYWj8704֣Xʭ;[CTYʹoXGTX_-Ga2Yū4hRkY7w9=H14r>81bW#rpy+&ЗKBWGպ*m Nܳ^A5;qzamI :{h$v"(뷟{{Q-|79S ͽ a,4RVA9MXPކȆcfBs)LFM5XWҙɡQ.9p:!Qdbl>~̬pwBʑx[뼧ߗdʪ,0Z4z x̓+5՜°}-.Z׋ -d 08WBTELWК[ϹhYbZ̴muuB e!QS%t~Q||/޽YudY2x?vE[#$ Y~/P+A贒t~3mE"IX^x. ش)O6ѧˋIgx> R۰<#+buwV̩s2EOSVH\)*dh3KVOy~kG,=-g"WT>OarI rsK_~}_)Ώs|r 7RwtR{s3)E K2}ҍ96*s' 50DlY| PJª :\&.."+vⲸ bypؾm&sKi%7s#Wč!EE#D/v)[ެ5j{0_~9 qaw7#P֌ٺYznÛ]K FM֗fI9KjV^AWþ~& wU~w_|5&V@ݠAKUÀ|}jrݛ_ (>{g"ZuV>w!؅*{o|g߆jY+hm!ȖugХ^=_m1  !4]7'wY1ufDZ]4aY p򥑚avde7a KIA0)LȽֈ1f+-kK˸sLC1/aq%1>Yn3ck4 <{n-5rв Ѳ-y5.ʁ1R1+(fww2@ A%mdHDAK2aAemPS9&,Gch1G[`BҳgB6sf$kd;a=$UW ѯ3Yލ4k]rev*æY!m$gZU ֵCrsAyqG1i{g-r"_A;W Yd7QNH/B0g_u @{uEs}s+Ԣ1ԂK*Qwuݍ4+~wbٚg܈fcεղωey a5E+=JLYyTit#:gߐB]A'.TQuT h2;ƅ4-\Mc+܃f`Ʊ d,;k+dO32f4zsrbMO9SQ@!Q$59@e2s:;}2h<O|x/:cp*p ]grFߺhB,!<&FXwX+apwW+(Jө]+6ɹSv4fe'߲NB U߰Ae7Ta߅QH?}K6ͻOdbwmm#IWzهYHu(B=p6F"^?x~*G>L&"=.[ut= ܻi0cV<7E6j?|c~Ko$V\#*3Z &ŵt Ԏ񏓱H)Mշz=$']o[3Jz1;'HAR:)Aɕ |z5D^N|:?5;m  ncK-bJ"A.ne@afl// <Ð,I'Lӽm1.aFAkYSfl)9{oZڎ 2Յ6?o ʧߨ~@Mo5u.jVc%cW"Px%^+7roH!ϡp>\ٓ>YEZhC3O:sJ2mj7vAԩ*v`籵[@ yStcNFVaPrD_,Ood2M-O-R2/Uc)aàцQTݦk.y =>EU1ZY,3>saAHHE܄Ԓ_J|<lt#PPSg(A{hAbT.n2KBF37a2jJ‘E}Á)(|]ec@U2YQlRS;#KP*(g~K`avL6O&t+%1~HjC3z#k],7 A{ ߭Tl8M m{Dˮ_>?k}, ;|ntkƱAh{0L,SX5!p.IWWV6y +hyGk%tW8 BOe$Iu2D,2G%"4FG+I|+B:XѤku߭Ǡe^N=\"h;ݍp:sq7p*Mz )0W: E.-Jl0tSw:U^)3P:aTrR2qAȴwCg9soY w>E-X٧Qn|wf8޴ay 4/f#Zn68-[ ˉԷL}T栘eiBzIbBF( H8؈38qF ͱ)< J|2M(ACe9eA,TJ͍!sX*3J ,.rEZT%eI'H8LBsYR+S6X(Z!`@X 撱F neFٵTA/xmL Ձ E|DJk(,d ī%$}Op6~›70QԽ~0x˽!.CP0hY9VTR#{ڸu6MJNnRUu^Hy %/zv -!-VK*jq%os}ZNw͂,nb4է%Q_rGn-|(4@!GXtrѭFQ*FC4 ׈wDŤ(TLZ¡*sb 1j@@+,﨑^cY~1g BN1WuvxFҶTǬVe8g"WͭO*V)bIFdj-\l|U=E<ӥ&HNڡD)MRj# I)r" 3,18 Ta$BUPVs>V[MfFR53Y[!ڽKKZ e&/9=&EY|/B^^ ⊁ rZH%/n^SN?ßn""A8)&_^>Lg|8>=OqR!t_)fh9i/!8z/l >%`ѭ>{zFhޯff?7Hh,Y2aTjZ@5Yɯ/(tYw{oZCDlUJdAwZ\:3,bEogҒxŗB).^UҲդH-^Yҵ.Q0[[Kps]d%J~7@Oj%%_kZW"AaOn61O`Z=WK@\ hr_R_7%ǏU>k@Kt>-:m}0`<=q3_ *F[t;߷ߟs^ID-#g{P:pmsN)J61 \ $Jm0|mUFsdgbH4 fB2TT#UPǹ/CYij79N#<ϔPpC b8ԀRÂbeyic8s*@=4Ҟ"X#aXD5W*aZ+ױt8*ܫ3s*f??& t.XA@A6 jhfHd- 51Q{ ˁa4.zP^WZ R3$&!wLmӨvwQ .iºm.o.rQ{{gK% 󫒵&:'l* [8>Lf3-ӧ{t^] |Ln?6Bzz;[}n:Wm'5 _. vے.FoDW`1qmWu%E[rj|1*/E% (r%Jab:KqʹL;u3 n'A}Mg̩~|8{Y9|/|WN}#$KT-f^1F23c3|ֿ? Fw5rf:/<Ͻ$GV)`t2#x"$EeL_:Xg.QVԱ$Q`*gјƉM!!}s__uoJx\V6eEu v+:ik{M4;3m{J ⭗|QqXK: W?{ܶ?ܽlɇm3֞8igx@%Ef)Kˢćm:Z@wB+U]{ @mlr*e@Y*it0cq8 Y嘢*9JǜBajF!<-U{NX+"\tt cܢPxkWW{0%*CulP3yq΃aa M*=rPaY\K K.v),q\QX0w fgcy[K2yz=_ߛ;L{؍zv&ǧG ]EGB[n*zw<\7"iwv<&~a؋}M۟1O';oޓX?EؽfA_c]3`/=3L+sI|\+d&w=v~?΍>ŝ{'O\Ɲx?~/|zz9x_O2WW۟^tO]gD3ٍlWè+#]v]]tyW8uUW(*`u7 Zs۵a`?fg&ɻ}ŌR~W?_OW7fw޻ v<KWn͙p?ؼ=__}Y#nz `ŝOoݧio7dۛ^䛸x{>]O.|I=7|")7l(@7p\ۭKoz~{I?C3^){8,~Ez!b?x`xٝ\ʌL&=Z~2M7oZ?/WaDw.Lq|Lyt=g/p>aP]O\*L_\SؿOMۏt? i&%q"8z3_F/qӛ f8W̾{?OG$?`DFx1~{,['*q?O=ddI@v@fqTҊq;O变g !"m:l,p}4pV#a^,TT+Q9F}%7Nugy&h`[xi+78m2l}H=s?$LJ0zb.C +^y!$%b.znԊJ[dX$2d`}; 'lJRDfR0+Si5ڕl]: `Mv`7üʈ@'D1i{gW$?,eS?}ߵYe8vGv8oUʠ`2'RMJ."ea%V3)Չ\r*~ -Aj~|i`&C. 
Pz 7i3%`93~lN,IA$ϼ9g̐ͅwz=>;uz׷䴕c/L\"Ҕ}i6CAsn8h mY$ 7l7SvѢ)x,&qen+~MO~IwCR+PVmn];7;稔Q$GK(-t?(H(ñfUvֲ~1FB]*$*%VGL=״kxT F0GMT s!U/aU,/Q vc1Q@9 a !NHR2 Xc9@3bZac$XI `T 4 `,YISP$ևf#1OWQ (jixÇRG}͉+IuтhXDSF &&DD#Kw+@h)u같V5ۦv55V d8vV[R*i|nJecVY>țXJ`܃C`<};1m];7QV 1դ5WJ8>FVNكJ1.ϓ]ΌEEZ#wy.\&yor{̃q}}E:~3;*[JgFR Rp*?m];7Q.Nqy##77žw"y##:E:G cem)̝ʽ Mk n%H8K#Yg<3& Ym~%2j U *áSGa௨絾*Q|VQJf־JGDziZP^# ~k)x9A? Im)y{Y#Xg%~ZCKu$MF[gZuR6l9ȄS̈́EZPq\ʶI(W)&lfV6:lyI02NzTf9^\|wyJ}q_nm?NUd,Yo/6$#m&,Y/^Fk0|0*۳Ca;3q(FcfY5Z4/A*/Wi8GimћM`=yO>bܳ\_/f! hƞ{w$r".< 8^]q~do`FV3[h<^D FYA2Z (<ûb '~#(n*zLTֱSf 'gӃo25]5] ؗ))̩CDzIGeJ1|bf>U,2 =?f@96e5j(<1$sQRT∗7ZW~3IU}?D v/ΐ\]JɘOo:x:dgm< :%?9\_` FY%lYZ T{Au~k`R`3=;CafT>FI)0F*7]wGIBpͳ7y?B7TU%,o*|-{XA0 edX)dЖ'ت*YTLfgA`CdjWyM'-"+0-@// sLYG)!#ڜRf3$j3˕N"Tǧ̴׼HUv@ʪiq]wSw! -h45\bHuuH`ٻtA.׺n. .37a N`XJ!(>k4+'y9\>{zDo- kf2/\NI}8+e tE䄦:f܃Ll=ϔOo~I'ߋ 'GZOyF\Ih91(Zhd@1!!:B\MHd dRG( OhN!~ZG@ WWC"LdP hCa(G Uܙ"ƌi`+$F:킀j""E#Ra0HŇ%FQ@HG:sKXa"9F fQ@TH@2jeD@ : ?d}g-B$e e,ޑX菭 (.<Ofbi\V}(_ہ'.y3KsiIv+e >jbmZpdփm$J5@c`p9d8XIaL2Ý &xM7?ӺarGATUA/20[ ={ xr`I֤2~9g8Clq|? s=ف`K2SC%,NcD2! -B`&R[N+X lY>=TS<^i==0aT0V ' ɟ0N*a<6ަ[_-b`o'cxL"_|*x5YV8M"*ܙl7o{QB VSKry$"N:y;A@ыh#ܞ/ݫ_x2ے>'+n'Zu{&n{3Nh @[HݭiтI~hW: W:M!T$n5v^rjY 0Wb[vb9Z!܀4G!H>Dq`Q",(Fan#Zj y^պ ҊF٣K(oz\pk$+k +6ۏ(p1 7d$"0S'߇vEH&MB̞44UTƾg Cwe) 2@0v <ر/c4xzKR,YĬz]Rzm)q4A<*UبpĴlB%L] lr=z4܎~̧RԆaI1.gML}]?/> ?8#w~ݞ?9ggeg5#ݿ{")ŐkkFbRr}hG_P|!ߕL)g4;ȯt/˱^?JWH[`;ꊷrv5g51?"?.H]8gnϾR&>0$ʁM?.չ>kyLh)vUqL$//TQhnUx(dYHC Q{kaI .UL< 0F!>7)#5š__kAx ԅQ@Vj8H>z^ÉYR)2Ў}ҩC̼!a.B$YB9ţIԇab$!Br7Jt֖tM ZBf2HM1X%O˴RBDE}D\2-d6;,aAE^Ĉ%Q-yTpu2?pڲSs f0%m,0MjIh#Qc}+Yj3}ܦmJܦmmsk4\|{6sH DGQ6* Nt,{)۵0RX#[|Q&e2f7c|p&a tUVBW+[%G[#ϯK}?~(E2ӛˋlh·v4~0n&)fК|dL 'bEޓ b&)/e=?-zoJYw73ln楆dZnݞK=]/{rzbb}Գbẋ>6gHJ̈A6SU4nN( RqyGJ]n |㇋R0ڟ 8 zUA0[ŘLI5Yω)%tsVd dz ݞNWCo{0h40YvfWdVE*{60lUeդaHrФW2&בK2*ǕbV{pW.,Dv:;RѧadǓWz!DGО'u/$,,I{ʤYlxv\B{1'yWRlBQM(BU[[\-&깲H}k \M񎇛5c.\=ݬh/zT ]^>ס9ȟꌌ _ ??Y),ɴA{+s#(=H>Ma Ng?^ޑ$^~-EwyGO ӎM!N:nzrbnS}DeP 2x-ཐiY Wd{; ?.kiy0$ !QHKVQ"SNX4%0@-(D33/.hF(:zF!%\,w Ic䟋:,b$hL rKg!)RVY%acHBF#F3CoTցɢG?C,>({<帺]HG^ެf]])mn5|fpl?{R1r2=Rof:5;VՌrpAU} M6 kSkZ[qN|zrbn@k*m IK7g#8[ ZHf_iVNfYMzv~Q6D,poc0dˍ&|"G  7Ռ3h>EK(@/ ߟF wJ魲y wΌ\ Xj+dJry!0ൗpaAeǽC6)͍Z;g6?tY!z({u}f]¦#p\{Twu'7shz206齺^S꛹74tĎH¸ۤz-nj_ۣ3J'b$)eRE) ʫMHIt" gULF~Ӹwu/NRjvCÓlZ}oa@۳zzrr&c 3 := >p >saU5.yIh dվi+c }f3*$R.\M_wY}UC8OOONӘ>ڠZL1c5EOOnl P +؄ȭ˧'wO%(jQtj.>)oL{N$p44J6*8|3ieTpK:]{DFD _eEGpx%8,!;HHdϳDZ!2Hk K=fd ҋzE$U%.z8QJZI %OL?ĜQ -Э!$vnw8d.G9N};h['BMH *u- H"'<k-Ke=<\h1o ?ٶ]#eZnEz1D\(n`hxKmPR=۶4Yv,%mYغ{Zt=wOBal+`+QJVY e!PMQ0'ldp(Ei,'c81~'.ha]ocE ֠X6YA$TylCA_?1C4lr8@1E!tNHIk`bQT~SlNrH+u+ Zo2dpZҴ{lBWNK(q"5B뵴a{bAn#5J5mk@r^tG~n~\;;ݻiz/YcK?>SU0Ǹyo/EX'wpZCb:9tɈMiw!( YRYjWWn ϩMwG_~,~ؘiwZ=/W>]Ide\nӻټfe3hۭ䒶q#"̗N xA]܇1`7܎=Io~fԒGdGSb^h7&Ll*FpӾh?o+U⢽o]jujڎ<d%PQ [*%vSHCiPJ:x@(%#5ꀎ%kg G165U;IFQe`5j4 #QĤŧn(w/uQ )`c(ئzxM$Vg<|`×kݺq '(6ߦS.}[#4&1??[lw=+wgwYi8߉{u}h-Sb"7y?]yY~ &r23*Mbۖ2RSv*XcR޶/nݯ|$?ז3zU5$ynϹ-Sq#T+{܁bQƐK 2VI^jjyNjR>j>7t{s;tM㧿]I$"P֗ӟ~)"iJ.FTrIRemL!ze! U"uUBHb*Ga_t\QHBG`\U 0Ŝ'V% eb/T ( LEx@,q9(==Qಗ;H yLNrF%:qGQ2 `~>֫7EP^hbDjL[VZ r  %D9G~! j+MKm~2ؖU0vH8KR7o}M ; A|(M8v.y{ U}EAUD X B2_ Mjc -= e d"u@AަSQPP BLj<[ԳE WAC\_@kT˽dU\W}s9JX/o޻fzogˇOOi4"btP B@%>*؛ے˪9XgbY. a#@qT6=T1d"rRooσ`'q]j]vE͡L OWz>:t9u 9%uStGiJʑ5u,J":=r^Vc襛 G(5Rxڳ8,ba⬖0?ǖ` 1h]QU#La`X>ѵuRGk3y\ pRH.#87&3s1:1s)ȏ Ή܀΁ 犜s19ۓ򧋾.C~ &\ O!' J. 
Q埡~;1ifFhpH=^ 1p;B]te`z6V"|jtg b68aP ki}[ tE-4 f45V戼,e{Rl*Դ/64"t/mIѨ/jF(gZˉmJRbĤQ @I ψtguJpT&}[7JWhVuR&OgvĺE+-]iQJzWGSUr/QB"`s,K2 DSc brI9G9֯fo˻oWU pff9Ӻzl61/?OF *$G6iU /þny{9_~3zy}{]o_}]ڌWW/9&8RB-gRq\nn"s'4)0pK $'&s#rDNV:BzU1 P RXEFʬFZH8^Wg5(KmDa O(2󲭂(hk8-{T 婼`*ɕt}F DOz ` ܃yɈ&!blyy`lF3`i) /1-&(z rDe:n. I81 FG׏U!g*)K}rI9(]>Ɓ LU~%rٻ|@8NJqq+!Gk6=rE3yhc$ .. T>"# sdUۼD {g {` dcWgcI3QMJTA[hMjx;nq_虞̍ݧgڑ1mi5ib)Ў_mLLaZ;'H F /:e opC*ITpFP>c.Ty:J y>(U^ @q~m(uAyJBf}M<< p,:}#A@D>J`"T$^>6A9dB|7dM[J@ݽzsmm,m}{}ٗh#F" ۧ=ėr @!bn1) %!CNo=!;ɦNdC*--B"B00L8#!t zEThفk=gLVF8&ۑ(/"?_J-sPAcfBe)EJ r  %D99J2 P!6tΔ`>.e F᳛O/i;|]Ǝ.npr!wzAw.àR]^ @֯56Bjk]/`A0Gv#&:6t.TV4&u.L&,i\򋟓͡tL Hzi ³^C/ r$l\H(UK즨1s,> 1&P*dRN$9Fbp>|ϖx@F}Lʹ)D&c9!1H3@h4GF0sÙ-NЄ+J%i+31R 7aam Ȅ}RHdv_~S`Zoww=`tz ֟}qV=rr%7`' B+&رy]kM}caۦaz}- s0F쑇-PMM[OU[o ݫGЖ[l9(9d[H0hm]+M.|{p6x'D#4,7GRZɑ[b'ROܶΛfe'| .og+]í٤]=w)~DcPv9zm®A;v ʇ;~dY'Ytj:mM6Vcc\ո꿶s ?6S֚-OͫeAh3ԗe$KaCr:m {~|L,g$ %j H)e A?@j +EhCQoJ(),c9PLO"#Erq`@3sh6I\mb6M o7s_iJ 9yA3CgY`g(B\" jF] =Œ 7/1#gN XJ,˹T#֒IkJs!3c]rH!1ps6*$uY,^_u VJ!wq> rLYTPu&Vd{./ki$I˞=s ~ABH>P^ئ{zV7l 6d2F42ȌKfd!lP4XzVQ_hKR !# \+sQp ̌er@0Ec*(i$H/)Uy 糉IERu6e @2n+gOIIwwׇeg0!ӥ_FW{EUk/gAM >^^_Np̥~GӋg(}D*ڀB(.ON DLof.>d7zF%!B)N7I0zscvZe0)  E{5){bn7fvq:9 Sٵ& (֢=^.{9=FK}Sj 8:pwxl$ǞE"u5I }.kVٞ\sSjfՇd8=\>Wn#T}|%5 -fN [[埊#Pu aW:)✴>}F񄒴#|5JsK2=ibΰBymbA L5"9؋5aPQQBFfjQF9fBn|W?FjGΐŚ>6yrڨNκQiú+F9 BxdUle1W&'[}Ukҿ2Y>CRn&i$;! ISfXKx_: 0q\mH3}V #ߗo[Yd̺v coL.:{ \MF+[qTKZ9m Z^u\AC󴞵XȰ/16e抽N-7{3DxVXxS#>xc+UZ9:Rrt :YHbRU[/]xpb[/Na)]ew\nd 4}(_tܙ@–-1Hr}H@nM?moIBJ.Yy閏IuGCUV†-+.p~bj㯟l`їU϶N ް ifld>jMc6ErE%2iojb:o%b:zCB/%j-z9S息K\ZU ˆZ :.ǪULs2X lh(0"g' .("2zIXhpŢJ{Ɠ#>":5y՗CՀa˓}jF a|s9]*XB~}[RT,-uW`E20{%vgExLNRs-Ek=Alk}͗ vA""s-oO~{4CeMR66 WSN:[-}Jh?7Rv6 cӥv,{XXc#] 8+pE[}LTQ 9W L?@}ɀ^oŋo>eqcMF)M~f&FQ*nN-nBVt;l=7Տ0] Zu!LY6>2ER]D +tK0VѠAa0Vis=H )aJ|ݗń5} ;Dgzf{ܢw,K,gH|X˙m_9}"L~ڒٗMW<9ᔩ7JRJ{v}~^)+z:Y8`zAڷ-)"܂~U3{ů U6i0"(G/gzkhee*nfh¡ɖPZ5A;tsB5&D]+Rj]75m _Xot!v/kr V$BN%Bۤ36lw;Ԉ΂~MfBn0_W b-4k镙^Ҳߒѡ!t%\7Y@7']]ٻPe?^a{MYi.A'bUh^屝 5x ej1 ՑcT|_JVC _ی̾:bl3G05J1U0}aBEKb(XaDN+Qdže:B6'Md52텺Yr޳-LOyq3 ef|Y1&RVcLn[jgiM{(my:vW,!R ޘjDK>\t݄#I8yD1va~l tG[U_bQG3QmᰦŒ%ӎ?_6[qZ q/tTwMP^a]nE:8J.È_8ce9xkűz[*.A`ABQ cXǜʫQ.PeMFok除;v͚>=|&sNqN Nj<4xDľIF [, i|DZ* teK(2BkD PHg | | 'hPrb&#,lB_)ܚ)yʭyrkWsk.Zpu1 bpjqZ8;A\Dm͜:8gz%'诓FO=xHf|Ұ癙N 10}g$Cӿ>Phv1Ɲo>t5eB_G~1@S0(J󩷸dߠ= ԇhn/goarVZ|6&Ssˇ .}`wq )<5j\CuLRaBua0e+rFŹ3Z ִFiǒOeZ|Bxg= X*?d1AO=ǻIV] ԅ\y½O; ba{4 A{iL4@G>*28/9e05 >9#A/2i5'y HhCג8b0V $qTK&4r]Ҝer"rj*` DUE?/10^^IC{I]}Lg?ƣE3 >ݎp"pmijJi:twi `i~ndGC۳ ə{pP$=4pĴY2@FdܣWCv%OߦÁ)Q\Iq{#Jquo%uofTOKMKuK3=k̘$އcY@zPXp "SPk`ҪzxЮ_jzؚ>^Rai3[ \(69s wu:L(Dy0JɃߌCDReΑ"wɄ<0k֗8҃>Xk;R< 8#1map*hMiI%vnTb~<H+<{-UQc FKfQDIh5(x Ac4;5V 6ѻQu1mPM=B-#t9gs +-4T/bOF^ȼU~FcZi-#4LrLbJ џp83.PGJ=e8=`FTzAlrD7*D>ax>0U%޾߷ZB-D%MO;42v[qFE3DS $4dK#p OF^$Be4gd =D6}8(?N60(əZ4+"+8{f2X11`É$Hmpo"DrpFP g#8/a,8uk$ c!.njF k+" &-B]]6̓\8pu_?3aFc@'}DKN`CLbWے9AĚBZofKragP+䮣:$V*"T*'LDg;2"PZu8P- #*> O8sVc]qt2f xZivS…`Z9UZJ)/L:{}MQN2XIƘ<'}MM4 p*̉Ef x=E2I,¹KHK832)[{I#&*cVg(aTaQtA~8}`y$"s]a~z{:OJz0wTz5f99V&^?Z~pS;Ԃ^H7$Bt+!UEZUt2%zy;r;9>Y*v0ho,@]iɡze^|&2&|ۦʷmdf2QqPvll9j@|4 7^OSس؛|s8SIY|YcUɦ\e{f^ټiǽ3k$x5DTӳӝqed4' _I$o9Iq!+U..9Ehl ¨G73];MX~6֥K[pk>!H4/u pK΀l 09JQB9Y'ij6$+ѓeJEa@DN]woO"S3I*q]>nqf*DX'bђ>/85q69KST.K^x71\Bw4RSkJ [̺JO^Tz .fOzoԲ>/tr}$%TY BZ=O |x;:~I@a!qЪyi|o2 w-ۜiX]{w.!-ܜcvnmNGp 2ֻ3Ld*,!jf;.>?;պn8ʦ$w@ˋOfq9&/ץϸK;SsF`2AX#Rs$ fc0eZfVjBܦ0/Xv(Ts?-G5bGwL+Ufr2ijR-V:Et}/ڄ.EIͺ[HZ-ѕqֳ&"&)ֺ0DWs&0T-ص])((*būழDl :A'/W9Z}}]*=}Uywk2 :"傥KU#w7I.G;wgswlpz\,8zƶa-m9+ޯ^='7aY&rq ˻녽yN0#̊!L-ƟY1VTU~&o`W"m3Fi]]Y/'Wk`j><')Q')T< x@!-I&7K#/2ő GAbJ<5EZm<LoR!ֽ3ٹō4OW1S\Ya\ܶ7lYey6z{ \N7 |R;2V1< 
L=4kXQI+9*7 (7'xs0r_k$VWv"ysV v Y 7(<*#gH衝Sb& ;Q њH>2Wn2gϟ}w+.nw1yqY/.Ճi&59UZ?}3yOO3^{c]Ln?W][NDdld1zlT3̊,J>z]] ˧ sA(OOfЋ3 }A q&fL4R26 mzUΫvj37DZJ3 LwC~:[k[8µϭWcmpzr>Q<#raRB'E+ɴEaoye^^,9eʲl//NЩeY\ʹWt\MCYLn=K{[ZY!lw_Fƛoț/* ;4φn qZ/>VrE#^ߙ 5Jshͱ Bon^\g 8ֽTi5b#FtS>ոZ8c1)vGH=3[)㌷VuvT^80<`>(Ր#t<?< B*Y0Li] tiV. fVlzv4]. {hfm>_r~C%%յ7ncȐS^rN{2L^ q6{H+:/tm *cX*MTEQmGJ8텢CQֽ3BQ8*鞣qPU(j cI}2PTnIEbFr#,.1a(>fC'X^0Uv:Djn<'E0|BP=c=$-1Jh$Z,r%3FJ-LJa-I<]"Σ%TŸƸqu|qu|%S=c\qdǺ"O1iQˑ't#ZsM|q&9V+^VX#[G3E12#~d-P:PxOCHKHSp^[vTx:ŭ]'%X!\dHdئRX$R30ʝ"E&q_5 >cW ÄX'N\8V {ض : eYqi4NLG } bs%o-Ყ8gc@\L}QB,Q5/ƙC۾D,pgHk9g;R ލF>5YKMELl׃^9r2{d*ղ]I8R8qJa_K 0MʕX!c4#yNHnlWlGQeb;?yr|p{9' !TG1IJA{b'\J!Š'9& i1@B!6%&Ku*%%AHҙD(fiVZ'%4 pe,ȍ0L&8ŖB N<5A5J ?f^g!,Xg̦Agg fX ;Pb$$3FWp<|)<#6E~BDk30]-p5 !"v5:{6*wO ft nGʹFԒ1`"S=NA,lяSr]Ks7+ ]vdGB>9{9PiÑ9$n/$"D5Ŷ.| B4y{ d sALԌ `h Euy|-2M9D !r'a2&gm?Y\xPzboeϿ^/sk|302p;E| ׈l*AN.4שT]\[%v'DUbc4jP"N3ed\/LM Wߓ ך4W dヌ{\0ZdP921/$@5`PDI9!{4k7:"'9(D5fEquGN=ZmR%=Ƒ1Ɛ.v&i2+Iq7nJ0#+NW4xJ+A+|Pj/nZ1b<47k Oz伽?d$0)6>^nz7kt'$&\c(jO\tn(y1~0DC9 +==T1TiIݘK#FժVGGQ'RN)> ,(=Q&$7KDqKGF: | |7Vl.lkOΡ왋dNՀTi5*72=(#%Qq)BK9yi:OK=S5i@?/A`KWK_@Ԉ\z^<ȍ\R#13B&S 8'2xN@*c$*;o:*rc޾=9YƙGd&Ȯ 1UYќY(G-,+G,9ϒx{{>Y(<:"W$ˮR2\f\<4N)Ah9pqU12NJ/Bu\1T: +A 2V*M>z)}lGIg(О9DgZ94ȳD :h kŤ1fG".A Aca'ʉL 4<=^ѫ$W TなڠM^#TͮȎregJ$RҞ#0s< $"5-(bAKqFP#bbrӚ}"jFsVV s NzDe$l/G);* ,q~%TgfוoUvg$ Ũr;cآ{8WQ3OE-}aY)bZI+"_z[b.7#jkc͐N=,㦲xhRШ}$2m}ڃzbqm=}mhH{HwW/~1j4BX}rjP/\!ed4|0+ڎ OB6=˯G4͈,fEwoZt0+8y7f}zOȶ| .|L1QHIGd+<Z*ŏ*0?z& 4TZ׀:=r&I3x)x#T#Z5jZeE)m} 8\x#ҙ̺TH=kҞaR~onZ3̞~/5%ts^} 0Y%|(3\}ÎbPӊu1݂6rZ[pnSX7&jntF!bPc:cnvNŢ[{1LetB޸6WP;Mw3FV jLu1퀫SE6qm)!WӲݸXEtkŠtZ{up^[ݦ7n{۔Q}2t?C7F^Ozr!ϡ6rhr9r.J7SXbo fxZOF+AS,%շ>%=@+enOIS m(4=g k1u vډB[,a)$ ڕݻ'wq?E9S6~>;G5?~%?>PVX!ct__+W*Rζ}~sǯn\)'}(N8هdNvI֣Ak$sBh z%!)%tQZ;蜶)0!ُy=zC)E=%n}f4yZW|?n?iB{3a; =+9i3KК_)5c|"#u!0_Xeq S \4j5 q%,N%`RgDZ;%9;A$ὁd"JϦR槖|]h/#Ory{W(:_~6gess93#ޓ:q{rsG9ܯ姬B oSmuvo]9šo+`QE-$#"3AEfȄ>HVIVY9Ӹ[@Z5PtThB9 */VΫ,q1 s 5蘂6奘6d4ccd e5~6syԼt,a j.`5/K bH9÷rEe5%r9`@XjY͋Pq_@=i%)A湭J[,Olf=ڶpMuJvM'na,I2ɑĆ6EP.ϼӳ@VH $hke\dE^)n,8ENݹ$R[VzdFa,a@݊a_~w^R\"Wj;ZL:Wm|^onrqڧMI)yf:2Oh^M1_ F@;rv.8V63S [~m˞5iFn+|e!z2,'sOx&]TvN&;kb^Κ3|yrFlNM`К_Zwy+YxnW~ 1 bl{01~BvHWKHѦ5׋U#U=0ƚ t4Ҫ{ҌUZ$ܶޓ-Iܪ."qێ{G|mz4jloԘGgL35eN:M{ۭϺݪl_+*3#᚛]zh/4-:vq? 
Jan 23 16:52:45 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 23 16:52:45 crc restorecon[4574]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 23
16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c84,c419 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:52:45 crc 
restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 16:52:45 crc 
restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc 
restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 16:52:45 crc restorecon[4574]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:52:45 crc restorecon[4574]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 16:52:45 crc restorecon[4574]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 16:52:45 crc restorecon[4574]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
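The restorecon pass above leaves these paths alone because their contexts were deliberately customized: pod files carry container_file_t with a per-pod MCS category pair (c7,c13 for one pod, c682,c947 for another), which is how SELinux keeps one container's files off-limits to the next, while the final entry relabels the kubenswrapper binary from bin_t to kubelet_exec_t so it can run in the kubelet's domain. A minimal Go sketch of pulling such a context string apart (the type and helper below are illustrative, not kubelet or libselinux code):

```go
// Minimal sketch (not kubelet code): splitting an SELinux context string like
// "system_u:object_r:container_file_t:s0:c7,c13" into its fields.
package main

import (
	"fmt"
	"strings"
)

// context mirrors the user:role:type:level layout seen in the log above.
type context struct {
	User, Role, Type string
	Sensitivity      string   // e.g. "s0"
	Categories       []string // MCS categories, e.g. ["c7", "c13"]
}

func parse(s string) (context, error) {
	// Only the first three fields are ':'-delimited on their own; the rest
	// is the level, which is "s0:c7,c13" or just "s0".
	parts := strings.SplitN(s, ":", 4)
	if len(parts) < 4 {
		return context{}, fmt.Errorf("not a full context: %q", s)
	}
	level := strings.SplitN(parts[3], ":", 2)
	c := context{User: parts[0], Role: parts[1], Type: parts[2], Sensitivity: level[0]}
	if len(level) == 2 {
		c.Categories = strings.Split(level[1], ",")
	}
	return c, nil
}

func main() {
	c, err := parse("system_u:object_r:container_file_t:s0:c7,c13")
	if err != nil {
		panic(err)
	}
	// Pods get distinct category pairs (c7,c13 vs c682,c947 above), which is
	// what keeps one pod's files unreadable to another pod's processes.
	fmt.Printf("%+v\n", c)
}
```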
Jan 23 16:52:46 crc kubenswrapper[4606]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 16:52:46 crc kubenswrapper[4606]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 23 16:52:46 crc kubenswrapper[4606]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 16:52:46 crc kubenswrapper[4606]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 16:52:46 crc kubenswrapper[4606]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 23 16:52:46 crc kubenswrapper[4606]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.188222 4606 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
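Each deprecation line above is printed once for a deprecated flag still present on the kubelet command line; the remedy they point at is moving the value into the KubeletConfiguration file named by --config (here /etc/kubernetes/kubelet.conf, per the FLAG dump further down). A hedged sketch of how such a warning is produced, assuming spf13/pflag, the flag library the kubelet's CLI is built on, whose MarkDeprecated emits exactly this "Flag --x has been deprecated, ..." wording when a deprecated flag is set:

```go
// Minimal sketch, assuming github.com/spf13/pflag; the flag set name and
// values are illustrative, not the kubelet's actual wiring.
package main

import (
	"fmt"
	"os"

	"github.com/spf13/pflag"
)

func main() {
	fs := pflag.NewFlagSet("kubelet-sketch", pflag.ExitOnError)
	endpoint := fs.String("container-runtime-endpoint", "", "CRI endpoint")

	// The message is appended after "Flag --<name> has been deprecated, ".
	_ = fs.MarkDeprecated("container-runtime-endpoint",
		"This parameter should be set via the config file specified by the Kubelet's --config flag.")

	// Parsing a command line that still uses the flag triggers the warning
	// on the flag set's output, but the value is honored, as in the log.
	_ = fs.Parse([]string{"--container-runtime-endpoint=/var/run/crio/crio.sock"})
	fmt.Fprintln(os.Stderr, "parsed endpoint:", *endpoint)
}
```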
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192197 4606 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192222 4606 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192229 4606 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192235 4606 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192243 4606 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192251 4606 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192257 4606 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192262 4606 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192268 4606 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192274 4606 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192283 4606 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192289 4606 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192294 4606 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192300 4606 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192305 4606 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192310 4606 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192316 4606 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192321 4606 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192326 4606 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192331 4606 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192336 4606 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192343 4606 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192350 4606 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192358 4606 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192364 4606 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192370 4606 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192376 4606 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192381 4606 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192387 4606 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192393 4606 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192399 4606 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192404 4606 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192410 4606 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192418 4606 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192424 4606 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192430 4606 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192435 4606 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192441 4606 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192447 4606 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192453 4606 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192461 4606 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192467 4606 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192473 4606 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192479 4606 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192485 4606 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192491 4606 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192496 4606 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192502 4606 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192507 4606 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192512 4606 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192517 4606 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192522 4606 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192527 4606 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192533 4606 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192538 4606 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192543 4606 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192549 4606 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192554 4606 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192559 4606 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192565 4606 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192572 4606 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192578 4606 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192588 4606 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192613 4606 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192618 4606 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192624 4606 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192631 4606 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192638 4606 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192644 4606 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192653 4606 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.192660 4606 feature_gate.go:330] unrecognized feature gate: Example
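This W-level run comes from handing the kubelet the cluster's entire feature-gate list: names the kubelet does not register (OpenShift-side gates such as InsightsConfig or GatewayAPI) are warned about and skipped rather than treated as fatal, while recognized gates that have already gone GA (CloudDualStackNodeIPs, ValidatingAdmissionPolicy, DisableKubeletCloudCredentialProviders) or been deprecated (KMSv1) draw a removal notice. A minimal sketch of that pattern (a stand-in of my own, not kubelet's feature_gate.go):

```go
// Illustrative stand-in for the gate-validation pattern, not kubelet code.
package main

import "fmt"

type spec struct {
	enabled    bool // default value
	ga         bool // graduated gates are locked on and slated for removal
	deprecated bool // deprecated gates still work but warn on use
}

// known stands in for the kubelet's gate registry; the real one is far larger.
var known = map[string]spec{
	"CloudDualStackNodeIPs": {enabled: true, ga: true},
	"KMSv1":                 {deprecated: true},
}

// set mirrors the log above: unknown names warn and are skipped, GA and
// deprecated names warn about removal, everything else is applied silently.
func set(requested map[string]bool) {
	for name, val := range requested {
		s, ok := known[name]
		switch {
		case !ok:
			fmt.Printf("W unrecognized feature gate: %s\n", name)
		case s.ga:
			fmt.Printf("W Setting GA feature gate %s=%v. It will be removed in a future release.\n", name, val)
		case s.deprecated:
			fmt.Printf("W Setting deprecated feature gate %s=%v. It will be removed in a future release.\n", name, val)
		default:
			fmt.Printf("I applied feature gate %s=%v\n", name, val)
		}
	}
}

func main() {
	set(map[string]bool{
		"InsightsConfig":        true, // OpenShift-only gate, unknown to the kubelet
		"CloudDualStackNodeIPs": true,
		"KMSv1":                 true,
	})
}
```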
16:52:46.193074 4606 flags.go:64] FLAG: --cloud-config="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193080 4606 flags.go:64] FLAG: --cloud-provider="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193086 4606 flags.go:64] FLAG: --cluster-dns="[]" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193100 4606 flags.go:64] FLAG: --cluster-domain="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193105 4606 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193112 4606 flags.go:64] FLAG: --config-dir="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193125 4606 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193132 4606 flags.go:64] FLAG: --container-log-max-files="5" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193140 4606 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193146 4606 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193152 4606 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193159 4606 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193165 4606 flags.go:64] FLAG: --contention-profiling="false" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193171 4606 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193177 4606 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193184 4606 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193190 4606 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193197 4606 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193203 4606 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193209 4606 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193215 4606 flags.go:64] FLAG: --enable-load-reader="false" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193222 4606 flags.go:64] FLAG: --enable-server="true" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193228 4606 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193243 4606 flags.go:64] FLAG: --event-burst="100" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193249 4606 flags.go:64] FLAG: --event-qps="50" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193255 4606 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193261 4606 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193267 4606 flags.go:64] FLAG: --eviction-hard="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193275 4606 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193280 4606 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193294 4606 flags.go:64] FLAG: --eviction-soft=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193300 4606 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193306 4606 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193312 4606 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193318 4606 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193324 4606 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193330 4606 flags.go:64] FLAG: --fail-swap-on="true"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193336 4606 flags.go:64] FLAG: --feature-gates=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193343 4606 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193349 4606 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193356 4606 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193370 4606 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193377 4606 flags.go:64] FLAG: --healthz-port="10248"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193383 4606 flags.go:64] FLAG: --help="false"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193389 4606 flags.go:64] FLAG: --hostname-override=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193395 4606 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193402 4606 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193408 4606 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193413 4606 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193419 4606 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193425 4606 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193431 4606 flags.go:64] FLAG: --image-service-endpoint=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193437 4606 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193443 4606 flags.go:64] FLAG: --kube-api-burst="100"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193449 4606 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193456 4606 flags.go:64] FLAG: --kube-api-qps="50"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193463 4606 flags.go:64] FLAG: --kube-reserved=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193469 4606 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193475 4606 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193481 4606 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193486 4606 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193493 4606 flags.go:64] FLAG: --lock-file=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193498 4606 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193505 4606 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193511 4606 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193520 4606 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193526 4606 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193533 4606 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193539 4606 flags.go:64] FLAG: --logging-format="text"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193545 4606 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193551 4606 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193557 4606 flags.go:64] FLAG: --manifest-url=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193563 4606 flags.go:64] FLAG: --manifest-url-header=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193571 4606 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193577 4606 flags.go:64] FLAG: --max-open-files="1000000"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193608 4606 flags.go:64] FLAG: --max-pods="110"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193615 4606 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193630 4606 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193637 4606 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193643 4606 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193649 4606 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193655 4606 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193662 4606 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193675 4606 flags.go:64] FLAG: --node-status-max-images="50"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193681 4606 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193688 4606 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193694 4606 flags.go:64] FLAG: --pod-cidr=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193704 4606 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193717 4606 flags.go:64] FLAG: --pod-manifest-path=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193723 4606 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193729 4606 flags.go:64] FLAG: --pods-per-core="0"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193735 4606 flags.go:64] FLAG: --port="10250"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193741 4606 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193747 4606 flags.go:64] FLAG: --provider-id=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193753 4606 flags.go:64] FLAG: --qos-reserved=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193759 4606 flags.go:64] FLAG: --read-only-port="10255"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193768 4606 flags.go:64] FLAG: --register-node="true"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193774 4606 flags.go:64] FLAG: --register-schedulable="true"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193780 4606 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193790 4606 flags.go:64] FLAG: --registry-burst="10"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193795 4606 flags.go:64] FLAG: --registry-qps="5"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193801 4606 flags.go:64] FLAG: --reserved-cpus=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193807 4606 flags.go:64] FLAG: --reserved-memory=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193815 4606 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193821 4606 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193827 4606 flags.go:64] FLAG: --rotate-certificates="false"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193833 4606 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193839 4606 flags.go:64] FLAG: --runonce="false"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193845 4606 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193852 4606 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193858 4606 flags.go:64] FLAG: --seccomp-default="false"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193864 4606 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193870 4606 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193884 4606 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193891 4606 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193898 4606 flags.go:64] FLAG: --storage-driver-password="root"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193904 4606 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193910 4606 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193916 4606 flags.go:64] FLAG: --storage-driver-user="root"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193924 4606 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193931 4606 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193937 4606 flags.go:64] FLAG: --system-cgroups=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193943 4606 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193961 4606 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193967 4606 flags.go:64] FLAG: --tls-cert-file=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193973 4606 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193984 4606 flags.go:64] FLAG: --tls-min-version=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193990 4606 flags.go:64] FLAG: --tls-private-key-file=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.193997 4606 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.194004 4606 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.194010 4606 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.194016 4606 flags.go:64] FLAG: --v="2"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.194023 4606 flags.go:64] FLAG: --version="false"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.194031 4606 flags.go:64] FLAG: --vmodule=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.194038 4606 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.194044 4606 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194217 4606 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194224 4606 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194231 4606 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194237 4606 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194242 4606 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194248 4606 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194254 4606 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194260 4606 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194266 4606 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194271 4606 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194277 4606 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194282 4606 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194287 4606 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194300 4606 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194307 4606 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194314 4606 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194319 4606 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194324 4606 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194330 4606 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194335 4606 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194341 4606 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194346 4606 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194351 4606 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194356 4606 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194362 4606 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194367 4606 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194372 4606 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194377 4606 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194382 4606 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194388 4606 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194393 4606 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194398 4606 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194403 4606 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194408 4606 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194415 4606 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194422 4606 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194429 4606 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194436 4606 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194443 4606 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194448 4606 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194454 4606 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194460 4606 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194467 4606 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194473 4606 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194479 4606 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194485 4606 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194490 4606 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194497 4606 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194502 4606 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194515 4606 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194521 4606 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194527 4606 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194532 4606 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194537 4606 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194542 4606 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194547 4606 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194552 4606 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194557 4606 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194563 4606 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194568 4606 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194573 4606 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194578 4606 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194583 4606 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194610 4606 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194615 4606 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194620 4606 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194626 4606 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194631 4606 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194639 4606 feature_gate.go:330] unrecognized feature gate: Example
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194647 4606 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.194653 4606 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.194662 4606 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.202619 4606 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.202648 4606 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.202794 4606 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.202805 4606 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.202811 4606 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.202817 4606 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.202823 4606 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.202829 4606 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.202834 4606 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.202839 4606 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.202844 4606 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.202849 4606 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.202854 4606 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.202859 4606 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.202864 4606 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.202870 4606 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.202875 4606 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.202879 4606 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.202885 4606 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.202890 4606 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.202895 4606 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.202900 4606 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.202905 4606 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.202910 4606 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.202915 4606 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.202920 4606 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.202925 4606 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.202930 4606 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.202934 4606 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.202939 4606 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.202944 4606 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.202949 4606 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.202954 4606 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.202960 4606 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.202965 4606 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.202970 4606 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.202976 4606 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.202981 4606 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.202986 4606 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.202991 4606 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.202996 4606 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203001 4606 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203006 4606 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203011 4606 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203016 4606 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203021 4606 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203026 4606 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203031 4606 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203036 4606 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203041 4606 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203046 4606 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203050 4606 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203055 4606 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203060 4606 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203065 4606 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203070 4606 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203074 4606 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203081 4606 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203090 4606 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203097 4606 feature_gate.go:330] unrecognized feature gate: Example
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203103 4606 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203109 4606 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203115 4606 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203121 4606 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203127 4606 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203135 4606 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203142 4606 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203148 4606 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203153 4606 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203158 4606 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203163 4606 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203168 4606 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203173 4606 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.203182 4606 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203339 4606 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203350 4606 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203357 4606 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203363 4606 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203369 4606 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203374 4606 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203379 4606 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203384 4606 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203389 4606 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203394 4606 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203401 4606 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203409 4606 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203415 4606 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203421 4606 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203426 4606 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203431 4606 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203436 4606 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203442 4606 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203448 4606 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203455 4606 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203461 4606 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203466 4606 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203471 4606 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203477 4606 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203483 4606 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203488 4606 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203492 4606 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203497 4606 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203502 4606 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203507 4606 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203514 4606 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203519 4606 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203524 4606 feature_gate.go:330] unrecognized feature gate: Example
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203529 4606 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203535 4606 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203540 4606 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203545 4606 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203550 4606 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203555 4606 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203559 4606 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203565 4606 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203569 4606 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203574 4606 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203579 4606 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203584 4606 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203614 4606 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203619 4606 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203623 4606 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203628 4606 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203633 4606 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203638 4606 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203643 4606 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203648 4606 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203652 4606 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203658 4606 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203662 4606 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203668 4606 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203673 4606 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203677 4606 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203682 4606 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203687 4606 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203692 4606 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203697 4606 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203701 4606 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203706 4606 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203711 4606 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203716 4606 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203721 4606 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203725 4606 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203730 4606 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.203736 4606 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.203746 4606 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.204198 4606 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.209862 4606 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.209965 4606 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.210582 4606 server.go:997] "Starting client certificate rotation"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.210629 4606 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.211025 4606 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-15 02:26:44.148507953 +0000 UTC
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.211105 4606 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.234170 4606 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 23 16:52:46 crc kubenswrapper[4606]: E0123 16:52:46.235374 4606 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.22:6443: connect: connection refused" logger="UnhandledError"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.238078 4606 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.244972 4606 log.go:25] "Validated CRI v1 runtime API"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.267067 4606 log.go:25] "Validated CRI v1 image API"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.268564 4606 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.271082 4606 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-23-16-47-04-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.271114 4606 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.281343 4606 manager.go:217] Machine: {Timestamp:2026-01-23 16:52:46.280381829 +0000 UTC m=+0.280574255 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2799998 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:843df5bf-5299-4274-8c79-9400b122a934 BootID:0a4ca771-bbec-40fb-95a6-5cc937c9c7f8 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:42:1b:b0 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:42:1b:b0 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:34:f1:e6 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:e4:e0:82 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:eb:f6:6b Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:fa:10:f3 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:0e:ce:d4:44:81:2d Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:12:fd:96:4b:32:cb Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.281563 4606 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.281776 4606 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.282390 4606 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.282700 4606 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.282734 4606 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.282911 4606 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.282920 4606 container_manager_linux.go:303] "Creating device plugin manager"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.283123 4606 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.283156 4606 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.283372 4606 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.283749 4606 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.285486 4606 kubelet.go:418] "Attempting to sync node with API server"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.285508 4606 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
path" path="/etc/kubernetes/manifests" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.285528 4606 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.285541 4606 kubelet.go:324] "Adding apiserver pod source" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.285550 4606 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.291522 4606 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.22:6443: connect: connection refused Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.291573 4606 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.22:6443: connect: connection refused Jan 23 16:52:46 crc kubenswrapper[4606]: E0123 16:52:46.291669 4606 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.22:6443: connect: connection refused" logger="UnhandledError" Jan 23 16:52:46 crc kubenswrapper[4606]: E0123 16:52:46.291702 4606 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.22:6443: connect: connection refused" logger="UnhandledError" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.293194 4606 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.293656 4606 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.294554 4606 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.295474 4606 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.295503 4606 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.295513 4606 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.295522 4606 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.295536 4606 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.295545 4606 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.295553 4606 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.295568 4606 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.295579 4606 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.295588 4606 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.295620 4606 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.295630 4606 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.296068 4606 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.296909 4606 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.22:6443: connect: connection refused
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.297233 4606 server.go:1280] "Started kubelet"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.297667 4606 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.297692 4606 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.298379 4606 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.299070 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.299099 4606 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.299326 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 17:58:52.586688351 +0000 UTC
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.299555 4606 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.299579 4606 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.299703 4606 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 23 16:52:46 crc systemd[1]: Started Kubernetes Kubelet.
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.299946 4606 server.go:460] "Adding debug handlers to kubelet server"
Jan 23 16:52:46 crc kubenswrapper[4606]: E0123 16:52:46.306339 4606 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 23 16:52:46 crc kubenswrapper[4606]: E0123 16:52:46.316800 4606 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" interval="200ms"
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.317021 4606 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.22:6443: connect: connection refused
Jan 23 16:52:46 crc kubenswrapper[4606]: E0123 16:52:46.317246 4606 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.22:6443: connect: connection refused" logger="UnhandledError"
Jan 23 16:52:46 crc kubenswrapper[4606]: E0123 16:52:46.316814 4606 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.22:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188d6a5aac52eb88 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 16:52:46.297189256 +0000 UTC m=+0.297381662,LastTimestamp:2026-01-23 16:52:46.297189256 +0000 UTC m=+0.297381662,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.321678 4606 factory.go:55] Registering systemd factory
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.321711 4606 factory.go:221] Registration of the systemd container factory successfully
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.324082 4606 factory.go:153] Registering CRI-O factory
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.324115 4606 factory.go:221] Registration of the crio container factory successfully
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.324201 4606 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.324228 4606 factory.go:103] Registering Raw factory
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.324246 4606 manager.go:1196] Started watching for new ooms in manager
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.324823 4606 manager.go:319] Starting recovery of all containers
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.328032 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.328092 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.328104 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.328114 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.328127 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.328136 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.328150 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.328159 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.328176 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.328196 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.328209 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.328229 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.328238 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.328259 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.328269 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.328285 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.328295 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.328308 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.328317 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.328328 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.328339 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.328348 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.328359 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.328368 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.328377 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.328390 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.328402 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.328416 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.328429 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.328441 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.328451 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.328464 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332206 4606 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332259 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332276 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332292 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332303 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332314 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332329 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332339 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332352 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332361 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332373 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332386 4606 
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332397 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332409 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332420 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332431 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332445 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332456 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332469 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332481 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332492 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332513 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332528 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332542 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332554 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332567 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332613 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332648 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332662 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332672 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332684 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332694 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332706 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332727 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332736 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332783 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332793 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332803 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332843 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332854 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332867 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332876 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332933 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332951 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332967 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332978 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332988 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.332998 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.333030 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.333040 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.333058 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.333074 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.333118 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.333130 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.333161 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.333172 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.333207 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.333240 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.333264 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.333273 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.333349 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.333371 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.333383 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.333412 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.333436 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.333458 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.333507 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.333539 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.333550 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.333567 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.335523 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.335575 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.335643 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.335742 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.335767 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.335784 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.335977 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.336012 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.336055 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.336071 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.336091 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.336176 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.336210 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.336227 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.336244 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.336319 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.336360 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.336390 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.336406 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.336417 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.336484 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.336501 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.336514 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.336536 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.336585 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.336677 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.336701 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.336714 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.336726 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.336748 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.336761 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.336823 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.336850 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.336890 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.336940 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.336954 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.336970 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.336982 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.336994 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.337075 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.337120 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.337143 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.337156 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.337169 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.337205 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.337219 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.337235 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.337248 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.337273 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.337302 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.337314 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.337326 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.337342 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.337380 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.337398 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.337410 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.337433 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.337448 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.337461 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.337477 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.337488 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.337501 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.337517 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" 
volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.337548 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.337577 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.337641 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.337659 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.337708 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.341656 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.341711 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.341724 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.341736 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.341747 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.341758 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.341768 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.341778 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.341802 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.341812 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.341822 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.341832 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.341843 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.341853 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.341863 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.341874 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.341885 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.341894 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.341905 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.341916 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.341927 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.341937 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.341946 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.341956 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.341965 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.341975 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.341987 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.341996 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.342006 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.342017 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.342027 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.342037 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.342046 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.342058 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.342089 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.342102 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.342112 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.342121 4606 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.342130 4606 reconstruct.go:97] "Volume reconstruction finished" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.342138 4606 reconciler.go:26] "Reconciler: start to sync state" 
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.350904 4606 manager.go:324] Recovery completed
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.359834 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.361465 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.361516 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.361526 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.362181 4606 cpu_manager.go:225] "Starting CPU manager" policy="none"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.362204 4606 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.362226 4606 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.362660 4606 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.364343 4606 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.364394 4606 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.364434 4606 kubelet.go:2335] "Starting kubelet main sync loop"
Jan 23 16:52:46 crc kubenswrapper[4606]: E0123 16:52:46.364475 4606 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 23 16:52:46 crc kubenswrapper[4606]: E0123 16:52:46.416341 4606 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 23 16:52:46 crc kubenswrapper[4606]: W0123 16:52:46.427414 4606 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.22:6443: connect: connection refused
Jan 23 16:52:46 crc kubenswrapper[4606]: E0123 16:52:46.427498 4606 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.22:6443: connect: connection refused" logger="UnhandledError"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.433054 4606 policy_none.go:49] "None policy: Start"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.436116 4606 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.436145 4606 state_mem.go:35] "Initializing new in-memory state store"
Jan 23 16:52:46 crc kubenswrapper[4606]: E0123 16:52:46.465947 4606 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.495782 4606 manager.go:334] "Starting Device Plugin manager"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.495827 4606 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.495839 4606 server.go:79] "Starting device plugin registration server"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.496254 4606 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.496269 4606 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.496675 4606 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.496798 4606 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.496811 4606 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 23 16:52:46 crc kubenswrapper[4606]: E0123 16:52:46.505761 4606 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 23 16:52:46 crc kubenswrapper[4606]: E0123 16:52:46.517606 4606 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" interval="400ms"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.596748 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.598100 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.598147 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.598167 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.598195 4606 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 23 16:52:46 crc kubenswrapper[4606]: E0123 16:52:46.598793 4606 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.22:6443: connect: connection refused" node="crc"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.666721 4606 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.666932 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.668640 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.668685 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.668717 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.668869 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.669082 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.669121 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.669953 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.669953 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.670002 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.670007 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.670016 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.670021 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.670197 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.670405 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.670443 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.671099 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.671127 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.671137 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.671305 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.671537 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.671572 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.674610 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.674651 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.674654 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.674698 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.674709 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.674662 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.674612 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.674866 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.674878 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.675002 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.675181 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.675240 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.675977 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.676011 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.676025 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.676181 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.676210 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.676283 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.676301 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.676310 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.676979 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.677007 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.677018 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.745494 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.745558 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.745630 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.745654 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.745694 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.745731 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.745757 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.745776 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.745994 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.746074 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.746388 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.746424 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.746448 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.746467 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.746680 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.801486 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.801535 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.801546 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.801574 4606 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 23 16:52:46 crc kubenswrapper[4606]: E0123 16:52:46.802090 4606 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.22:6443: connect: connection refused" node="crc"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.852398 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.852455 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.852487 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.852505 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.852522 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.852551 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.852571 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.852567 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.852654 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.852672 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.852617 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.852700 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.852709 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.852751 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.852723 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.852726 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.852728 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.852829 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.852740 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.852876 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.852887 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.852855 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.852998 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.853032 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.853049 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.853066 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.853084 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
(UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.853123 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.853126 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 16:52:46 crc kubenswrapper[4606]: I0123 16:52:46.853236 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 16:52:46 crc kubenswrapper[4606]: E0123 16:52:46.918764 4606 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" interval="800ms" Jan 23 16:52:47 crc kubenswrapper[4606]: I0123 16:52:47.007437 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 16:52:47 crc kubenswrapper[4606]: I0123 16:52:47.029763 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:52:47 crc kubenswrapper[4606]: W0123 16:52:47.034885 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-8c859b2d8c0c79285306a66fb6c9a0d34f15c8966dd3b77dac2527b87182ba4f WatchSource:0}: Error finding container 8c859b2d8c0c79285306a66fb6c9a0d34f15c8966dd3b77dac2527b87182ba4f: Status 404 returned error can't find the container with id 8c859b2d8c0c79285306a66fb6c9a0d34f15c8966dd3b77dac2527b87182ba4f Jan 23 16:52:47 crc kubenswrapper[4606]: I0123 16:52:47.035040 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 23 16:52:47 crc kubenswrapper[4606]: I0123 16:52:47.053167 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 16:52:47 crc kubenswrapper[4606]: W0123 16:52:47.055177 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-730d7410b462f69b0ae0a7809c88014c11f03321cdaa56c8a90d5ca1c5e7cedb WatchSource:0}: Error finding container 730d7410b462f69b0ae0a7809c88014c11f03321cdaa56c8a90d5ca1c5e7cedb: Status 404 returned error can't find the container with id 730d7410b462f69b0ae0a7809c88014c11f03321cdaa56c8a90d5ca1c5e7cedb Jan 23 16:52:47 crc kubenswrapper[4606]: I0123 16:52:47.058765 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 16:52:47 crc kubenswrapper[4606]: W0123 16:52:47.060222 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-b9cbd030bc938fff8c17890b9c45f19f214052965685ed75c98724ba8eed53fe WatchSource:0}: Error finding container b9cbd030bc938fff8c17890b9c45f19f214052965685ed75c98724ba8eed53fe: Status 404 returned error can't find the container with id b9cbd030bc938fff8c17890b9c45f19f214052965685ed75c98724ba8eed53fe Jan 23 16:52:47 crc kubenswrapper[4606]: W0123 16:52:47.069955 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-4c793c539eb61a3ca291e0ddea3d51e9a69cddb2b83c55ec45facc6279ce8f39 WatchSource:0}: Error finding container 4c793c539eb61a3ca291e0ddea3d51e9a69cddb2b83c55ec45facc6279ce8f39: Status 404 returned error can't find the container with id 4c793c539eb61a3ca291e0ddea3d51e9a69cddb2b83c55ec45facc6279ce8f39 Jan 23 16:52:47 crc kubenswrapper[4606]: W0123 16:52:47.073144 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-1cf2e8d70f020adc91d9c40feaabd332895255749df4dc41ee2531fee15a257f WatchSource:0}: Error finding container 1cf2e8d70f020adc91d9c40feaabd332895255749df4dc41ee2531fee15a257f: Status 404 returned error can't find the container with id 1cf2e8d70f020adc91d9c40feaabd332895255749df4dc41ee2531fee15a257f Jan 23 16:52:47 crc kubenswrapper[4606]: W0123 16:52:47.164214 4606 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.22:6443: connect: connection refused Jan 23 16:52:47 crc kubenswrapper[4606]: E0123 16:52:47.164345 4606 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.22:6443: connect: connection refused" logger="UnhandledError" Jan 23 16:52:47 crc kubenswrapper[4606]: I0123 16:52:47.202872 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:52:47 crc kubenswrapper[4606]: I0123 16:52:47.205002 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:52:47 crc kubenswrapper[4606]: I0123 16:52:47.205036 4606 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:52:47 crc kubenswrapper[4606]: I0123 16:52:47.205059 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:52:47 crc kubenswrapper[4606]: I0123 16:52:47.205081 4606 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 16:52:47 crc kubenswrapper[4606]: E0123 16:52:47.205559 4606 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.22:6443: connect: connection refused" node="crc" Jan 23 16:52:47 crc kubenswrapper[4606]: I0123 16:52:47.298507 4606 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.22:6443: connect: connection refused Jan 23 16:52:47 crc kubenswrapper[4606]: I0123 16:52:47.299530 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 23:42:10.383895293 +0000 UTC Jan 23 16:52:47 crc kubenswrapper[4606]: I0123 16:52:47.369362 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"8c859b2d8c0c79285306a66fb6c9a0d34f15c8966dd3b77dac2527b87182ba4f"} Jan 23 16:52:47 crc kubenswrapper[4606]: I0123 16:52:47.370346 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1cf2e8d70f020adc91d9c40feaabd332895255749df4dc41ee2531fee15a257f"} Jan 23 16:52:47 crc kubenswrapper[4606]: I0123 16:52:47.372017 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4c793c539eb61a3ca291e0ddea3d51e9a69cddb2b83c55ec45facc6279ce8f39"} Jan 23 16:52:47 crc kubenswrapper[4606]: I0123 16:52:47.373164 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b9cbd030bc938fff8c17890b9c45f19f214052965685ed75c98724ba8eed53fe"} Jan 23 16:52:47 crc kubenswrapper[4606]: I0123 16:52:47.374206 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"730d7410b462f69b0ae0a7809c88014c11f03321cdaa56c8a90d5ca1c5e7cedb"} Jan 23 16:52:47 crc kubenswrapper[4606]: W0123 16:52:47.430346 4606 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.22:6443: connect: connection refused Jan 23 16:52:47 crc kubenswrapper[4606]: E0123 16:52:47.430411 4606 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.22:6443: connect: connection refused" 
logger="UnhandledError" Jan 23 16:52:47 crc kubenswrapper[4606]: E0123 16:52:47.720483 4606 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" interval="1.6s" Jan 23 16:52:47 crc kubenswrapper[4606]: W0123 16:52:47.794949 4606 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.22:6443: connect: connection refused Jan 23 16:52:47 crc kubenswrapper[4606]: E0123 16:52:47.795024 4606 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.22:6443: connect: connection refused" logger="UnhandledError" Jan 23 16:52:47 crc kubenswrapper[4606]: W0123 16:52:47.867119 4606 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.22:6443: connect: connection refused Jan 23 16:52:47 crc kubenswrapper[4606]: E0123 16:52:47.867271 4606 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.22:6443: connect: connection refused" logger="UnhandledError" Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.006397 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.008236 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.008270 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.008281 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.008303 4606 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 16:52:48 crc kubenswrapper[4606]: E0123 16:52:48.008705 4606 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.22:6443: connect: connection refused" node="crc" Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.298282 4606 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.22:6443: connect: connection refused Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.300742 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 14:35:16.269855574 +0000 UTC Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.372485 4606 certificate_manager.go:356] 
kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 23 16:52:48 crc kubenswrapper[4606]: E0123 16:52:48.374014 4606 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.22:6443: connect: connection refused" logger="UnhandledError" Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.378490 4606 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="ea981139b54887528cc0d397f970a88a6107e5176e9df8cb8a2f2137738f8057" exitCode=0 Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.378612 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"ea981139b54887528cc0d397f970a88a6107e5176e9df8cb8a2f2137738f8057"} Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.378743 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.380105 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.380143 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.380158 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.383867 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7894a11c4c9859559472fbae0ec0fb74afbc11e3580e50c3d5c1465f7bb3177b"} Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.383911 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e8c64d435f1b147fd7d914909988065557c2bd3309d0565391fe788a67f2d0de"} Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.383920 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cebdd354c5bc11778b8c407aa7c9f996ee4c431839d6823dcdad92b8855d2a07"} Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.383929 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96"} Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.383999 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.384824 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.384850 4606 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.384861 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.386604 4606 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="799492280bfbab8fb0517ef31344110dd0bf6c6f9d094af7d6fa3b8593fd9eea" exitCode=0 Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.386665 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"799492280bfbab8fb0517ef31344110dd0bf6c6f9d094af7d6fa3b8593fd9eea"} Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.386738 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.387370 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.387394 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.387404 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.389498 4606 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa" exitCode=0 Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.389561 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa"} Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.389707 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.390701 4606 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="abd0eb5bd02acec896a810cd8d81345f15c22fdfd133e5a62742d708f34cdfe0" exitCode=0 Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.390735 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"abd0eb5bd02acec896a810cd8d81345f15c22fdfd133e5a62742d708f34cdfe0"} Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.390777 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.390805 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.390816 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.390820 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.391807 4606 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.391839 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.391850 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.394173 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.395277 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.395296 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.395305 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:52:48 crc kubenswrapper[4606]: I0123 16:52:48.774004 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 16:52:49 crc kubenswrapper[4606]: I0123 16:52:49.300925 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 01:17:07.984754567 +0000 UTC Jan 23 16:52:49 crc kubenswrapper[4606]: I0123 16:52:49.301153 4606 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.22:6443: connect: connection refused Jan 23 16:52:49 crc kubenswrapper[4606]: E0123 16:52:49.321532 4606 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" interval="3.2s" Jan 23 16:52:49 crc kubenswrapper[4606]: I0123 16:52:49.395991 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"afa9d26f11973f8bbb904852bca424c7bcb04d9854f66290276e05ceed79aae9"} Jan 23 16:52:49 crc kubenswrapper[4606]: I0123 16:52:49.396040 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"960decb6796863d62d6e8bbedddee45f9a1480ce1f7dc9cd17dd07a57837cae2"} Jan 23 16:52:49 crc kubenswrapper[4606]: I0123 16:52:49.396052 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ffcd9f0d6640f01bb9dd01eb0ae616c15475af5f5e4a8bf8d83f32950d92b5b3"} Jan 23 16:52:49 crc kubenswrapper[4606]: I0123 16:52:49.396145 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:52:49 crc kubenswrapper[4606]: I0123 16:52:49.397173 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:52:49 crc 
kubenswrapper[4606]: I0123 16:52:49.397203 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:52:49 crc kubenswrapper[4606]: I0123 16:52:49.397214 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:52:49 crc kubenswrapper[4606]: I0123 16:52:49.397769 4606 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="02319858c43d679416d466fd3985679703b5991cce323851add570c2b34d56a6" exitCode=0 Jan 23 16:52:49 crc kubenswrapper[4606]: I0123 16:52:49.397849 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:52:49 crc kubenswrapper[4606]: I0123 16:52:49.397881 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"02319858c43d679416d466fd3985679703b5991cce323851add570c2b34d56a6"} Jan 23 16:52:49 crc kubenswrapper[4606]: I0123 16:52:49.398558 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:52:49 crc kubenswrapper[4606]: I0123 16:52:49.398624 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:52:49 crc kubenswrapper[4606]: I0123 16:52:49.398637 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:52:49 crc kubenswrapper[4606]: I0123 16:52:49.399865 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48"} Jan 23 16:52:49 crc kubenswrapper[4606]: I0123 16:52:49.399894 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc"} Jan 23 16:52:49 crc kubenswrapper[4606]: I0123 16:52:49.399907 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135"} Jan 23 16:52:49 crc kubenswrapper[4606]: I0123 16:52:49.399917 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32"} Jan 23 16:52:49 crc kubenswrapper[4606]: I0123 16:52:49.402437 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:52:49 crc kubenswrapper[4606]: I0123 16:52:49.402900 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:52:49 crc kubenswrapper[4606]: I0123 16:52:49.403187 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"80e945ec452bb84c6d6836d023451cb4b4ac41dfc3dd5ae4bf872ad0ed52c91d"} Jan 23 16:52:49 crc kubenswrapper[4606]: I0123 16:52:49.403534 4606 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:52:49 crc kubenswrapper[4606]: I0123 16:52:49.403555 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:52:49 crc kubenswrapper[4606]: I0123 16:52:49.403564 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:52:49 crc kubenswrapper[4606]: I0123 16:52:49.404100 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:52:49 crc kubenswrapper[4606]: I0123 16:52:49.404120 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:52:49 crc kubenswrapper[4606]: I0123 16:52:49.404129 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:52:49 crc kubenswrapper[4606]: W0123 16:52:49.532605 4606 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.22:6443: connect: connection refused Jan 23 16:52:49 crc kubenswrapper[4606]: E0123 16:52:49.532690 4606 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.22:6443: connect: connection refused" logger="UnhandledError" Jan 23 16:52:49 crc kubenswrapper[4606]: I0123 16:52:49.609779 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:52:49 crc kubenswrapper[4606]: I0123 16:52:49.611114 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:52:49 crc kubenswrapper[4606]: I0123 16:52:49.611154 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:52:49 crc kubenswrapper[4606]: I0123 16:52:49.611163 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:52:49 crc kubenswrapper[4606]: I0123 16:52:49.611186 4606 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 16:52:49 crc kubenswrapper[4606]: E0123 16:52:49.611899 4606 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.22:6443: connect: connection refused" node="crc" Jan 23 16:52:49 crc kubenswrapper[4606]: W0123 16:52:49.622834 4606 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.22:6443: connect: connection refused Jan 23 16:52:49 crc kubenswrapper[4606]: E0123 16:52:49.622924 4606 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.22:6443: connect: connection refused" logger="UnhandledError" Jan 23 16:52:50 crc 
kubenswrapper[4606]: I0123 16:52:50.301606 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 14:55:04.965508447 +0000 UTC Jan 23 16:52:50 crc kubenswrapper[4606]: I0123 16:52:50.406760 4606 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="9692da94421a68c6b4d5ca442a917d639f233c8dd3ea5099aae555469f334c2e" exitCode=0 Jan 23 16:52:50 crc kubenswrapper[4606]: I0123 16:52:50.406808 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"9692da94421a68c6b4d5ca442a917d639f233c8dd3ea5099aae555469f334c2e"} Jan 23 16:52:50 crc kubenswrapper[4606]: I0123 16:52:50.406890 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:52:50 crc kubenswrapper[4606]: I0123 16:52:50.407778 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:52:50 crc kubenswrapper[4606]: I0123 16:52:50.407809 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:52:50 crc kubenswrapper[4606]: I0123 16:52:50.407822 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:52:50 crc kubenswrapper[4606]: I0123 16:52:50.414103 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:52:50 crc kubenswrapper[4606]: I0123 16:52:50.414235 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020"} Jan 23 16:52:50 crc kubenswrapper[4606]: I0123 16:52:50.414380 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:52:50 crc kubenswrapper[4606]: I0123 16:52:50.414506 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:52:50 crc kubenswrapper[4606]: I0123 16:52:50.414513 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:52:50 crc kubenswrapper[4606]: I0123 16:52:50.414534 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 16:52:50 crc kubenswrapper[4606]: I0123 16:52:50.415145 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:52:50 crc kubenswrapper[4606]: I0123 16:52:50.415166 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:52:50 crc kubenswrapper[4606]: I0123 16:52:50.415175 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:52:50 crc kubenswrapper[4606]: I0123 16:52:50.415331 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:52:50 crc kubenswrapper[4606]: I0123 16:52:50.415378 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:52:50 crc kubenswrapper[4606]: I0123 16:52:50.415403 4606 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:52:50 crc kubenswrapper[4606]: I0123 16:52:50.415618 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:52:50 crc kubenswrapper[4606]: I0123 16:52:50.415709 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:52:50 crc kubenswrapper[4606]: I0123 16:52:50.415798 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:52:50 crc kubenswrapper[4606]: I0123 16:52:50.415753 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:52:50 crc kubenswrapper[4606]: I0123 16:52:50.415913 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:52:50 crc kubenswrapper[4606]: I0123 16:52:50.415925 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:52:51 crc kubenswrapper[4606]: I0123 16:52:51.302971 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 05:12:36.62150179 +0000 UTC Jan 23 16:52:51 crc kubenswrapper[4606]: I0123 16:52:51.420546 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6e90b17e922b85243d8fdf76ec4f62a55261da6f90403c2693677984f71ef76b"} Jan 23 16:52:51 crc kubenswrapper[4606]: I0123 16:52:51.420639 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b1fd297902bda7230b9dad3475cae4d39dca1101f4ef2a4141d84ddea1cb0a67"} Jan 23 16:52:51 crc kubenswrapper[4606]: I0123 16:52:51.420665 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"17a20bc4ad715cef44b4f6d990a9a46abe569139af760cd8e3fe37a4372b7c7b"} Jan 23 16:52:51 crc kubenswrapper[4606]: I0123 16:52:51.420673 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:52:51 crc kubenswrapper[4606]: I0123 16:52:51.420683 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"62e337358163bb5cedf5cfffdc2242c893467a7b1a84c375ce484384c25bcebb"} Jan 23 16:52:51 crc kubenswrapper[4606]: I0123 16:52:51.420700 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4af49e939899edf31373245c5f88b0d777b89d9d3f72d712d9cbbb562d0db745"} Jan 23 16:52:51 crc kubenswrapper[4606]: I0123 16:52:51.420665 4606 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 16:52:51 crc kubenswrapper[4606]: I0123 16:52:51.420790 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:52:51 crc kubenswrapper[4606]: I0123 16:52:51.420843 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:52:51 crc kubenswrapper[4606]: I0123 
16:52:51.421544 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:52:51 crc kubenswrapper[4606]: I0123 16:52:51.421570 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:52:51 crc kubenswrapper[4606]: I0123 16:52:51.421600 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:52:51 crc kubenswrapper[4606]: I0123 16:52:51.421930 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:52:51 crc kubenswrapper[4606]: I0123 16:52:51.421970 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:52:51 crc kubenswrapper[4606]: I0123 16:52:51.421986 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:52:51 crc kubenswrapper[4606]: I0123 16:52:51.422109 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:52:51 crc kubenswrapper[4606]: I0123 16:52:51.422128 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:52:51 crc kubenswrapper[4606]: I0123 16:52:51.422144 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:52:52 crc kubenswrapper[4606]: I0123 16:52:52.246491 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:52:52 crc kubenswrapper[4606]: I0123 16:52:52.303850 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 11:21:59.42171952 +0000 UTC Jan 23 16:52:52 crc kubenswrapper[4606]: I0123 16:52:52.422875 4606 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 16:52:52 crc kubenswrapper[4606]: I0123 16:52:52.422905 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:52:52 crc kubenswrapper[4606]: I0123 16:52:52.422913 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:52:52 crc kubenswrapper[4606]: I0123 16:52:52.423751 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:52:52 crc kubenswrapper[4606]: I0123 16:52:52.423781 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:52:52 crc kubenswrapper[4606]: I0123 16:52:52.423793 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:52:52 crc kubenswrapper[4606]: I0123 16:52:52.423900 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:52:52 crc kubenswrapper[4606]: I0123 16:52:52.423921 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:52:52 crc kubenswrapper[4606]: I0123 16:52:52.423932 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:52:52 crc kubenswrapper[4606]: I0123 16:52:52.562351 4606 certificate_manager.go:356] 
kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 23 16:52:52 crc kubenswrapper[4606]: I0123 16:52:52.812370 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:52:52 crc kubenswrapper[4606]: I0123 16:52:52.813925 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:52:52 crc kubenswrapper[4606]: I0123 16:52:52.814075 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:52:52 crc kubenswrapper[4606]: I0123 16:52:52.814151 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:52:52 crc kubenswrapper[4606]: I0123 16:52:52.814219 4606 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 16:52:53 crc kubenswrapper[4606]: I0123 16:52:53.039919 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 16:52:53 crc kubenswrapper[4606]: I0123 16:52:53.040085 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:52:53 crc kubenswrapper[4606]: I0123 16:52:53.041210 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:52:53 crc kubenswrapper[4606]: I0123 16:52:53.041240 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:52:53 crc kubenswrapper[4606]: I0123 16:52:53.041250 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:52:53 crc kubenswrapper[4606]: I0123 16:52:53.304275 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 20:58:00.374496671 +0000 UTC Jan 23 16:52:53 crc kubenswrapper[4606]: I0123 16:52:53.334779 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:52:53 crc kubenswrapper[4606]: I0123 16:52:53.424926 4606 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 16:52:53 crc kubenswrapper[4606]: I0123 16:52:53.424973 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:52:53 crc kubenswrapper[4606]: I0123 16:52:53.425928 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:52:53 crc kubenswrapper[4606]: I0123 16:52:53.425972 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:52:53 crc kubenswrapper[4606]: I0123 16:52:53.425998 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:52:53 crc kubenswrapper[4606]: I0123 16:52:53.516216 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 16:52:53 crc kubenswrapper[4606]: I0123 16:52:53.516666 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:52:53 crc kubenswrapper[4606]: I0123 16:52:53.517841 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 16:52:53 crc kubenswrapper[4606]: I0123 16:52:53.517892 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:52:53 crc kubenswrapper[4606]: I0123 16:52:53.517908 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:52:53 crc kubenswrapper[4606]: I0123 16:52:53.521537 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 16:52:53 crc kubenswrapper[4606]: I0123 16:52:53.578568 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:52:53 crc kubenswrapper[4606]: I0123 16:52:53.889376 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 23 16:52:53 crc kubenswrapper[4606]: I0123 16:52:53.889617 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:52:53 crc kubenswrapper[4606]: I0123 16:52:53.890490 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:52:53 crc kubenswrapper[4606]: I0123 16:52:53.890533 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:52:53 crc kubenswrapper[4606]: I0123 16:52:53.890550 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:52:54 crc kubenswrapper[4606]: I0123 16:52:54.305268 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 13:44:30.848906111 +0000 UTC Jan 23 16:52:54 crc kubenswrapper[4606]: I0123 16:52:54.427650 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:52:54 crc kubenswrapper[4606]: I0123 16:52:54.427720 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:52:54 crc kubenswrapper[4606]: I0123 16:52:54.428877 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:52:54 crc kubenswrapper[4606]: I0123 16:52:54.428933 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:52:54 crc kubenswrapper[4606]: I0123 16:52:54.428951 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:52:54 crc kubenswrapper[4606]: I0123 16:52:54.429081 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:52:54 crc kubenswrapper[4606]: I0123 16:52:54.429109 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:52:54 crc kubenswrapper[4606]: I0123 16:52:54.429120 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:52:54 crc kubenswrapper[4606]: I0123 16:52:54.944648 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 16:52:55 crc kubenswrapper[4606]: I0123 16:52:55.305774 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: 
Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 07:21:59.080955011 +0000 UTC Jan 23 16:52:55 crc kubenswrapper[4606]: I0123 16:52:55.429485 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:52:55 crc kubenswrapper[4606]: I0123 16:52:55.430307 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:52:55 crc kubenswrapper[4606]: I0123 16:52:55.430356 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:52:55 crc kubenswrapper[4606]: I0123 16:52:55.430367 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:52:56 crc kubenswrapper[4606]: I0123 16:52:56.040701 4606 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 16:52:56 crc kubenswrapper[4606]: I0123 16:52:56.040823 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 16:52:56 crc kubenswrapper[4606]: I0123 16:52:56.197731 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 23 16:52:56 crc kubenswrapper[4606]: I0123 16:52:56.197974 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:52:56 crc kubenswrapper[4606]: I0123 16:52:56.199422 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:52:56 crc kubenswrapper[4606]: I0123 16:52:56.199498 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:52:56 crc kubenswrapper[4606]: I0123 16:52:56.199522 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:52:56 crc kubenswrapper[4606]: I0123 16:52:56.306900 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 12:39:52.024115994 +0000 UTC Jan 23 16:52:56 crc kubenswrapper[4606]: E0123 16:52:56.505836 4606 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 23 16:52:57 crc kubenswrapper[4606]: I0123 16:52:57.307134 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 05:36:28.586936145 +0000 UTC Jan 23 16:52:58 crc kubenswrapper[4606]: I0123 16:52:58.308202 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 04:58:19.535113678 +0000 UTC Jan 23 16:52:58 crc kubenswrapper[4606]: I0123 16:52:58.779461 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 16:52:58 crc kubenswrapper[4606]: I0123 16:52:58.779606 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:52:58 crc kubenswrapper[4606]: I0123 16:52:58.780708 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:52:58 crc kubenswrapper[4606]: I0123 16:52:58.780748 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:52:58 crc kubenswrapper[4606]: I0123 16:52:58.780761 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:52:59 crc kubenswrapper[4606]: I0123 16:52:59.308575 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 18:48:25.694438011 +0000 UTC Jan 23 16:53:00 crc kubenswrapper[4606]: W0123 16:53:00.046831 4606 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 23 16:53:00 crc kubenswrapper[4606]: I0123 16:53:00.046938 4606 trace.go:236] Trace[459764649]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 16:52:50.045) (total time: 10001ms): Jan 23 16:53:00 crc kubenswrapper[4606]: Trace[459764649]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (16:53:00.046) Jan 23 16:53:00 crc kubenswrapper[4606]: Trace[459764649]: [10.001194792s] [10.001194792s] END Jan 23 16:53:00 crc kubenswrapper[4606]: E0123 16:53:00.046962 4606 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 23 16:53:00 crc kubenswrapper[4606]: I0123 16:53:00.299402 4606 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 23 16:53:00 crc kubenswrapper[4606]: I0123 16:53:00.309103 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 07:46:34.455096311 +0000 UTC Jan 23 16:53:00 crc kubenswrapper[4606]: W0123 16:53:00.696856 4606 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 23 16:53:00 crc kubenswrapper[4606]: I0123 16:53:00.696994 4606 trace.go:236] Trace[1132713648]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 16:52:50.695) (total time: 10001ms): Jan 23 16:53:00 crc kubenswrapper[4606]: Trace[1132713648]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (16:53:00.696) Jan 23 16:53:00 crc 
kubenswrapper[4606]: Trace[1132713648]: [10.001226499s] [10.001226499s] END Jan 23 16:53:00 crc kubenswrapper[4606]: E0123 16:53:00.697028 4606 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 23 16:53:01 crc kubenswrapper[4606]: I0123 16:53:01.325261 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 00:37:05.307201368 +0000 UTC Jan 23 16:53:01 crc kubenswrapper[4606]: I0123 16:53:01.858192 4606 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 23 16:53:01 crc kubenswrapper[4606]: I0123 16:53:01.858285 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 23 16:53:01 crc kubenswrapper[4606]: I0123 16:53:01.899742 4606 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 23 16:53:01 crc kubenswrapper[4606]: I0123 16:53:01.899848 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 23 16:53:02 crc kubenswrapper[4606]: I0123 16:53:02.326401 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 09:39:44.499531083 +0000 UTC Jan 23 16:53:03 crc kubenswrapper[4606]: I0123 16:53:03.327033 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 18:02:49.163103185 +0000 UTC Jan 23 16:53:03 crc kubenswrapper[4606]: I0123 16:53:03.339798 4606 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 23 16:53:03 crc kubenswrapper[4606]: [+]log ok Jan 23 16:53:03 crc kubenswrapper[4606]: [+]etcd ok Jan 23 16:53:03 crc kubenswrapper[4606]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 23 16:53:03 crc kubenswrapper[4606]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 23 16:53:03 crc kubenswrapper[4606]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 23 16:53:03 crc kubenswrapper[4606]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 23 16:53:03 crc 
kubenswrapper[4606]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 23 16:53:03 crc kubenswrapper[4606]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 23 16:53:03 crc kubenswrapper[4606]: [+]poststarthook/generic-apiserver-start-informers ok Jan 23 16:53:03 crc kubenswrapper[4606]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 23 16:53:03 crc kubenswrapper[4606]: [+]poststarthook/priority-and-fairness-filter ok Jan 23 16:53:03 crc kubenswrapper[4606]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 23 16:53:03 crc kubenswrapper[4606]: [+]poststarthook/start-apiextensions-informers ok Jan 23 16:53:03 crc kubenswrapper[4606]: [+]poststarthook/start-apiextensions-controllers ok Jan 23 16:53:03 crc kubenswrapper[4606]: [+]poststarthook/crd-informer-synced ok Jan 23 16:53:03 crc kubenswrapper[4606]: [+]poststarthook/start-system-namespaces-controller ok Jan 23 16:53:03 crc kubenswrapper[4606]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 23 16:53:03 crc kubenswrapper[4606]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 23 16:53:03 crc kubenswrapper[4606]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 23 16:53:03 crc kubenswrapper[4606]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 23 16:53:03 crc kubenswrapper[4606]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 23 16:53:03 crc kubenswrapper[4606]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 23 16:53:03 crc kubenswrapper[4606]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Jan 23 16:53:03 crc kubenswrapper[4606]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 23 16:53:03 crc kubenswrapper[4606]: [+]poststarthook/bootstrap-controller ok Jan 23 16:53:03 crc kubenswrapper[4606]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 23 16:53:03 crc kubenswrapper[4606]: [+]poststarthook/start-kube-aggregator-informers ok Jan 23 16:53:03 crc kubenswrapper[4606]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 23 16:53:03 crc kubenswrapper[4606]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 23 16:53:03 crc kubenswrapper[4606]: [+]poststarthook/apiservice-registration-controller ok Jan 23 16:53:03 crc kubenswrapper[4606]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 23 16:53:03 crc kubenswrapper[4606]: [+]poststarthook/apiservice-discovery-controller ok Jan 23 16:53:03 crc kubenswrapper[4606]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 23 16:53:03 crc kubenswrapper[4606]: [+]autoregister-completion ok Jan 23 16:53:03 crc kubenswrapper[4606]: [+]poststarthook/apiservice-openapi-controller ok Jan 23 16:53:03 crc kubenswrapper[4606]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 23 16:53:03 crc kubenswrapper[4606]: livez check failed Jan 23 16:53:03 crc kubenswrapper[4606]: I0123 16:53:03.339872 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 16:53:03 crc kubenswrapper[4606]: I0123 16:53:03.911290 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 23 16:53:03 crc kubenswrapper[4606]: I0123 16:53:03.911428 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Jan 23 16:53:03 crc kubenswrapper[4606]: I0123 16:53:03.912359 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:03 crc kubenswrapper[4606]: I0123 16:53:03.912388 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:03 crc kubenswrapper[4606]: I0123 16:53:03.912400 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:03 crc kubenswrapper[4606]: I0123 16:53:03.931122 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 23 16:53:04 crc kubenswrapper[4606]: I0123 16:53:04.328011 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 04:20:56.075995774 +0000 UTC Jan 23 16:53:04 crc kubenswrapper[4606]: I0123 16:53:04.470749 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:53:04 crc kubenswrapper[4606]: I0123 16:53:04.472276 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:04 crc kubenswrapper[4606]: I0123 16:53:04.472327 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:04 crc kubenswrapper[4606]: I0123 16:53:04.472345 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:05 crc kubenswrapper[4606]: I0123 16:53:05.328563 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 15:47:03.507776425 +0000 UTC Jan 23 16:53:06 crc kubenswrapper[4606]: I0123 16:53:06.040569 4606 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 16:53:06 crc kubenswrapper[4606]: I0123 16:53:06.040669 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 16:53:06 crc kubenswrapper[4606]: I0123 16:53:06.329049 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 13:27:24.744067977 +0000 UTC Jan 23 16:53:06 crc kubenswrapper[4606]: E0123 16:53:06.505974 4606 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 23 16:53:06 crc kubenswrapper[4606]: E0123 16:53:06.824129 4606 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 23 16:53:06 crc kubenswrapper[4606]: I0123 16:53:06.826102 4606 
trace.go:236] Trace[2139992354]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 16:52:54.765) (total time: 12060ms): Jan 23 16:53:06 crc kubenswrapper[4606]: Trace[2139992354]: ---"Objects listed" error: 12060ms (16:53:06.826) Jan 23 16:53:06 crc kubenswrapper[4606]: Trace[2139992354]: [12.060465346s] [12.060465346s] END Jan 23 16:53:06 crc kubenswrapper[4606]: I0123 16:53:06.826135 4606 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 23 16:53:06 crc kubenswrapper[4606]: I0123 16:53:06.828551 4606 trace.go:236] Trace[19082950]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 16:52:54.099) (total time: 12728ms): Jan 23 16:53:06 crc kubenswrapper[4606]: Trace[19082950]: ---"Objects listed" error: 12728ms (16:53:06.828) Jan 23 16:53:06 crc kubenswrapper[4606]: Trace[19082950]: [12.728917513s] [12.728917513s] END Jan 23 16:53:06 crc kubenswrapper[4606]: I0123 16:53:06.828796 4606 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 23 16:53:06 crc kubenswrapper[4606]: I0123 16:53:06.828749 4606 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 23 16:53:06 crc kubenswrapper[4606]: E0123 16:53:06.861807 4606 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 23 16:53:06 crc kubenswrapper[4606]: I0123 16:53:06.865859 4606 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 23 16:53:06 crc kubenswrapper[4606]: I0123 16:53:06.925088 4606 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36972->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 23 16:53:06 crc kubenswrapper[4606]: I0123 16:53:06.925144 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36972->192.168.126.11:17697: read: connection reset by peer" Jan 23 16:53:06 crc kubenswrapper[4606]: I0123 16:53:06.925243 4606 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36980->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 23 16:53:06 crc kubenswrapper[4606]: I0123 16:53:06.925339 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36980->192.168.126.11:17697: read: connection reset by peer" Jan 23 16:53:06 crc kubenswrapper[4606]: I0123 16:53:06.979344 4606 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.329484 4606 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 08:24:19.798227152 +0000 UTC Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.339980 4606 apiserver.go:52] "Watching apiserver" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.360653 4606 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.360999 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.361260 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.361970 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:53:07 crc kubenswrapper[4606]: E0123 16:53:07.362030 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.362047 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:53:07 crc kubenswrapper[4606]: E0123 16:53:07.362417 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.362868 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.362912 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:53:07 crc kubenswrapper[4606]: E0123 16:53:07.363000 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.362921 4606 util.go:30] "No sandbox for pod can be found. 
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.365259 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.365713 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.365748 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.366688 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.366862 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.369202 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.369320 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.370104 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.373700 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.401151 4606 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.441120 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.441172 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.441199 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.441218 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.441239 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
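[annotation] From here the log is dominated by volume reconciliation. The desired-state-of-world populator has just finished, and the reconciler compares desired state against actual state: every volume still mounted for a pod that is no longer desired gets an UnmountVolume operation started. A toy sketch of that reconcile pattern, with example values taken from the entries above; the real logic in reconciler_common.go is considerably more involved:

```go
// Toy sketch (not kubelet source) of the desired-vs-actual reconcile
// behind the UnmountVolume flood: mounted volumes whose pod is absent
// from the desired state of world get an unmount operation.
package main

import "fmt"

// mountedVolume mirrors the fields each log entry prints: volume name,
// plugin-qualified UniqueName, and the owning pod UID.
type mountedVolume struct {
	Name       string
	UniqueName string
	PodUID     string
}

func reconcile(desired map[string]bool, actual []mountedVolume) {
	for _, v := range actual {
		if !desired[v.PodUID] { // pod no longer in the desired state of world
			fmt.Printf("operationExecutor.UnmountVolume started for volume %q (UniqueName: %q) pod %q\n",
				v.Name, v.UniqueName, v.PodUID)
		}
	}
}

func main() {
	actual := []mountedVolume{
		// Example values copied from the log lines above.
		{"trusted-ca-bundle", "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle", "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"},
		{"env-overrides", "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides", "6ea678ab-3438-413e-bfe3-290ae7725660"},
	}
	reconcile(map[string]bool{}, actual) // empty desired state: unmount everything
}
```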
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.441259 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.441279 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.441301 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.441321 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.441339 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.441359 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.441383 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.441400 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.441420 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.441440 4606 
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.441459 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.441479 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.441502 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.441522 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.441545 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.441565 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.441602 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.441624 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.441645 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 23 16:53:07
crc kubenswrapper[4606]: I0123 16:53:07.441662 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.441681 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.441701 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.441723 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.441743 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.441766 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.441787 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.441805 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.441826 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.441851 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 16:53:07 crc 
kubenswrapper[4606]: I0123 16:53:07.441875 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.441898 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.441928 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.441949 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.441972 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.441994 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442014 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442036 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442060 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442086 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod 
\"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442107 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442127 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442147 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442168 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442188 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442208 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442228 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442250 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442271 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442292 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") 
pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442314 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442338 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442359 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442378 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442398 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442417 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442436 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442456 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442474 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442493 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442513 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442533 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442552 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442573 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442613 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442632 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442650 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442668 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442685 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442704 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442724 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442855 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442877 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442898 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442918 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442941 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442960 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.442981 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443000 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443021 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443042 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443083 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443110 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443129 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443149 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443169 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443190 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443212 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443236 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443255 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443276 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443298 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443319 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443341 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443362 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443384 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443404 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443426 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443447 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 
16:53:07.443468 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443488 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443510 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443533 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443555 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443576 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443615 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443636 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443658 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443681 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 23 16:53:07 crc kubenswrapper[4606]: 
I0123 16:53:07.443702 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443722 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443744 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443765 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443786 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443810 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443832 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443853 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443873 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443893 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 
16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443917 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443938 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443958 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.443980 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444001 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444021 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444043 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444065 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444086 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444107 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 23 
16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444126 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444146 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444168 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444193 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444216 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444238 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444264 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444285 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444308 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444329 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 16:53:07 
crc kubenswrapper[4606]: I0123 16:53:07.444351 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444401 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444425 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444446 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444469 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444490 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444512 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444536 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444559 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444595 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444618 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444641 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444669 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444691 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444711 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444730 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444751 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444773 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444795 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444816 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444837 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444860 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444881 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444903 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444926 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444947 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444968 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.444990 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.445013 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.445036 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.445061 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.445094 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.445116 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.445139 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.445161 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.445183 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.445205 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.445228 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.445253 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.445278 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.445300 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.445323 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.445345 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.445367 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.445392 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.445414 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.445437 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.445460 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.445484 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.445507 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.445531 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.445554 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.445617 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.445649 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.445677 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.445701 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.445726 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.445754 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.445778 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.445810 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.445835 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.445856 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.445880 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.445904 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.445929 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.445953 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.446885 4606 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.448288 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.448638 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.448663 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.448808 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.448827 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.448943 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.448983 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.449096 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.449145 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.449279 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.449307 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.449387 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.449408 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.449450 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.449531 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.449563 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.449747 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.449763 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.449873 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.449998 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.450124 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.450130 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.450143 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.450307 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.450312 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.450447 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.450470 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.450619 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.450618 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.450890 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.450900 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.450946 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.451009 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.451067 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.451130 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.451136 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.451215 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.451241 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.451336 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.451386 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.451402 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.451506 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.451519 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.453014 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.453064 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.453244 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.453313 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.453415 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.453434 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.453618 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.453750 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.453897 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.454229 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.454421 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.454761 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.455125 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.455385 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.455692 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.455698 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.455709 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.455982 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.456073 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.456228 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.456239 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.456635 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.456652 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.456888 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.456909 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.457115 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.457153 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.457330 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.457392 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.457642 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.458095 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.458333 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.458526 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.458744 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.458918 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.459102 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.459548 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.459898 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.460123 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.461110 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.461326 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.461673 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.461901 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.461963 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.462119 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.462132 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.462180 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.462375 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.462419 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.462609 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.462699 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.462697 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.462754 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.462767 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.462884 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.462975 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.462994 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.463220 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.463344 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.463436 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.463557 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.463692 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.463840 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.463979 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.464019 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.464110 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.464190 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.464296 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.464445 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.464492 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.464799 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.465138 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.476678 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.457013 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.477722 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.477984 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.478166 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.481064 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.481680 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.482017 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.482245 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.482757 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.483059 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.483570 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.483865 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.484052 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.484251 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.484507 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.484764 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.485225 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.485468 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.485754 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.486179 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.486436 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.486648 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.486814 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.487060 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.491433 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.491682 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.491706 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.491806 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.491889 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.492093 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.492282 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.492361 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.492381 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.492557 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.492691 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.492797 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.493682 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.493771 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.493911 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.494210 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.494321 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.494332 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.494549 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.495021 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.495050 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.495526 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.496172 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.498346 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: E0123 16:53:07.498870 4606 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 16:53:07 crc kubenswrapper[4606]: E0123 16:53:07.499050 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 16:53:07.999006348 +0000 UTC m=+21.999198834 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 16:53:07 crc kubenswrapper[4606]: E0123 16:53:07.509110 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:53:08.009082425 +0000 UTC m=+22.009274901 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.509267 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.509468 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.509597 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.509773 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.510058 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 16:53:07 crc kubenswrapper[4606]: E0123 16:53:07.510072 4606 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 16:53:07 crc kubenswrapper[4606]: E0123 16:53:07.510235 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 16:53:08.010216858 +0000 UTC m=+22.010409264 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.510426 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.510866 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.511200 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.511260 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.511437 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.511542 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.511607 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.512101 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.512221 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.512572 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.512833 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.512947 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.513135 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.513166 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.513608 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.513847 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.513999 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.514041 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.514347 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.514442 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.514452 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.518036 4606 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020" exitCode=255 Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.518094 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020"} Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.525454 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.525777 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.525982 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.526025 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.542293 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.549793 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.549854 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.549912 4606 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.549925 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.549938 4606 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.549949 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.549960 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.549971 4606 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.549984 4606 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550016 4606 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550028 4606 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550038 4606 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node 
\"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550049 4606 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550062 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550075 4606 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550090 4606 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550103 4606 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550114 4606 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550126 4606 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550135 4606 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550145 4606 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550156 4606 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550166 4606 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550178 4606 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550188 4606 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: 
I0123 16:53:07.550199 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550209 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550222 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550232 4606 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550237 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550243 4606 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550276 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550287 4606 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550298 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550309 4606 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550320 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550331 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550341 4606 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: 
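The reconciler_common.go:293 records above and below are the kubelet's volume manager clearing its actual state of world for pods that were torn down before this restart; each record names the volume and a UniqueName whose prefix identifies the volume plugin (kubernetes.io/secret, configmap, projected, empty-dir, host-path). A minimal, hypothetical Go helper for summarizing such a log, assuming only the record shape visible here (the escaped quotes are literal in the log text):

package main

import (
    "bufio"
    "fmt"
    "os"
    "regexp"
)

// detachRe matches the reconciler_common.go:293 records above, e.g.
//   "Volume detached for volume \"metrics-tls\" (UniqueName:
//    \"kubernetes.io/secret/96b9...-metrics-tls\") on node \"crc\" ..."
var detachRe = regexp.MustCompile(`Volume detached for volume \\"([^\\]+)\\" \(UniqueName: \\"(kubernetes\.io/[a-z-]+)/`)

func main() {
    counts := map[string]int{} // plugin -> number of detach records
    sc := bufio.NewScanner(os.Stdin)
    sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // status entries are long
    for sc.Scan() {
        if m := detachRe.FindStringSubmatch(sc.Text()); m != nil {
            counts[m[2]]++
        }
    }
    for plugin, n := range counts {
        fmt.Printf("%-30s %d\n", plugin, n)
    }
}

Fed this log on stdin (go run tally.go < kubelet.log), it prints a per-plugin count of the detach records, which makes the scale of the post-restart cleanup in this window easy to see.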
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550352 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550363 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550373 4606 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550383 4606 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550396 4606 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550409 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550420 4606 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550431 4606 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550445 4606 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550456 4606 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550469 4606 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550479 4606 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550490 4606 reconciler_common.go:293] "Volume detached for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550500 4606 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550510 4606 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550520 4606 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550530 4606 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550539 4606 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550549 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550563 4606 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550574 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550602 4606 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550614 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550624 4606 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550635 4606 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550645 4606 reconciler_common.go:293] 
"Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550656 4606 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550667 4606 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550749 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550806 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550829 4606 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550843 4606 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550855 4606 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550871 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550895 4606 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550907 4606 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550919 4606 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550932 4606 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550943 4606 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550955 4606 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550968 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550980 4606 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.550993 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551006 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551018 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551030 4606 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551042 4606 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551053 4606 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551065 4606 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551077 4606 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551088 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: 
\"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551099 4606 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551110 4606 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551124 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551134 4606 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551146 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551159 4606 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551170 4606 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551181 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551193 4606 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551204 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551216 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551227 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551241 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: 
\"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551253 4606 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551251 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551265 4606 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551278 4606 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551291 4606 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551303 4606 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551314 4606 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551325 4606 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551336 4606 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551350 4606 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551361 4606 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551373 4606 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath 
\"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551386 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551397 4606 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551408 4606 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551419 4606 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551430 4606 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551442 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551453 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551464 4606 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551475 4606 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551486 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551498 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551509 4606 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551520 4606 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath 
\"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551530 4606 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551540 4606 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551550 4606 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551561 4606 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551571 4606 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551602 4606 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551615 4606 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551628 4606 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551640 4606 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551650 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551660 4606 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551672 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551683 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551694 4606 
reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551705 4606 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551716 4606 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551727 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551739 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551751 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551762 4606 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551773 4606 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551790 4606 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551801 4606 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551815 4606 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551829 4606 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551841 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551852 4606 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551864 4606 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551875 4606 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551887 4606 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551900 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551911 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551924 4606 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551936 4606 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551948 4606 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551960 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551971 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551983 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.551994 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.552005 4606 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\""
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.552015 4606 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.552025 4606 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\""
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.552036 4606 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.552047 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.552058 4606 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.552069 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\""
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.552080 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\""
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.552090 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.552100 4606 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.552112 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\""
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.552125 4606 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.552136 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\""
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.552147 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\""
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.552157 4606 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.552168 4606 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\""
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.552178 4606 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.552189 4606 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.552200 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.552211 4606 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.552223 4606 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\""
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.552234 4606 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.552245 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.552258 4606 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.552269 4606 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.552280 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.552291 4606 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.552853 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.559863 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.560148 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:53:07 crc kubenswrapper[4606]: E0123 16:53:07.574852 4606 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 23 16:53:07 crc kubenswrapper[4606]: E0123 16:53:07.574884 4606 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 23 16:53:07 crc kubenswrapper[4606]: E0123 16:53:07.574896 4606 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 16:53:07 crc kubenswrapper[4606]: E0123 16:53:07.574952 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 16:53:08.074935827 +0000 UTC m=+22.075128233 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.584766 4606 csr.go:261] certificate signing request csr-jfszq is approved, waiting to be issued
Jan 23 16:53:07 crc kubenswrapper[4606]: E0123 16:53:07.584995 4606 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 23 16:53:07 crc kubenswrapper[4606]: E0123 16:53:07.585008 4606 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 23 16:53:07 crc kubenswrapper[4606]: E0123 16:53:07.585019 4606 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 16:53:07 crc kubenswrapper[4606]: E0123 16:53:07.585066 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 16:53:08.085048724 +0000 UTC m=+22.085241130 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
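The two E-level clusters above fail the same way: the projected service-account volumes for the network-diagnostics pods cannot be assembled because the kube-root-ca.crt and openshift-service-ca.crt configmaps are not yet registered with the restarting kubelet, so nestedpendingoperations.go:348 schedules a retry, here after 500ms. A sketch of the implied retry schedule; only the initial 500ms is evidenced by this log, while the doubling and the cap on the order of two minutes are assumptions about the volume manager's usual exponential backoff:

package main

import (
    "fmt"
    "time"
)

func main() {
    // Only the initial 500ms is directly evidenced by the log
    // ("durationBeforeRetry 500ms"); the doubling and the cap are
    // assumed exponential-backoff behavior, not read from this log.
    const maxDelay = 2*time.Minute + 2*time.Second
    delay := 500 * time.Millisecond
    for attempt := 1; attempt <= 10; attempt++ {
        fmt.Printf("attempt %2d: wait %v\n", attempt, delay)
        delay *= 2
        if delay > maxDelay {
            delay = maxDelay
        }
    }
}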
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.586730 4606 scope.go:117] "RemoveContainer" containerID="5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020"
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.588444 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.588620 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.595895 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.596670 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.646448 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
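The status_manager.go:875 failures above and below are a bootstrap ordering artifact: the kubelet's status patch must pass the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743, which is served by a pod this same node is still starting, so the Post fails with connection refused and the patch is retried later. A hypothetical helper that pulls the webhook name, URL, and dial error out of such records (the escaped quotes are literal in the log text):

package main

import (
    "bufio"
    "fmt"
    "os"
    "regexp"
)

// webhookErrRe matches status_manager.go:875 records like the ones here:
//   failed calling webhook \"pod.network-node-identity.openshift.io\":
//   failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\":
//   dial tcp 127.0.0.1:9743: connect: connection refused
var webhookErrRe = regexp.MustCompile(
    `failed calling webhook \\"([^\\]+)\\".*Post \\"([^\\]+)\\": (dial tcp [^"]+)`)

func main() {
    sc := bufio.NewScanner(os.Stdin)
    sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // these records run to kilobytes
    for sc.Scan() {
        if m := webhookErrRe.FindStringSubmatch(sc.Text()); m != nil {
            fmt.Printf("webhook %s at %s failed: %s\n", m[1], m[2], m[3])
        }
    }
}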
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.655128 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.655160 4606 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.655169 4606 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.655177 4606 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.655186 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.660827 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.676553 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.676712 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.690543 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.690621 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.700765 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.718654 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 16:53:07 crc kubenswrapper[4606]: W0123 16:53:07.744784 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-b3caa4b6b45c9371ec83113dabd21b8674a00728f268d6d0c803e708824c64a2 WatchSource:0}: Error finding container b3caa4b6b45c9371ec83113dabd21b8674a00728f268d6d0c803e708824c64a2: Status 404 returned error can't find the container with id b3caa4b6b45c9371ec83113dabd21b8674a00728f268d6d0c803e708824c64a2 Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.749499 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.790067 4606 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.795489 4606 csr.go:257] certificate signing request csr-jfszq is issued Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.884777 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.897142 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.920917 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 16:53:07 crc kubenswrapper[4606]: I0123 16:53:07.937472 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.002189 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.053625 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.089078 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.102243 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.102330 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.102356 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.102382 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.102410 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:53:08 crc kubenswrapper[4606]: E0123 16:53:08.102612 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:53:09.102576384 +0000 UTC m=+23.102768790 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:53:08 crc kubenswrapper[4606]: E0123 16:53:08.102616 4606 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 16:53:08 crc kubenswrapper[4606]: E0123 16:53:08.102645 4606 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 16:53:08 crc kubenswrapper[4606]: E0123 16:53:08.102660 4606 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:53:08 crc kubenswrapper[4606]: E0123 16:53:08.102674 4606 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 16:53:08 crc kubenswrapper[4606]: E0123 16:53:08.102688 4606 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 16:53:08 crc kubenswrapper[4606]: E0123 16:53:08.102698 4606 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:53:08 crc kubenswrapper[4606]: E0123 16:53:08.102727 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 16:53:09.102712307 +0000 UTC m=+23.102904713 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:53:08 crc kubenswrapper[4606]: E0123 16:53:08.102744 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 16:53:09.102736978 +0000 UTC m=+23.102929384 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:53:08 crc kubenswrapper[4606]: E0123 16:53:08.102749 4606 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 16:53:08 crc kubenswrapper[4606]: E0123 16:53:08.102778 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 16:53:09.102769088 +0000 UTC m=+23.102961494 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 16:53:08 crc kubenswrapper[4606]: E0123 16:53:08.102824 4606 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 16:53:08 crc kubenswrapper[4606]: E0123 16:53:08.102850 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 16:53:09.10284261 +0000 UTC m=+23.103035016 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.169813 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.329868 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 05:14:39.75642567 +0000 UTC Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.378369 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.378871 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.380081 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.380792 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.381681 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.382132 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.382670 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.383720 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.384296 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.385758 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.386204 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.387301 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.387756 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.388241 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.389098 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.389553 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.390988 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.391340 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.392295 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.392833 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.393410 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.394366 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.394827 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.396364 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.397268 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.398012 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.399053 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.399702 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.400989 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.401477 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.402188 4606 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.402275 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.403823 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.404630 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.404991 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.406396 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.407211 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.407678 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.408631 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.409224 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.409985 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.410517 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.411416 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.412192 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.412622 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.413436 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.413956 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.415291 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.415753 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.419537 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.420019 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.420617 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.421543 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.421987 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.422844 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.496323 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.514742 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.533999 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931"} Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.534265 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6"} Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.534369 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8a461866a98aa1ca0cc1ed94b401742590f35fc773fe6aaa90e42c28854bdefc"} Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.534886 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b3caa4b6b45c9371ec83113dabd21b8674a00728f268d6d0c803e708824c64a2"} Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.536107 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164"} Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.536223 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a4b4c072ba1030d7392241bf1a5a115e08a0336268df7c2cbec31e20ee55a1e7"} Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.538236 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.539568 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39"} Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.540180 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.567698 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.569217 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.626538 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.690123 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.753207 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.774936 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23
T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.797052 4606 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-23 16:48:07 +0000 UTC, rotation deadline is 2026-11-17 15:04:35.479878895 +0000 UTC Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.797103 4606 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7150h11m26.682779394s for next certificate rotation Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.815542 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.840261 4606 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.843119 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.859740 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.874088 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.901642 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.921779 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 16:53:08 crc kubenswrapper[4606]: I0123 16:53:08.947721 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.112749 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.112883 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.112918 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:53:09 crc kubenswrapper[4606]: 
I0123 16:53:09.112954 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.112979 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 16:53:09 crc kubenswrapper[4606]: E0123 16:53:09.113096 4606 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 23 16:53:09 crc kubenswrapper[4606]: E0123 16:53:09.113181 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 16:53:11.113157139 +0000 UTC m=+25.113349545 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 23 16:53:09 crc kubenswrapper[4606]: E0123 16:53:09.113777 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:53:11.113763832 +0000 UTC m=+25.113956238 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:53:09 crc kubenswrapper[4606]: E0123 16:53:09.113877 4606 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 23 16:53:09 crc kubenswrapper[4606]: E0123 16:53:09.113915 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 16:53:11.113906965 +0000 UTC m=+25.114099371 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 23 16:53:09 crc kubenswrapper[4606]: E0123 16:53:09.113997 4606 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 23 16:53:09 crc kubenswrapper[4606]: E0123 16:53:09.114019 4606 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 23 16:53:09 crc kubenswrapper[4606]: E0123 16:53:09.114034 4606 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 16:53:09 crc kubenswrapper[4606]: E0123 16:53:09.114064 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 16:53:11.114056149 +0000 UTC m=+25.114248555 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 16:53:09 crc kubenswrapper[4606]: E0123 16:53:09.114118 4606 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 23 16:53:09 crc kubenswrapper[4606]: E0123 16:53:09.114132 4606 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 23 16:53:09 crc kubenswrapper[4606]: E0123 16:53:09.114142 4606 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 16:53:09 crc kubenswrapper[4606]: E0123 16:53:09.114168 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 16:53:11.114159881 +0000 UTC m=+25.114352287 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.330312 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 15:51:42.000610907 +0000 UTC
Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.365188 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 16:53:09 crc kubenswrapper[4606]: E0123 16:53:09.365522 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.365347 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 16:53:09 crc kubenswrapper[4606]: E0123 16:53:09.365785 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.365303 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 16:53:09 crc kubenswrapper[4606]: E0123 16:53:09.365944 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.432362 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-m9sb7"]
Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.432713 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.436782 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.437239 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-p4c8f"] Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.437526 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-p4c8f" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.438032 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.438199 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 23 16:53:09 crc kubenswrapper[4606]: W0123 16:53:09.448769 4606 reflector.go:561] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": failed to list *v1.Secret: secrets "node-resolver-dockercfg-kz9s7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Jan 23 16:53:09 crc kubenswrapper[4606]: E0123 16:53:09.448806 4606 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"node-resolver-dockercfg-kz9s7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-resolver-dockercfg-kz9s7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 16:53:09 crc kubenswrapper[4606]: W0123 16:53:09.448830 4606 reflector.go:561] object-"openshift-dns"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Jan 23 16:53:09 crc kubenswrapper[4606]: W0123 16:53:09.448849 4606 reflector.go:561] object-"openshift-dns"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Jan 23 16:53:09 crc kubenswrapper[4606]: E0123 16:53:09.448861 4606 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 16:53:09 crc kubenswrapper[4606]: E0123 16:53:09.448864 4606 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 
16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.451269 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.451413 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.516351 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a8ea72e8-0104-491a-9ccc-49199db0d3d9-mcd-auth-proxy-config\") pod \"machine-config-daemon-m9sb7\" (UID: \"a8ea72e8-0104-491a-9ccc-49199db0d3d9\") " pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.516403 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzlv2\" (UniqueName: \"kubernetes.io/projected/8a063622-dd68-4217-89ac-f220683a1b73-kube-api-access-tzlv2\") pod \"node-resolver-p4c8f\" (UID: \"8a063622-dd68-4217-89ac-f220683a1b73\") " pod="openshift-dns/node-resolver-p4c8f" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.516441 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a8ea72e8-0104-491a-9ccc-49199db0d3d9-proxy-tls\") pod \"machine-config-daemon-m9sb7\" (UID: \"a8ea72e8-0104-491a-9ccc-49199db0d3d9\") " pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.516463 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvdz7\" (UniqueName: \"kubernetes.io/projected/a8ea72e8-0104-491a-9ccc-49199db0d3d9-kube-api-access-xvdz7\") pod \"machine-config-daemon-m9sb7\" (UID: \"a8ea72e8-0104-491a-9ccc-49199db0d3d9\") " pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.516482 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a8ea72e8-0104-491a-9ccc-49199db0d3d9-rootfs\") pod \"machine-config-daemon-m9sb7\" (UID: \"a8ea72e8-0104-491a-9ccc-49199db0d3d9\") " pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.516498 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8a063622-dd68-4217-89ac-f220683a1b73-hosts-file\") pod \"node-resolver-p4c8f\" (UID: \"8a063622-dd68-4217-89ac-f220683a1b73\") " pod="openshift-dns/node-resolver-p4c8f" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.531102 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.553771 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.574812 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.602784 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.617713 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a8ea72e8-0104-491a-9ccc-49199db0d3d9-proxy-tls\") pod \"machine-config-daemon-m9sb7\" (UID: \"a8ea72e8-0104-491a-9ccc-49199db0d3d9\") " pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.618051 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvdz7\" (UniqueName: \"kubernetes.io/projected/a8ea72e8-0104-491a-9ccc-49199db0d3d9-kube-api-access-xvdz7\") pod \"machine-config-daemon-m9sb7\" (UID: \"a8ea72e8-0104-491a-9ccc-49199db0d3d9\") " pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.618174 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a8ea72e8-0104-491a-9ccc-49199db0d3d9-rootfs\") pod \"machine-config-daemon-m9sb7\" (UID: \"a8ea72e8-0104-491a-9ccc-49199db0d3d9\") " pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.618324 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8a063622-dd68-4217-89ac-f220683a1b73-hosts-file\") pod \"node-resolver-p4c8f\" (UID: \"8a063622-dd68-4217-89ac-f220683a1b73\") " pod="openshift-dns/node-resolver-p4c8f" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.618324 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a8ea72e8-0104-491a-9ccc-49199db0d3d9-rootfs\") pod \"machine-config-daemon-m9sb7\" (UID: \"a8ea72e8-0104-491a-9ccc-49199db0d3d9\") " pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.618375 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8a063622-dd68-4217-89ac-f220683a1b73-hosts-file\") pod \"node-resolver-p4c8f\" (UID: \"8a063622-dd68-4217-89ac-f220683a1b73\") " pod="openshift-dns/node-resolver-p4c8f" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.618671 4606 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a8ea72e8-0104-491a-9ccc-49199db0d3d9-mcd-auth-proxy-config\") pod \"machine-config-daemon-m9sb7\" (UID: \"a8ea72e8-0104-491a-9ccc-49199db0d3d9\") " pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.619126 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzlv2\" (UniqueName: \"kubernetes.io/projected/8a063622-dd68-4217-89ac-f220683a1b73-kube-api-access-tzlv2\") pod \"node-resolver-p4c8f\" (UID: \"8a063622-dd68-4217-89ac-f220683a1b73\") " pod="openshift-dns/node-resolver-p4c8f" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.619167 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.619504 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a8ea72e8-0104-491a-9ccc-49199db0d3d9-mcd-auth-proxy-config\") pod \"machine-config-daemon-m9sb7\" (UID: \"a8ea72e8-0104-491a-9ccc-49199db0d3d9\") " pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.621686 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a8ea72e8-0104-491a-9ccc-49199db0d3d9-proxy-tls\") pod \"machine-config-daemon-m9sb7\" (UID: \"a8ea72e8-0104-491a-9ccc-49199db0d3d9\") " pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.637051 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.648107 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvdz7\" (UniqueName: \"kubernetes.io/projected/a8ea72e8-0104-491a-9ccc-49199db0d3d9-kube-api-access-xvdz7\") pod \"machine-config-daemon-m9sb7\" (UID: \"a8ea72e8-0104-491a-9ccc-49199db0d3d9\") " pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.650380 
4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.677187 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.690930 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.705402 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.720516 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.737687 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.763533 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.763608 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" Jan 23 16:53:09 crc kubenswrapper[4606]: W0123 16:53:09.782005 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8ea72e8_0104_491a_9ccc_49199db0d3d9.slice/crio-e1f19267d61add80cbe914d1877550adc66e48e7d10ffdb96ca87998f267fb46 WatchSource:0}: Error finding container e1f19267d61add80cbe914d1877550adc66e48e7d10ffdb96ca87998f267fb46: Status 404 returned error can't find the container with id e1f19267d61add80cbe914d1877550adc66e48e7d10ffdb96ca87998f267fb46 Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.793241 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.814948 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.831455 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.842841 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.895315 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-rjtjv"] Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.895870 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.896068 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-ftp8l"] Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.896419 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-j9tqd"] Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.901827 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.902379 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.902475 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.903377 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.903897 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-ftp8l" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.905605 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.907982 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.908071 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.908125 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.913964 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.914143 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.914245 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.914374 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.914607 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.915093 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.916834 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.920423 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-systemd-units\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.920451 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-multus-cni-dir\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.920471 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-var-lib-openvswitch\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.920488 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-os-release\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.920503 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-host-var-lib-cni-bin\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.920521 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-run-netns\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.920536 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a0f37a36-f502-425c-a9e5-035bf475742b-ovnkube-script-lib\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.920603 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnghd\" (UniqueName: \"kubernetes.io/projected/a0f37a36-f502-425c-a9e5-035bf475742b-kube-api-access-wnghd\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.920650 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b6a94b12-97b4-4dc5-aeea-7ad7d33df65b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rjtjv\" (UID: \"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\") " pod="openshift-multus/multus-additional-cni-plugins-rjtjv" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.920680 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-system-cni-dir\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.920707 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.920729 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b6a94b12-97b4-4dc5-aeea-7ad7d33df65b-system-cni-dir\") pod \"multus-additional-cni-plugins-rjtjv\" (UID: \"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\") " pod="openshift-multus/multus-additional-cni-plugins-rjtjv" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.920755 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-run-ovn\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.920776 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a0f37a36-f502-425c-a9e5-035bf475742b-env-overrides\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:09 crc 
kubenswrapper[4606]: I0123 16:53:09.920792 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llgnx\" (UniqueName: \"kubernetes.io/projected/2692a882-403f-4842-b756-53ed7a38fa7f-kube-api-access-llgnx\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.920815 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-host-run-netns\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.920834 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-host-var-lib-kubelet\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.920854 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-multus-conf-dir\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.920875 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a0f37a36-f502-425c-a9e5-035bf475742b-ovnkube-config\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.920892 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-host-run-multus-certs\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.920907 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-node-log\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.920924 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-run-ovn-kubernetes\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.920954 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2692a882-403f-4842-b756-53ed7a38fa7f-cni-binary-copy\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:09 
crc kubenswrapper[4606]: I0123 16:53:09.920978 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-slash\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.920992 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-run-systemd\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.921005 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-cni-bin\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.921022 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-host-var-lib-cni-multus\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.921055 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-multus-socket-dir-parent\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.921069 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-kubelet\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.921083 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-cni-netd\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.921097 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b6a94b12-97b4-4dc5-aeea-7ad7d33df65b-os-release\") pod \"multus-additional-cni-plugins-rjtjv\" (UID: \"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\") " pod="openshift-multus/multus-additional-cni-plugins-rjtjv" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.921112 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b6a94b12-97b4-4dc5-aeea-7ad7d33df65b-cni-binary-copy\") pod \"multus-additional-cni-plugins-rjtjv\" (UID: 
\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\") " pod="openshift-multus/multus-additional-cni-plugins-rjtjv" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.921126 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-hostroot\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.921140 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-etc-kubernetes\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.921158 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b6a94b12-97b4-4dc5-aeea-7ad7d33df65b-cnibin\") pod \"multus-additional-cni-plugins-rjtjv\" (UID: \"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\") " pod="openshift-multus/multus-additional-cni-plugins-rjtjv" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.921171 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a0f37a36-f502-425c-a9e5-035bf475742b-ovn-node-metrics-cert\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.921185 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b6a94b12-97b4-4dc5-aeea-7ad7d33df65b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rjtjv\" (UID: \"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\") " pod="openshift-multus/multus-additional-cni-plugins-rjtjv" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.921205 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r67jz\" (UniqueName: \"kubernetes.io/projected/b6a94b12-97b4-4dc5-aeea-7ad7d33df65b-kube-api-access-r67jz\") pod \"multus-additional-cni-plugins-rjtjv\" (UID: \"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\") " pod="openshift-multus/multus-additional-cni-plugins-rjtjv" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.921225 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-host-run-k8s-cni-cncf-io\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.921267 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-cnibin\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.921292 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/2692a882-403f-4842-b756-53ed7a38fa7f-multus-daemon-config\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.921314 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-log-socket\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.921337 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-etc-openvswitch\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.921357 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-run-openvswitch\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:09 crc kubenswrapper[4606]: I0123 16:53:09.941124 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.022488 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-cni-netd\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.022535 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b6a94b12-97b4-4dc5-aeea-7ad7d33df65b-os-release\") pod \"multus-additional-cni-plugins-rjtjv\" (UID: \"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\") " pod="openshift-multus/multus-additional-cni-plugins-rjtjv" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.022560 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b6a94b12-97b4-4dc5-aeea-7ad7d33df65b-cni-binary-copy\") pod \"multus-additional-cni-plugins-rjtjv\" (UID: \"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\") " pod="openshift-multus/multus-additional-cni-plugins-rjtjv" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.022581 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-hostroot\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.022627 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-etc-kubernetes\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.022643 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b6a94b12-97b4-4dc5-aeea-7ad7d33df65b-cnibin\") pod \"multus-additional-cni-plugins-rjtjv\" (UID: \"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\") " pod="openshift-multus/multus-additional-cni-plugins-rjtjv" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.022658 4606 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b6a94b12-97b4-4dc5-aeea-7ad7d33df65b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rjtjv\" (UID: \"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\") " pod="openshift-multus/multus-additional-cni-plugins-rjtjv" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.022678 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r67jz\" (UniqueName: \"kubernetes.io/projected/b6a94b12-97b4-4dc5-aeea-7ad7d33df65b-kube-api-access-r67jz\") pod \"multus-additional-cni-plugins-rjtjv\" (UID: \"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\") " pod="openshift-multus/multus-additional-cni-plugins-rjtjv" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.022697 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a0f37a36-f502-425c-a9e5-035bf475742b-ovn-node-metrics-cert\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.022729 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-cnibin\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.022742 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-host-run-k8s-cni-cncf-io\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.022756 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-log-socket\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.022769 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2692a882-403f-4842-b756-53ed7a38fa7f-multus-daemon-config\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.022784 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-run-openvswitch\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.022799 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-etc-openvswitch\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.022815 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-systemd-units\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.022831 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-multus-cni-dir\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.022847 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-var-lib-openvswitch\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.022862 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-os-release\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.022876 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-host-var-lib-cni-bin\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.022892 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-run-netns\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.022909 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a0f37a36-f502-425c-a9e5-035bf475742b-ovnkube-script-lib\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.022925 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnghd\" (UniqueName: \"kubernetes.io/projected/a0f37a36-f502-425c-a9e5-035bf475742b-kube-api-access-wnghd\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.022939 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b6a94b12-97b4-4dc5-aeea-7ad7d33df65b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rjtjv\" (UID: \"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\") " pod="openshift-multus/multus-additional-cni-plugins-rjtjv" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.022954 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-system-cni-dir\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.022969 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.022984 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b6a94b12-97b4-4dc5-aeea-7ad7d33df65b-system-cni-dir\") pod \"multus-additional-cni-plugins-rjtjv\" (UID: \"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\") " pod="openshift-multus/multus-additional-cni-plugins-rjtjv" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.023001 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-run-ovn\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.023015 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a0f37a36-f502-425c-a9e5-035bf475742b-env-overrides\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.023032 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-host-run-netns\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.023047 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-host-var-lib-kubelet\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.023062 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-multus-conf-dir\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.023076 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llgnx\" (UniqueName: \"kubernetes.io/projected/2692a882-403f-4842-b756-53ed7a38fa7f-kube-api-access-llgnx\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.023093 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a0f37a36-f502-425c-a9e5-035bf475742b-ovnkube-config\") pod \"ovnkube-node-j9tqd\" 
(UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.023107 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-host-run-multus-certs\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.023121 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-node-log\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.023136 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-run-ovn-kubernetes\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.023150 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2692a882-403f-4842-b756-53ed7a38fa7f-cni-binary-copy\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.023163 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-slash\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.023177 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-run-systemd\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.023190 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-cni-bin\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.023224 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-multus-socket-dir-parent\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.023316 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-host-var-lib-cni-multus\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:10 crc 
Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.023331 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-kubelet\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd"
Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.023392 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-kubelet\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd"
Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.023427 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-cni-netd\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd"
Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.023816 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b6a94b12-97b4-4dc5-aeea-7ad7d33df65b-os-release\") pod \"multus-additional-cni-plugins-rjtjv\" (UID: \"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\") " pod="openshift-multus/multus-additional-cni-plugins-rjtjv"
Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.024365 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b6a94b12-97b4-4dc5-aeea-7ad7d33df65b-cni-binary-copy\") pod \"multus-additional-cni-plugins-rjtjv\" (UID: \"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\") " pod="openshift-multus/multus-additional-cni-plugins-rjtjv"
Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.024399 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-hostroot\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l"
Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.024420 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-etc-kubernetes\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l"
Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.024443 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b6a94b12-97b4-4dc5-aeea-7ad7d33df65b-cnibin\") pod \"multus-additional-cni-plugins-rjtjv\" (UID: \"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\") " pod="openshift-multus/multus-additional-cni-plugins-rjtjv"
Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.024802 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-system-cni-dir\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l"
Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.024908 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName:
\"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-node-log\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.024888 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-host-run-multus-certs\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.024953 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-run-ovn-kubernetes\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.025423 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a0f37a36-f502-425c-a9e5-035bf475742b-ovnkube-config\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.025494 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.025513 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2692a882-403f-4842-b756-53ed7a38fa7f-cni-binary-copy\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.025525 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b6a94b12-97b4-4dc5-aeea-7ad7d33df65b-system-cni-dir\") pod \"multus-additional-cni-plugins-rjtjv\" (UID: \"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\") " pod="openshift-multus/multus-additional-cni-plugins-rjtjv" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.025552 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-slash\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.025555 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-run-ovn\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.025578 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-run-systemd\") pod \"ovnkube-node-j9tqd\" (UID: 
\"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.025626 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-cni-bin\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.025662 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-multus-socket-dir-parent\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.025684 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-host-var-lib-cni-multus\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.025776 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-multus-cni-dir\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.025809 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-cnibin\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.025831 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-host-run-k8s-cni-cncf-io\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.025852 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-log-socket\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.025895 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a0f37a36-f502-425c-a9e5-035bf475742b-env-overrides\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.025949 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-host-run-netns\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.025981 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-host-var-lib-kubelet\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.026003 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-multus-conf-dir\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.026245 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-etc-openvswitch\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.026245 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2692a882-403f-4842-b756-53ed7a38fa7f-multus-daemon-config\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.026268 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-systemd-units\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.026293 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-run-openvswitch\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.026300 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-run-netns\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.026328 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-host-var-lib-cni-bin\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.026343 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2692a882-403f-4842-b756-53ed7a38fa7f-os-release\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.026367 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-var-lib-openvswitch\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 
Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.026683 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b6a94b12-97b4-4dc5-aeea-7ad7d33df65b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rjtjv\" (UID: \"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\") " pod="openshift-multus/multus-additional-cni-plugins-rjtjv"
Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.024864 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b6a94b12-97b4-4dc5-aeea-7ad7d33df65b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rjtjv\" (UID: \"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\") " pod="openshift-multus/multus-additional-cni-plugins-rjtjv"
Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.027031 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a0f37a36-f502-425c-a9e5-035bf475742b-ovnkube-script-lib\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd"
Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.041874 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.051081 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a0f37a36-f502-425c-a9e5-035bf475742b-ovn-node-metrics-cert\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.057624 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnghd\" (UniqueName: \"kubernetes.io/projected/a0f37a36-f502-425c-a9e5-035bf475742b-kube-api-access-wnghd\") pod \"ovnkube-node-j9tqd\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.060225 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r67jz\" (UniqueName: \"kubernetes.io/projected/b6a94b12-97b4-4dc5-aeea-7ad7d33df65b-kube-api-access-r67jz\") pod \"multus-additional-cni-plugins-rjtjv\" (UID: \"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\") " pod="openshift-multus/multus-additional-cni-plugins-rjtjv" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.062455 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llgnx\" (UniqueName: \"kubernetes.io/projected/2692a882-403f-4842-b756-53ed7a38fa7f-kube-api-access-llgnx\") pod \"multus-ftp8l\" (UID: \"2692a882-403f-4842-b756-53ed7a38fa7f\") " pod="openshift-multus/multus-ftp8l" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.065219 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.097403 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.136619 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.150684 4606 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.171868 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.213440 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.235466 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.238007 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-ftp8l" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.238008 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.245969 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:10 crc kubenswrapper[4606]: W0123 16:53:10.254448 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6a94b12_97b4_4dc5_aeea_7ad7d33df65b.slice/crio-1bbc9eecd8e7cd04bbc8ee9cb2ae1608181fe1fd742e31f885240c3a3060ea54 WatchSource:0}: Error finding container 1bbc9eecd8e7cd04bbc8ee9cb2ae1608181fe1fd742e31f885240c3a3060ea54: Status 404 returned error can't find the container with id 1bbc9eecd8e7cd04bbc8ee9cb2ae1608181fe1fd742e31f885240c3a3060ea54 Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.267230 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.292743 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.308422 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.331500 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 06:17:22.68618714 +0000 UTC Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.341425 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.359433 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:10Z is after 2025-08-24T17:21:41Z"
Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.378453 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.379347 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.402257 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.419953 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.454577 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.478802 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.492060 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-23T16:53:10Z is after 2025-08-24T17:21:41Z"
Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.499909 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.505622 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:10Z is after 2025-08-24T17:21:41Z"
Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.510946 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzlv2\" (UniqueName: \"kubernetes.io/projected/8a063622-dd68-4217-89ac-f220683a1b73-kube-api-access-tzlv2\") pod \"node-resolver-p4c8f\" (UID: \"8a063622-dd68-4217-89ac-f220683a1b73\") " pod="openshift-dns/node-resolver-p4c8f"
Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.520198 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:10Z is after 2025-08-24T17:21:41Z"
Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.545720 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerStarted","Data":"a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284"}
Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.545770 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerStarted","Data":"17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03"}
Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.545785 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerStarted","Data":"e1f19267d61add80cbe914d1877550adc66e48e7d10ffdb96ca87998f267fb46"}
Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.547564 4606 generic.go:334] "Generic (PLEG): container finished" podID="a0f37a36-f502-425c-a9e5-035bf475742b" containerID="be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6" exitCode=0
Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.547651 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" event={"ID":"a0f37a36-f502-425c-a9e5-035bf475742b","Type":"ContainerDied","Data":"be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6"}
Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.547672 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" event={"ID":"a0f37a36-f502-425c-a9e5-035bf475742b","Type":"ContainerStarted","Data":"5fbd4cd45169e64e1e8be6e728ae32c008d68bfcb32fe8a3df6fb7cacaa71497"}
Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.550461 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ftp8l" event={"ID":"2692a882-403f-4842-b756-53ed7a38fa7f","Type":"ContainerStarted","Data":"7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b"}
Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.550503 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ftp8l" event={"ID":"2692a882-403f-4842-b756-53ed7a38fa7f","Type":"ContainerStarted","Data":"d0cd1229f8fd35616fabeb284a17a4d7de4b5f57ef62b40cc4a45ae1f3ca116a"}
Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.552099 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" event={"ID":"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b","Type":"ContainerStarted","Data":"1bbc9eecd8e7cd04bbc8ee9cb2ae1608181fe1fd742e31f885240c3a3060ea54"}
Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.563762 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.588721 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.614773 4606 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.633479 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.648928 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.662834 4606 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ftp8l\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.691710 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1
ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.720096 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.729581 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.730624 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-p4c8f" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.743092 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b
154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\
\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.762799 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.800470 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.826244 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:10 crc kubenswrapper[4606]: I0123 16:53:10.879854 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:11 crc kubenswrapper[4606]: I0123 16:53:10.970648 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:11 crc kubenswrapper[4606]: I0123 16:53:11.004795 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:11 crc kubenswrapper[4606]: I0123 16:53:11.020794 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:11 crc kubenswrapper[4606]: I0123 16:53:11.049882 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:11 crc kubenswrapper[4606]: I0123 16:53:11.121634 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:11 crc kubenswrapper[4606]: I0123 16:53:11.139809 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:11 crc kubenswrapper[4606]: I0123 16:53:11.187876 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:53:11 crc kubenswrapper[4606]: I0123 16:53:11.188009 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:53:11 crc kubenswrapper[4606]: I0123 16:53:11.188036 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:53:11 crc kubenswrapper[4606]: I0123 16:53:11.188063 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:53:11 crc kubenswrapper[4606]: I0123 16:53:11.188090 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:53:11 crc kubenswrapper[4606]: E0123 16:53:11.188222 4606 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 16:53:11 crc kubenswrapper[4606]: E0123 16:53:11.188243 4606 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 16:53:11 crc kubenswrapper[4606]: E0123 16:53:11.188255 4606 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:53:11 crc kubenswrapper[4606]: E0123 16:53:11.188301 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 16:53:15.188287407 +0000 UTC m=+29.188479813 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:53:11 crc kubenswrapper[4606]: E0123 16:53:11.188687 4606 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 16:53:11 crc kubenswrapper[4606]: E0123 16:53:11.188707 4606 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 16:53:11 crc kubenswrapper[4606]: E0123 16:53:11.188720 4606 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:53:11 crc kubenswrapper[4606]: E0123 16:53:11.188753 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 16:53:15.188740228 +0000 UTC m=+29.188932634 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:53:11 crc kubenswrapper[4606]: E0123 16:53:11.188769 4606 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 16:53:11 crc kubenswrapper[4606]: E0123 16:53:11.188958 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:53:15.188946902 +0000 UTC m=+29.189139308 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:53:11 crc kubenswrapper[4606]: E0123 16:53:11.188984 4606 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 16:53:11 crc kubenswrapper[4606]: E0123 16:53:11.189013 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 16:53:15.189003173 +0000 UTC m=+29.189195579 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 16:53:11 crc kubenswrapper[4606]: E0123 16:53:11.189043 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 16:53:15.189037034 +0000 UTC m=+29.189229440 (durationBeforeRetry 4s). 
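
Every "Failed to update status for pod" record in this section fails on the same root cause: the pod.network-node-identity.openshift.io webhook at 127.0.0.1:9743 serves a certificate whose NotAfter is 2025-08-24T17:21:41Z, while the node clock reads 2026-01-23, months past expiry. The wording is Go's crypto/x509 validity check (CertificateInvalidError), which the API server's webhook client trips over when it dials the webhook; kubelet only relays the patch error. A self-contained sketch of the same check, using a throwaway self-signed certificate so it runs standalone:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "fmt"
        "math/big"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            NotBefore:    time.Now().Add(-2 * 365 * 24 * time.Hour), // long past
            NotAfter:     time.Now().Add(-150 * 24 * time.Hour),     // expired ~5 months ago, like the webhook cert
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        cert, err := x509.ParseCertificate(der)
        if err != nil {
            panic(err)
        }
        now := time.Now()
        if now.After(cert.NotAfter) {
            // crypto/x509 formats this exact condition as
            // "x509: certificate has expired or is not yet valid:
            //  current time <now> is after <NotAfter>".
            fmt.Printf("x509: certificate has expired: current time %s is after %s\n",
                now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
        }
    }

Note that the kubelet-serving certificate itself is fine; the certificate_manager record further down reports expiration 2026-02-24 with a rotation deadline of 2025-12-22 already past, so kubelet will attempt rotation. It is specifically the webhook's serving certificate that is stale.
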
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 16:53:11 crc kubenswrapper[4606]: I0123 16:53:11.194663 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPa
th\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:11 crc kubenswrapper[4606]: I0123 16:53:11.244313 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:11Z 
is after 2025-08-24T17:21:41Z" Jan 23 16:53:11 crc kubenswrapper[4606]: I0123 16:53:11.299097 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:11 crc kubenswrapper[4606]: I0123 16:53:11.320539 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:11 crc kubenswrapper[4606]: I0123 16:53:11.342534 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 04:29:38.057696704 +0000 UTC Jan 23 16:53:11 crc kubenswrapper[4606]: I0123 16:53:11.357447 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:11 crc kubenswrapper[4606]: I0123 16:53:11.389171 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:53:11 crc kubenswrapper[4606]: I0123 16:53:11.389209 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:53:11 crc kubenswrapper[4606]: I0123 16:53:11.389177 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:53:11 crc kubenswrapper[4606]: E0123 16:53:11.389310 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:53:11 crc kubenswrapper[4606]: E0123 16:53:11.389379 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:53:11 crc kubenswrapper[4606]: E0123 16:53:11.389439 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:53:11 crc kubenswrapper[4606]: I0123 16:53:11.573111 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" event={"ID":"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b","Type":"ContainerStarted","Data":"dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab"} Jan 23 16:53:11 crc kubenswrapper[4606]: I0123 16:53:11.577457 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" event={"ID":"a0f37a36-f502-425c-a9e5-035bf475742b","Type":"ContainerStarted","Data":"143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36"} Jan 23 16:53:11 crc kubenswrapper[4606]: I0123 16:53:11.577510 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" event={"ID":"a0f37a36-f502-425c-a9e5-035bf475742b","Type":"ContainerStarted","Data":"ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5"} Jan 23 16:53:11 crc kubenswrapper[4606]: I0123 16:53:11.577524 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" event={"ID":"a0f37a36-f502-425c-a9e5-035bf475742b","Type":"ContainerStarted","Data":"9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe"} Jan 23 16:53:11 crc kubenswrapper[4606]: I0123 16:53:11.577542 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" event={"ID":"a0f37a36-f502-425c-a9e5-035bf475742b","Type":"ContainerStarted","Data":"b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03"} Jan 23 16:53:11 crc kubenswrapper[4606]: I0123 16:53:11.579203 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-p4c8f" event={"ID":"8a063622-dd68-4217-89ac-f220683a1b73","Type":"ContainerStarted","Data":"4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7"} Jan 23 16:53:11 crc kubenswrapper[4606]: I0123 16:53:11.579245 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-p4c8f" event={"ID":"8a063622-dd68-4217-89ac-f220683a1b73","Type":"ContainerStarted","Data":"96352fd3505a78c20711c9e93da19c4d0ce35bc38e9ce0d64e8a37a6d4d53cc5"} Jan 23 16:53:11 crc kubenswrapper[4606]: I0123 16:53:11.590097 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:11 crc kubenswrapper[4606]: I0123 16:53:11.662368 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-23T16:53:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:11 crc kubenswrapper[4606]: I0123 16:53:11.778272 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:11 crc kubenswrapper[4606]: I0123 16:53:11.862790 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:11 crc kubenswrapper[4606]: I0123 16:53:11.968220 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:12 crc kubenswrapper[4606]: I0123 16:53:12.021762 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:12 crc kubenswrapper[4606]: I0123 16:53:12.104403 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:12 crc kubenswrapper[4606]: I0123 16:53:12.163450 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-c
ni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:12 crc kubenswrapper[4606]: I0123 16:53:12.210580 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:12Z 
is after 2025-08-24T17:21:41Z" Jan 23 16:53:12 crc kubenswrapper[4606]: I0123 16:53:12.247619 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:12 crc kubenswrapper[4606]: I0123 16:53:12.270461 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:12 crc kubenswrapper[4606]: I0123 16:53:12.292348 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:12 crc kubenswrapper[4606]: I0123 16:53:12.308494 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:12 crc kubenswrapper[4606]: I0123 16:53:12.321035 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:12 crc kubenswrapper[4606]: I0123 16:53:12.331730 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:12 crc kubenswrapper[4606]: I0123 16:53:12.342687 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 05:25:35.885202458 +0000 UTC Jan 23 16:53:12 crc kubenswrapper[4606]: I0123 16:53:12.348272 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:12 crc kubenswrapper[4606]: I0123 16:53:12.365734 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:12 crc kubenswrapper[4606]: I0123 16:53:12.404253 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:12 crc kubenswrapper[4606]: I0123 16:53:12.423043 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-c
ni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:12 crc kubenswrapper[4606]: I0123 16:53:12.441103 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:12 crc kubenswrapper[4606]: I0123 16:53:12.470513 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:12 crc kubenswrapper[4606]: I0123 16:53:12.490044 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\
" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:12 crc kubenswrapper[4606]: I0123 16:53:12.501997 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-23T16:53:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:12 crc kubenswrapper[4606]: I0123 16:53:12.514050 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:12 crc kubenswrapper[4606]: I0123 16:53:12.585483 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" event={"ID":"a0f37a36-f502-425c-a9e5-035bf475742b","Type":"ContainerStarted","Data":"7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14"} Jan 23 16:53:12 crc kubenswrapper[4606]: I0123 16:53:12.585531 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" event={"ID":"a0f37a36-f502-425c-a9e5-035bf475742b","Type":"ContainerStarted","Data":"0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352"} Jan 23 16:53:12 crc kubenswrapper[4606]: I0123 16:53:12.586705 4606 generic.go:334] "Generic (PLEG): container finished" podID="b6a94b12-97b4-4dc5-aeea-7ad7d33df65b" containerID="dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab" exitCode=0 Jan 23 
16:53:12 crc kubenswrapper[4606]: I0123 16:53:12.586811 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" event={"ID":"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b","Type":"ContainerDied","Data":"dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab"} Jan 23 16:53:12 crc kubenswrapper[4606]: I0123 16:53:12.603839 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-23T16:53:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:12 crc kubenswrapper[4606]: I0123 16:53:12.624812 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:12 crc kubenswrapper[4606]: I0123 16:53:12.638629 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:12 crc kubenswrapper[4606]: I0123 16:53:12.649658 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:12 crc kubenswrapper[4606]: I0123 16:53:12.664229 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:12 crc kubenswrapper[4606]: I0123 16:53:12.675455 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:12 crc kubenswrapper[4606]: I0123 16:53:12.687713 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:12 crc kubenswrapper[4606]: I0123 16:53:12.703888 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:12 crc kubenswrapper[4606]: I0123 16:53:12.715999 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:12 crc kubenswrapper[4606]: I0123 16:53:12.739196 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:12 crc kubenswrapper[4606]: I0123 16:53:12.753194 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-c
ni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:12 crc kubenswrapper[4606]: I0123 16:53:12.767999 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:12Z 
is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.044676 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.051744 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.055802 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.062264 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.075748 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.088530 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.105368 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.116179 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.127179 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-c
ni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.148732 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z 
is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.160085 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.168806 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.185434 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.203465 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.231959 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 
2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.246651 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.262099 4606 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.262238 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.263768 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.263798 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.263810 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.263908 4606 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.270355 4606 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.270669 4606 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.273120 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.279651 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.279689 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.279700 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.279716 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.279726 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:13Z","lastTransitionTime":"2026-01-23T16:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.285716 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.298382 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: E0123 16:53:13.299625 4606 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a4ca771-bbec-40fb-95a6-5cc937c9c7f8\\\",\\\"systemUUID\\\":\\\"843df5bf-5299-4274-8c79-9400b122a934\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.309024 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.309055 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.309064 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.309077 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.309086 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:13Z","lastTransitionTime":"2026-01-23T16:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.311841 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: E0123 16:53:13.323019 4606 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a4ca771-bbec-40fb-95a6-5cc937c9c7f8\\\",\\\"systemUUID\\\":\\\"843df5bf-5299-4274-8c79-9400b122a934\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.323930 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06af028d-1458-4d49-8cd4-dccbc84a469c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdd354c5bc11778b8c407aa7c9f996ee4c431839d6823dcdad92b8855d2a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c64d435f1b147fd7d914909988065557c2bd3309d0565391fe788a67f2d0de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-c
ontroller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7894a11c4c9859559472fbae0ec0fb74afbc11e3580e50c3d5c1465f7bb3177b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.325952 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.325968 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.325976 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.325988 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.325997 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:13Z","lastTransitionTime":"2026-01-23T16:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.343775 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 01:52:36.799084207 +0000 UTC Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.355201 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: E0123 16:53:13.355551 4606 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a4ca771-bbec-40fb-95a6-5cc937c9c7f8\\\",\\\"systemUUID\\\":\\\"843df5bf-5299-4274-8c79-9400b122a934\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.359139 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.359171 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.359180 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.359192 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.359201 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:13Z","lastTransitionTime":"2026-01-23T16:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.365212 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.365237 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.365257 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:53:13 crc kubenswrapper[4606]: E0123 16:53:13.365325 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:53:13 crc kubenswrapper[4606]: E0123 16:53:13.365392 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:53:13 crc kubenswrapper[4606]: E0123 16:53:13.365505 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.367695 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: E0123 16:53:13.372828 4606 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[...],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a4ca771-bbec-40fb-95a6-5cc937c9c7f8\\\",\\\"systemUUID\\\":\\\"843df5bf-5299-4274-8c79-9400b122a934\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.376852 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.376932 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.376944 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.376966 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.376977 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:13Z","lastTransitionTime":"2026-01-23T16:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:13 crc kubenswrapper[4606]: E0123 16:53:13.393065 4606 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[...],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a4ca771-bbec-40fb-95a6-5cc937c9c7f8\\\",\\\"systemUUID\\\":\\\"843df5bf-5299-4274-8c79-9400b122a934\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: E0123 16:53:13.393197 4606 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.395026 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.395058 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.395086 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.395103 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.395115 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:13Z","lastTransitionTime":"2026-01-23T16:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.396857 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z 
is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.408366 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.420253 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.432666 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.497746 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.497783 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.497793 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.497808 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.497818 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:13Z","lastTransitionTime":"2026-01-23T16:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.591384 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" event={"ID":"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b","Type":"ContainerStarted","Data":"9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0"} Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.600250 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.600301 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.600310 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.600324 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.600332 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:13Z","lastTransitionTime":"2026-01-23T16:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:13 crc kubenswrapper[4606]: E0123 16:53:13.601265 4606 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.606714 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.622958 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPat
h\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.635417 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"h
ost-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.649159 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.678444 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06af028d-1458-4d49-8cd4-dccbc84a469c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdd354c5bc11778b8c407aa7c9f996ee4c431839d6823dcdad92b8855d2a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c64d435f1b147fd7d914909988065557c2bd3309d0565391fe788a67f2d0de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7894a11c4c9859559472fbae0ec0fb74afbc11e3580e50c3d5c1465f7bb3177b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.688836 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.700243 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.716465 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.716518 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.716530 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.716546 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.716558 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:13Z","lastTransitionTime":"2026-01-23T16:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.723874 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.818653 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.818706 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.818722 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.818746 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.818764 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:13Z","lastTransitionTime":"2026-01-23T16:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.821868 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.848154 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.920468 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.920505 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.920528 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.920542 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.920552 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:13Z","lastTransitionTime":"2026-01-23T16:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.934898 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:13 crc kubenswrapper[4606]: I0123 16:53:13.975114 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.000144 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.022631 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.022660 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.022670 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.022686 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.022695 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:14Z","lastTransitionTime":"2026-01-23T16:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.124817 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.124869 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.124883 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.124899 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.124910 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:14Z","lastTransitionTime":"2026-01-23T16:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.216392 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-wc86k"] Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.216800 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-wc86k" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.219108 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.220571 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.220645 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.220753 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.228041 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.228082 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.228093 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.228108 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.228118 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:14Z","lastTransitionTime":"2026-01-23T16:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.232969 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:14Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.244938 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:14Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.255556 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:14Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.267000 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aa611a0a-e12f-403c-8cd6-666972207315-host\") pod \"node-ca-wc86k\" (UID: \"aa611a0a-e12f-403c-8cd6-666972207315\") " pod="openshift-image-registry/node-ca-wc86k" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.267091 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xlsp\" (UniqueName: \"kubernetes.io/projected/aa611a0a-e12f-403c-8cd6-666972207315-kube-api-access-9xlsp\") pod \"node-ca-wc86k\" (UID: \"aa611a0a-e12f-403c-8cd6-666972207315\") " pod="openshift-image-registry/node-ca-wc86k" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.267115 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/aa611a0a-e12f-403c-8cd6-666972207315-serviceca\") pod \"node-ca-wc86k\" (UID: 
\"aa611a0a-e12f-403c-8cd6-666972207315\") " pod="openshift-image-registry/node-ca-wc86k" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.279437 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:14Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.294299 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\
\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\
\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:14Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.305014 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wc86k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa611a0a-e12f-403c-8cd6-666972207315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xlsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wc86k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:14Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.329942 4606 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.329966 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.329974 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.329987 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.329995 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:14Z","lastTransitionTime":"2026-01-23T16:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.330341 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:14Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.344277 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 13:45:01.239303243 +0000 UTC Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.351178 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17386f64cac9cbbbed0c8f887e8
1839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:14Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.367949 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aa611a0a-e12f-403c-8cd6-666972207315-host\") pod \"node-ca-wc86k\" (UID: \"aa611a0a-e12f-403c-8cd6-666972207315\") " pod="openshift-image-registry/node-ca-wc86k" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.368482 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xlsp\" (UniqueName: \"kubernetes.io/projected/aa611a0a-e12f-403c-8cd6-666972207315-kube-api-access-9xlsp\") pod \"node-ca-wc86k\" (UID: \"aa611a0a-e12f-403c-8cd6-666972207315\") " pod="openshift-image-registry/node-ca-wc86k" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.368632 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/aa611a0a-e12f-403c-8cd6-666972207315-serviceca\") pod \"node-ca-wc86k\" (UID: \"aa611a0a-e12f-403c-8cd6-666972207315\") " pod="openshift-image-registry/node-ca-wc86k" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.368142 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aa611a0a-e12f-403c-8cd6-666972207315-host\") pod \"node-ca-wc86k\" (UID: \"aa611a0a-e12f-403c-8cd6-666972207315\") " pod="openshift-image-registry/node-ca-wc86k" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.370023 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/aa611a0a-e12f-403c-8cd6-666972207315-serviceca\") pod \"node-ca-wc86k\" (UID: \"aa611a0a-e12f-403c-8cd6-666972207315\") " pod="openshift-image-registry/node-ca-wc86k" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.374651 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:14Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.391578 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:14Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.393966 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xlsp\" (UniqueName: \"kubernetes.io/projected/aa611a0a-e12f-403c-8cd6-666972207315-kube-api-access-9xlsp\") pod \"node-ca-wc86k\" (UID: \"aa611a0a-e12f-403c-8cd6-666972207315\") " pod="openshift-image-registry/node-ca-wc86k" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.406835 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06af028d-1458-4d49-8cd4-dccbc84a469c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdd354c5bc11778b8c407aa7c9f996ee4c431839d6823dcdad92b8855d2a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c64d435f1b147fd7d914909988065557c2bd3309d0565391fe788a67f2d0de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7894a11c4c9859559472fbae0ec0fb74afbc11e3580e50c3d5c1465f7bb3177b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:14Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.417858 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:14Z is after 2025-08-24T17:21:41Z"
Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.432570 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.432643 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.432653 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.432668 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.432679 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:14Z","lastTransitionTime":"2026-01-23T16:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.434166 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:14Z is after 2025-08-24T17:21:41Z"
Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.450120 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:14Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.530683 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-wc86k" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.535292 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.535324 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.535357 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.535370 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.535379 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:14Z","lastTransitionTime":"2026-01-23T16:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.609883 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" event={"ID":"a0f37a36-f502-425c-a9e5-035bf475742b","Type":"ContainerStarted","Data":"9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5"} Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.612943 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"846abd3645d34fa7f1d6227fdd26fd48a7df9a6cbc57b85790b9e7e444041b64"} Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.614778 4606 generic.go:334] "Generic (PLEG): container finished" podID="b6a94b12-97b4-4dc5-aeea-7ad7d33df65b" containerID="9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0" exitCode=0 Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.614865 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" event={"ID":"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b","Type":"ContainerDied","Data":"9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0"} Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.618154 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-wc86k" event={"ID":"aa611a0a-e12f-403c-8cd6-666972207315","Type":"ContainerStarted","Data":"2f7306f21a3e44eb33da22fd1c752c578928cbc23ee0c3d037173dae317ce095"} Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.626003 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:14Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.639395 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.639438 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.639459 4606 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.639475 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.639486 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:14Z","lastTransitionTime":"2026-01-23T16:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.642603 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:14Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.652813 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:14Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.663782 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:14Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.679446 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\
\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\
\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:14Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.689079 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wc86k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa611a0a-e12f-403c-8cd6-666972207315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xlsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wc86k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:14Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.701158 4606 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:14Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.714021 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846abd3645d34fa7f1d6227fdd26fd48a7df9a6cbc57b85790b9e7e444041b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:14Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.728857 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:14Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.742881 4606 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:14Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.743140 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.743179 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.743189 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.743205 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.743215 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:14Z","lastTransitionTime":"2026-01-23T16:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.757498 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:14Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.772961 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06af028d-1458-4d49-8cd4-dccbc84a469c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdd354c5bc11778b8c407aa7c9f996ee4c431839d6823dcdad92b8855d2a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c64d435f1b147fd7d914909988065557c2bd3309d0565391fe788a67f2d0de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7894a11c4c9859559472fbae0ec0fb74afbc11e3580e50c3d5c1465f7bb3177b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:14Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.791899 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:14Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.831337 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:14Z 
is after 2025-08-24T17:21:41Z" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.845576 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.845650 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.845660 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.845675 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.845685 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:14Z","lastTransitionTime":"2026-01-23T16:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.865034 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:14Z is after 2025-08-24T17:21:41Z" Jan 23 
16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.903440 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-
api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:14Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.940132 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wc86k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa611a0a-e12f-403c-8cd6-666972207315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xlsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wc86k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:14Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.948434 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.948474 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.948485 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.948503 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.948514 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:14Z","lastTransitionTime":"2026-01-23T16:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:14 crc kubenswrapper[4606]: I0123 16:53:14.980446 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:14Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.022642 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:15Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.050495 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.050548 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.050565 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.050627 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.050658 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:15Z","lastTransitionTime":"2026-01-23T16:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.064194 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:15Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.102367 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06af028d-1458-4d49-8cd4-dccbc84a469c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdd354c5bc11778b8c407aa7c9f996ee4c431839d6823dcdad92b8855d2a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c64d435f1b147fd7d914909988065557c2bd3309d0565391fe788a67f2d0de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7894a11c4c9859559472fbae0ec0fb74afbc11e3580e50c3d5c1465f7bb3177b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:15Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.143624 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:15Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.153403 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.153459 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.153471 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.153494 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.153506 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:15Z","lastTransitionTime":"2026-01-23T16:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.185463 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:15Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.223595 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846abd3645d34fa7f1d6227fdd26fd48a7df9a6cbc57b85790b9e7e444041b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:15Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.255798 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.255858 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.255870 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.255890 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.255902 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:15Z","lastTransitionTime":"2026-01-23T16:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.277367 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbaced
ff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:15Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.279024 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.279109 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.279139 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.279165 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.279197 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: 
\"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:53:15 crc kubenswrapper[4606]: E0123 16:53:15.279224 4606 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 16:53:15 crc kubenswrapper[4606]: E0123 16:53:15.279290 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 16:53:23.279271702 +0000 UTC m=+37.279464178 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 16:53:15 crc kubenswrapper[4606]: E0123 16:53:15.279307 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:53:23.279297963 +0000 UTC m=+37.279490369 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:53:15 crc kubenswrapper[4606]: E0123 16:53:15.279306 4606 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 16:53:15 crc kubenswrapper[4606]: E0123 16:53:15.279325 4606 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 16:53:15 crc kubenswrapper[4606]: E0123 16:53:15.279351 4606 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 16:53:15 crc kubenswrapper[4606]: E0123 16:53:15.279364 4606 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:53:15 crc kubenswrapper[4606]: E0123 16:53:15.279375 4606 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 16:53:15 crc kubenswrapper[4606]: E0123 16:53:15.279404 4606 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 16:53:15 crc kubenswrapper[4606]: E0123 16:53:15.279423 4606 projected.go:194] Error preparing 
data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:53:15 crc kubenswrapper[4606]: E0123 16:53:15.279390 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 16:53:23.279370154 +0000 UTC m=+37.279562610 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 16:53:15 crc kubenswrapper[4606]: E0123 16:53:15.279494 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 16:53:23.279475747 +0000 UTC m=+37.279668193 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:53:15 crc kubenswrapper[4606]: E0123 16:53:15.279518 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 16:53:23.279504197 +0000 UTC m=+37.279696633 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.307726 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:15Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.344213 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:15Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.344371 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 19:56:06.488498859 +0000 UTC Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.358152 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.358500 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.358614 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.358699 
4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.358769 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:15Z","lastTransitionTime":"2026-01-23T16:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.365692 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.365758 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 16:53:15 crc kubenswrapper[4606]: E0123 16:53:15.366220 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.365755 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 16:53:15 crc kubenswrapper[4606]: E0123 16:53:15.366544 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 16:53:15 crc kubenswrapper[4606]: E0123 16:53:15.366422 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.384022 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:15Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.461652 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.461692 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.461702 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.461715 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.461725 4606 setters.go:603] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:15Z","lastTransitionTime":"2026-01-23T16:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.563920 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.564196 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.564283 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.564369 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.564460 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:15Z","lastTransitionTime":"2026-01-23T16:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.624012 4606 generic.go:334] "Generic (PLEG): container finished" podID="b6a94b12-97b4-4dc5-aeea-7ad7d33df65b" containerID="51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402" exitCode=0 Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.624106 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" event={"ID":"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b","Type":"ContainerDied","Data":"51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402"} Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.626382 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-wc86k" event={"ID":"aa611a0a-e12f-403c-8cd6-666972207315","Type":"ContainerStarted","Data":"ee2cf4579ce544ab371e72864ef70d710c1323fe6ec7707b79d6aec19ee21eaa"} Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.639424 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:15Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.649788 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:15Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.660032 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:15Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.668615 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.668652 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.668664 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.668681 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.668693 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:15Z","lastTransitionTime":"2026-01-23T16:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.676351 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:15Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.691927 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:15Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.702990 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wc86k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa611a0a-e12f-403c-8cd6-666972207315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xlsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wc86k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:15Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.718744 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:15Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.731346 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06af028d-1458-4d49-8cd4-dccbc84a469c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdd354c5bc11778b8c407aa7c9f996ee4c431839d6823dcdad92b8855d2a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c64d435f1b147fd7d914909988065557c2bd3309d0565391fe788a67f2d0de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7894a11c4c9859559472fbae0ec0fb74afbc11e3580e50c3d5c1465f7bb3177b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:15Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.744806 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:15Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.771672 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.771708 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.771719 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.771734 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.771744 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:15Z","lastTransitionTime":"2026-01-23T16:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.784961 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:15Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.822631 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846abd3645d34fa7f1d6227fdd26fd48a7df9a6cbc57b85790b9e7e444041b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:15Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.862121 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:15Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.874063 4606 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.874100 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.874114 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.874129 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.874139 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:15Z","lastTransitionTime":"2026-01-23T16:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.905366 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:15Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.951759 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:15Z 
is after 2025-08-24T17:21:41Z" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.977035 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.977080 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.977094 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.977111 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.977125 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:15Z","lastTransitionTime":"2026-01-23T16:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:15 crc kubenswrapper[4606]: I0123 16:53:15.991737 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:15Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.054194 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.063949 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.079874 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.079904 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.079913 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.079927 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.079936 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:16Z","lastTransitionTime":"2026-01-23T16:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.104437 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.143067 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"nam
e\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"
cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.181131 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wc86k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa611a0a-e12f-403c-8cd6-666972207315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2cf4579ce544ab371e72864ef70d710c1323fe6ec7707b79d6aec19ee21eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xlsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wc86k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.182274 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.182303 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.182313 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.182327 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.182339 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:16Z","lastTransitionTime":"2026-01-23T16:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.212074 4606 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.262769 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.278386 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.284372 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.284409 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.284421 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.284438 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.284449 4606 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:16Z","lastTransitionTime":"2026-01-23T16:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.305902 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06af028d-1458-4d49-8cd4-dccbc84a469c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdd354c5bc11778b8c407aa7c9f996ee4c431839d6823dcdad92b8855d2a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c64d435f1b147fd7d914909988065557c2bd3309d0565391fe788a67f2d0de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7894a11c4c9859559472fbae0ec0fb74afbc11e3580e50c3d5c1465f7bb3177b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.346025 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 08:49:15.941054301 +0000 UTC Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.346407 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.386922 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.386951 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.386979 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.386998 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.387009 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:16Z","lastTransitionTime":"2026-01-23T16:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.386977 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.432322 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846abd3645d34fa7f1d6227fdd26fd48a7df9a6cbc57b85790b9e7e444041b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.464670 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.489067 4606 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.489300 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.489411 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.489524 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.489803 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:16Z","lastTransitionTime":"2026-01-23T16:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.513426 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:16Z 
is after 2025-08-24T17:21:41Z" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.548945 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:16 crc kubenswrapper[4606]: E0123 16:53:16.556497 4606 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6a94b12_97b4_4dc5_aeea_7ad7d33df65b.slice/crio-conmon-46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b.scope\": RecentStats: unable to find data in memory cache]" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.584257 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wc86k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa611a0a-e12f-403c-8cd6-666972207315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2cf4579ce544ab371e72864ef70d710c1323fe6ec7707b79d6aec19ee21eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xlsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wc86k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.598954 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.598988 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.599000 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.599018 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.599030 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:16Z","lastTransitionTime":"2026-01-23T16:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.626685 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.630827 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" event={"ID":"a0f37a36-f502-425c-a9e5-035bf475742b","Type":"ContainerStarted","Data":"f8543756de22726f432a356e51ef75dfddf33eaa836c6dbcd44feaccbcbf368a"} Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.631545 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.631639 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.631665 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.634638 4606 generic.go:334] "Generic (PLEG): container finished" podID="b6a94b12-97b4-4dc5-aeea-7ad7d33df65b" 
containerID="46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b" exitCode=0 Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.636193 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" event={"ID":"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b","Type":"ContainerDied","Data":"46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b"} Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.675727 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.689552 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.691936 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.701068 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.701110 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.701122 4606 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.701137 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.701444 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:16Z","lastTransitionTime":"2026-01-23T16:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.707018 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06af028d-1458-4d49-8cd4-dccbc84a469c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdd354c5bc11778b8c407aa7c9f996ee4c431839d6823dcdad92b8855d2a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c64d435f1b147fd7d914909988065557c2bd3309d0565391fe788a67f2d0de\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7894a11c4c9859559472fbae0ec0fb74afbc11e3580e50c3d5c1465f7bb3177b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.744299 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.787542 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.804326 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.804357 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.804368 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.804383 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.804415 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:16Z","lastTransitionTime":"2026-01-23T16:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.822708 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846abd3645d34fa7f1d6227fdd26fd48a7df9a6cbc57b85790b9e7e444041b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.870540 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.907286 4606 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.907315 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.907326 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.907341 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.907351 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:16Z","lastTransitionTime":"2026-01-23T16:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.910853 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.956384 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:16Z 
is after 2025-08-24T17:21:41Z" Jan 23 16:53:16 crc kubenswrapper[4606]: I0123 16:53:16.991847 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.009705 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.009731 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.009739 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.009766 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.009775 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:17Z","lastTransitionTime":"2026-01-23T16:53:17Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.026660 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:17Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.063705 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:17Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.103122 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:17Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.111501 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.111547 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.111577 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.111610 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.111630 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:17Z","lastTransitionTime":"2026-01-23T16:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.143636 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:17Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.186567 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06af028d-1458-4d49-8cd4-dccbc84a469c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdd354c5bc11778b8c407aa7c9f996ee4c431839d6823dcdad92b8855d2a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c64d435f1b147fd7d914909988065557c2bd3309d0565391fe788a67f2d0de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7894a11c4c9859559472fbae0ec0fb74afbc11e3580e50c3d5c1465f7bb3177b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:17Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.214176 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.214212 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.214231 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.214251 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.214266 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:17Z","lastTransitionTime":"2026-01-23T16:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.228431 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:17Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.265529 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:17Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.305256 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846abd3645d34fa7f1d6227fdd26fd48a7df9a6cbc57b85790b9e7e444041b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:17Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.316515 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.316557 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.316572 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.316614 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.316632 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:17Z","lastTransitionTime":"2026-01-23T16:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.344048 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:17Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.347115 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 19:45:43.206936 +0000 UTC Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.365253 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.365304 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.365273 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:53:17 crc kubenswrapper[4606]: E0123 16:53:17.365406 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:53:17 crc kubenswrapper[4606]: E0123 16:53:17.365522 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:53:17 crc kubenswrapper[4606]: E0123 16:53:17.365678 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.408986 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8543756de22726f432a356e51ef75dfddf33eaa
836c6dbcd44feaccbcbf368a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:17Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.419012 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.419034 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.419042 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.419054 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.419062 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:17Z","lastTransitionTime":"2026-01-23T16:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.425260 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:17Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.463799 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:17Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.501492 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:17Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.520945 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.521173 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.521245 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.521309 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.521365 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:17Z","lastTransitionTime":"2026-01-23T16:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.545482 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:17Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.587107 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/ho
st/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:17Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.621724 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wc86k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa611a0a-e12f-403c-8cd6-666972207315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2cf4579ce544ab371e72864ef70d710c1323fe6ec7707b79d6aec19ee21eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-9xlsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wc86k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:17Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.630275 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.630302 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.630310 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.630323 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.630331 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:17Z","lastTransitionTime":"2026-01-23T16:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.644024 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" event={"ID":"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b","Type":"ContainerStarted","Data":"a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458"} Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.665692 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha2
56:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:17Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.704937 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06af028d-1458-4d49-8cd4-dccbc84a469c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdd354c5bc11778b8c407aa7c9f996ee4c431839d6823dcdad92b8855d2a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c64d435f1b147fd7d914909988065557c2bd3309d0565391fe788a67f2d0de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7894a11c4c9859559472fbae0ec0fb74afbc11e3580e50c3d5c1465f7bb3177b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:17Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.732193 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.732295 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.732313 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.732371 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.732399 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:17Z","lastTransitionTime":"2026-01-23T16:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.747050 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:17Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.786697 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:17Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.824639 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846abd3645d34fa7f1d6227fdd26fd48a7df9a6cbc57b85790b9e7e444041b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:17Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.837342 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.837377 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.837392 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.837410 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.837422 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:17Z","lastTransitionTime":"2026-01-23T16:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.869261 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:17Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.903741 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"
name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:17Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.939420 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.939454 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.939464 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.939477 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.939503 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:17Z","lastTransitionTime":"2026-01-23T16:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.947934 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8543756de22726f432a356e51ef75dfddf33eaa836c6dbcd44feaccbcbf368a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:17Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:17 crc kubenswrapper[4606]: I0123 16:53:17.983225 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:17Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.021774 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:18Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.041837 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.042050 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.042171 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.042257 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.042328 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:18Z","lastTransitionTime":"2026-01-23T16:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.063541 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:18Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.111955 4606 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10
d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name
\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:18Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.140882 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wc86k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa611a0a-e12f-403c-8cd6-666972207315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2cf4579ce544ab371e72864ef70d710c1323fe6ec7707b79d6aec19ee21eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xlsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wc86k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:18Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.144519 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.144694 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.144866 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.145008 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.145147 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:18Z","lastTransitionTime":"2026-01-23T16:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.184198 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:18Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.252828 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.253016 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.253099 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.253167 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.253221 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:18Z","lastTransitionTime":"2026-01-23T16:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.348002 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 12:37:35.271159481 +0000 UTC Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.356419 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.356481 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.356499 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.356526 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.356546 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:18Z","lastTransitionTime":"2026-01-23T16:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.459314 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.459362 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.459379 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.459401 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.459419 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:18Z","lastTransitionTime":"2026-01-23T16:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.562385 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.562639 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.562713 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.562782 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.562877 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:18Z","lastTransitionTime":"2026-01-23T16:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.665415 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.666080 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.666185 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.666269 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.666343 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:18Z","lastTransitionTime":"2026-01-23T16:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.768792 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.769111 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.769219 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.769323 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.769417 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:18Z","lastTransitionTime":"2026-01-23T16:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.872040 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.872303 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.872378 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.872446 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.872514 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:18Z","lastTransitionTime":"2026-01-23T16:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.975260 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.975285 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.975296 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.975311 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:18 crc kubenswrapper[4606]: I0123 16:53:18.975323 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:18Z","lastTransitionTime":"2026-01-23T16:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.077699 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.077752 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.077761 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.077774 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.077784 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:19Z","lastTransitionTime":"2026-01-23T16:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.180093 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.180123 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.180131 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.180143 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.180153 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:19Z","lastTransitionTime":"2026-01-23T16:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.283016 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.283065 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.283075 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.283089 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.283098 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:19Z","lastTransitionTime":"2026-01-23T16:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.349257 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 01:31:31.674389548 +0000 UTC Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.364716 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.364807 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.365225 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:53:19 crc kubenswrapper[4606]: E0123 16:53:19.365710 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:53:19 crc kubenswrapper[4606]: E0123 16:53:19.366233 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:53:19 crc kubenswrapper[4606]: E0123 16:53:19.366334 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.386038 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.386101 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.386111 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.386134 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.386154 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:19Z","lastTransitionTime":"2026-01-23T16:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.490236 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.490263 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.490271 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.490284 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.490293 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:19Z","lastTransitionTime":"2026-01-23T16:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.592855 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.592895 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.592904 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.592921 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.592932 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:19Z","lastTransitionTime":"2026-01-23T16:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.694838 4606 generic.go:334] "Generic (PLEG): container finished" podID="b6a94b12-97b4-4dc5-aeea-7ad7d33df65b" containerID="a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458" exitCode=0 Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.694925 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.694961 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.694938 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" event={"ID":"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b","Type":"ContainerDied","Data":"a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458"} Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.694971 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.695434 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.695508 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:19Z","lastTransitionTime":"2026-01-23T16:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.730480 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.749004 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.776878 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06af028d-1458-4d49-8cd4-dccbc84a469c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdd354c5bc11778b8c407aa7c9f996ee4c431839d6823dcdad92b8855d2a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c64d435f1b147fd7d914909988065557c2bd3309d0565391fe788a67f2d0de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7894a11c4c9859559472fbae0ec0fb74afbc11e3580e50c3d5c1465f7bb3177b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.798339 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.798375 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.798386 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.798407 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.798418 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:19Z","lastTransitionTime":"2026-01-23T16:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.804459 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.828133 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.861624 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846abd3645d34fa7f1d6227fdd26fd48a7df9a6cbc57b85790b9e7e444041b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.878677 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.897130 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics
-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8543756de22726f432a356e51ef75dfddf33eaa836c6dbcd44feaccbcbf368a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"na
me\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.902281 4606 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.902553 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.902569 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.902602 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.902612 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:19Z","lastTransitionTime":"2026-01-23T16:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.914523 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\
\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.933477 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.944127 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.959274 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:19 crc kubenswrapper[4606]: I0123 16:53:19.980688 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:19Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.006052 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.006084 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.006094 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.006144 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.006162 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:20Z","lastTransitionTime":"2026-01-23T16:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.099891 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wc86k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa611a0a-e12f-403c-8cd6-666972207315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2cf4579ce544ab371e72864ef70d710c1323fe6ec7707b79d6aec19ee21eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xlsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wc86k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:20Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.108575 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.108629 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.108638 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.108652 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.108661 4606 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:20Z","lastTransitionTime":"2026-01-23T16:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.213301 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.213379 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.213395 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.213421 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.213441 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:20Z","lastTransitionTime":"2026-01-23T16:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.316387 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.316453 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.316470 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.316492 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.316506 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:20Z","lastTransitionTime":"2026-01-23T16:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.349600 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 22:48:23.894848422 +0000 UTC Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.419066 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.419105 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.419112 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.419125 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.419134 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:20Z","lastTransitionTime":"2026-01-23T16:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.521230 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.521263 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.521270 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.521283 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.521295 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:20Z","lastTransitionTime":"2026-01-23T16:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.624110 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.624164 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.624175 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.624196 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.624219 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:20Z","lastTransitionTime":"2026-01-23T16:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.704066 4606 generic.go:334] "Generic (PLEG): container finished" podID="b6a94b12-97b4-4dc5-aeea-7ad7d33df65b" containerID="134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb" exitCode=0 Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.704150 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" event={"ID":"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b","Type":"ContainerDied","Data":"134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb"} Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.730088 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:20Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.734099 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.734139 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.734149 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.734166 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.734180 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:20Z","lastTransitionTime":"2026-01-23T16:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.751582 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:20Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.767170 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846abd3645d34fa7f1d6227fdd26fd48a7df9a6cbc57b85790b9e7e444041b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:20Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.778958 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:20Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.798522 4606 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:20Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.814201 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc27
6e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:20Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.829478 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06af028d-1458-4d49-8cd4-dccbc84a469c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdd354c5bc11778b8c407aa7c9f996ee4c431839d6823dcdad92b8855d2a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c64d435f1b147fd7d914909988065557c2bd3309d0565391fe788a67f2d0de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7894a11c4c9859559472fbae0ec0fb74afbc11e3580e50c3d5c1465f7bb3177b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:20Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.836880 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.836916 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.836925 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.836940 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.836950 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:20Z","lastTransitionTime":"2026-01-23T16:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.847330 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8543756de22726f432a356e51ef75dfddf33eaa836c6dbcd44feaccbcbf368a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:20Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.859700 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:20Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.872777 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:20Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.885224 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:20Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.898397 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:20Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.914728 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:20Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.932020 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wc86k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa611a0a-e12f-403c-8cd6-666972207315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2cf4579ce544ab371e72864ef70d710c1323fe6ec7707b79d6aec19ee21eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xlsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wc86k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:20Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.939283 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.939317 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.939326 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.939345 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:20 crc kubenswrapper[4606]: I0123 16:53:20.939353 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:20Z","lastTransitionTime":"2026-01-23T16:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.042340 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.042391 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.042403 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.042425 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.042438 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:21Z","lastTransitionTime":"2026-01-23T16:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.145029 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.145074 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.145084 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.145105 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.145116 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:21Z","lastTransitionTime":"2026-01-23T16:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.249539 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.250293 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.250328 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.250348 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.250359 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:21Z","lastTransitionTime":"2026-01-23T16:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.349788 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 15:37:27.520832841 +0000 UTC Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.353052 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.353106 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.353119 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.353140 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.353150 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:21Z","lastTransitionTime":"2026-01-23T16:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.365472 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.365578 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.365474 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:53:21 crc kubenswrapper[4606]: E0123 16:53:21.365656 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:53:21 crc kubenswrapper[4606]: E0123 16:53:21.365737 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:53:21 crc kubenswrapper[4606]: E0123 16:53:21.365893 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.456473 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.456502 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.456512 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.456527 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.456540 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:21Z","lastTransitionTime":"2026-01-23T16:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.559779 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.559834 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.559848 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.559868 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.559880 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:21Z","lastTransitionTime":"2026-01-23T16:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.663772 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.663842 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.663856 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.663884 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.663901 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:21Z","lastTransitionTime":"2026-01-23T16:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.715316 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" event={"ID":"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b","Type":"ContainerStarted","Data":"01472aab8c10837cc0e6f6952de409f201741170c85dd371b3148908a2b2dcbe"} Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.719179 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j9tqd_a0f37a36-f502-425c-a9e5-035bf475742b/ovnkube-controller/0.log" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.724132 4606 generic.go:334] "Generic (PLEG): container finished" podID="a0f37a36-f502-425c-a9e5-035bf475742b" containerID="f8543756de22726f432a356e51ef75dfddf33eaa836c6dbcd44feaccbcbf368a" exitCode=1 Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.724175 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" event={"ID":"a0f37a36-f502-425c-a9e5-035bf475742b","Type":"ContainerDied","Data":"f8543756de22726f432a356e51ef75dfddf33eaa836c6dbcd44feaccbcbf368a"} Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.725492 4606 scope.go:117] "RemoveContainer" containerID="f8543756de22726f432a356e51ef75dfddf33eaa836c6dbcd44feaccbcbf368a" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.735112 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/mul
tus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:21Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.751348 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:21Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.764445 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06af028d-1458-4d49-8cd4-dccbc84a469c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdd354c5bc11778b8c407aa7c9f996ee4c431839d6823dcdad92b8855d2a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c64d435f1b147fd7d914909988065557c2bd3309d0565391fe788a67f2d0de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7894a11c4c9859559472fbae0ec0fb74afbc11e3580e50c3d5c1465f7bb3177b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:21Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.766643 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.766697 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.766709 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.766727 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.766746 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:21Z","lastTransitionTime":"2026-01-23T16:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.783095 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:21Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.800393 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:21Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.814897 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846abd3645d34fa7f1d6227fdd26fd48a7df9a6cbc57b85790b9e7e444041b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:21Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.834096 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:21Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.856231 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics
-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8543756de22726f432a356e51ef75dfddf33eaa836c6dbcd44feaccbcbf368a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"na
me\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:21Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.868798 4606 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:21Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.869643 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.869687 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:21 crc kubenswrapper[4606]: 
I0123 16:53:21.869699 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.869716 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.869728 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:21Z","lastTransitionTime":"2026-01-23T16:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.886165 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:21Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.898651 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:21Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.912358 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:21Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.930122 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01472aab8c10837cc0e6f6952de409f201741170c85dd371b3148908a2b2dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:21Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.944198 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wc86k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa611a0a-e12f-403c-8cd6-666972207315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2cf4579ce544ab371e72864ef70d710c1323fe6ec7707b79d6aec19ee21eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xlsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wc86k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:21Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.957693 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846abd3645d34fa7f1d6227fdd26fd48a7df9a6cbc57b85790b9e7e444041b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:21Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.970730 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:21Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.972077 4606 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.972200 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.972292 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.972379 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.972485 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:21Z","lastTransitionTime":"2026-01-23T16:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:21 crc kubenswrapper[4606]: I0123 16:53:21.987452 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:21Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.002378 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:22Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.017714 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06af028d-1458-4d49-8cd4-dccbc84a469c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdd354c5bc11778b8c407aa7c9f996ee4c431839d6823dcdad92b8855d2a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c64d435f1b147fd7d914909988065557c2bd3309d0565391fe788a67f2d0de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7894a11c4c9859559472fbae0ec0fb74afbc11e3580e50c3d5c1465f7bb3177b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:22Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.034388 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:22Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.050702 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:22Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.076016 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.076074 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.076086 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.076104 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.076119 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:22Z","lastTransitionTime":"2026-01-23T16:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.080941 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8543756de22726f432a356e51ef75dfddf33eaa836c6dbcd44feaccbcbf368a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8543756de22726f432a356e51ef75dfddf33eaa836c6dbcd44feaccbcbf368a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"message\\\":\\\"g reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:53:20.799694 5739 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 16:53:20.799749 5739 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 16:53:20.799756 5739 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 16:53:20.799785 5739 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 16:53:20.799832 5739 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0123 16:53:20.799841 5739 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0123 16:53:20.799842 5739 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 16:53:20.799850 5739 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 16:53:20.799862 5739 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 16:53:20.799866 5739 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 16:53:20.799873 5739 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 16:53:20.799886 5739 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 16:53:20.799912 5739 factory.go:656] Stopping watch factory\\\\nI0123 16:53:20.799929 5739 ovnkube.go:599] Stopped ovnkube\\\\nI0123 16:53:20.799957 5739 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0123 
16:53:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:22Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.098229 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:22Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.113493 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:22Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.124699 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:22Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.143543 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:22Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.170721 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01472aab8c10837cc0e6f6952de409f201741170c85dd371b3148908a2b2dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:22Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.178874 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.179105 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:22 crc 
kubenswrapper[4606]: I0123 16:53:22.179170 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.179233 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.179294 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:22Z","lastTransitionTime":"2026-01-23T16:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.187648 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wc86k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa611a0a-e12f-403c-8cd6-666972207315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2cf4579ce544ab371e72864ef70d710c1323fe6ec7707b79d6aec19ee21eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xlsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wc86k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:22Z is after 2025-08-24T17:21:41Z" Jan 
23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.282250 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.282312 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.282326 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.282346 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.282359 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:22Z","lastTransitionTime":"2026-01-23T16:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.350872 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 03:35:10.422096736 +0000 UTC Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.385863 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.386324 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.386414 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.386574 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.386704 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:22Z","lastTransitionTime":"2026-01-23T16:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.489829 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.489887 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.489908 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.489934 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.489951 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:22Z","lastTransitionTime":"2026-01-23T16:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.593127 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.593173 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.593189 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.593209 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.593224 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:22Z","lastTransitionTime":"2026-01-23T16:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.697392 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.697456 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.697475 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.697497 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.697511 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:22Z","lastTransitionTime":"2026-01-23T16:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.783369 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz"] Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.785148 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.792493 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.792939 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.801958 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.802021 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.802039 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.802064 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.802081 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:22Z","lastTransitionTime":"2026-01-23T16:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.808264 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:22Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.825372 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:22Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.837764 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:22Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.851775 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb934e2-af1b-473c-ae3a-061098141a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nntkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:22Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.868151 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:22Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.884565 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01472aab8c10837cc0e6f6952de409f201741170c85dd371b3148908a2b2dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:22Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.899966 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wc86k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa611a0a-e12f-403c-8cd6-666972207315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2cf4579ce544ab371e72864ef70d710c1323fe6ec7707b79d6aec19ee21eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xlsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wc86k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:22Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.904350 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.904397 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.904434 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.904450 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.904460 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:22Z","lastTransitionTime":"2026-01-23T16:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.910013 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6cb934e2-af1b-473c-ae3a-061098141a23-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nntkz\" (UID: \"6cb934e2-af1b-473c-ae3a-061098141a23\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.910064 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8xsh\" (UniqueName: \"kubernetes.io/projected/6cb934e2-af1b-473c-ae3a-061098141a23-kube-api-access-l8xsh\") pod \"ovnkube-control-plane-749d76644c-nntkz\" (UID: \"6cb934e2-af1b-473c-ae3a-061098141a23\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.910095 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6cb934e2-af1b-473c-ae3a-061098141a23-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nntkz\" (UID: \"6cb934e2-af1b-473c-ae3a-061098141a23\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.910155 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6cb934e2-af1b-473c-ae3a-061098141a23-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nntkz\" (UID: \"6cb934e2-af1b-473c-ae3a-061098141a23\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.916134 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:22Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.936655 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:22Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.949894 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06af028d-1458-4d49-8cd4-dccbc84a469c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdd354c5bc11778b8c407aa7c9f996ee4c431839d6823dcdad92b8855d2a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c64d435f1b147fd7d914909988065557c2bd3309d0565391fe788a67f2d0de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7894a11c4c9859559472fbae0ec0fb74afbc11e3580e50c3d5c1465f7bb3177b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:22Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.967784 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:22Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:22 crc kubenswrapper[4606]: I0123 16:53:22.987899 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:22Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.004503 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846abd3645d34fa7f1d6227fdd26fd48a7df9a6cbc57b85790b9e7e444041b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.007158 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.007212 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.007222 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.007244 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.007258 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:23Z","lastTransitionTime":"2026-01-23T16:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.011519 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6cb934e2-af1b-473c-ae3a-061098141a23-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nntkz\" (UID: \"6cb934e2-af1b-473c-ae3a-061098141a23\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.011578 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6cb934e2-af1b-473c-ae3a-061098141a23-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nntkz\" (UID: \"6cb934e2-af1b-473c-ae3a-061098141a23\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.011634 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6cb934e2-af1b-473c-ae3a-061098141a23-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nntkz\" (UID: \"6cb934e2-af1b-473c-ae3a-061098141a23\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.011659 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8xsh\" (UniqueName: \"kubernetes.io/projected/6cb934e2-af1b-473c-ae3a-061098141a23-kube-api-access-l8xsh\") pod \"ovnkube-control-plane-749d76644c-nntkz\" (UID: \"6cb934e2-af1b-473c-ae3a-061098141a23\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.012689 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6cb934e2-af1b-473c-ae3a-061098141a23-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nntkz\" (UID: \"6cb934e2-af1b-473c-ae3a-061098141a23\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.012734 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6cb934e2-af1b-473c-ae3a-061098141a23-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nntkz\" (UID: \"6cb934e2-af1b-473c-ae3a-061098141a23\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.021747 4606 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6cb934e2-af1b-473c-ae3a-061098141a23-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nntkz\" (UID: \"6cb934e2-af1b-473c-ae3a-061098141a23\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.030937 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8xsh\" (UniqueName: \"kubernetes.io/projected/6cb934e2-af1b-473c-ae3a-061098141a23-kube-api-access-l8xsh\") pod \"ovnkube-control-plane-749d76644c-nntkz\" (UID: \"6cb934e2-af1b-473c-ae3a-061098141a23\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.035015 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\
\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.066254 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8543756de22726f432a356e51ef75dfddf33eaa
836c6dbcd44feaccbcbf368a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8543756de22726f432a356e51ef75dfddf33eaa836c6dbcd44feaccbcbf368a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"message\\\":\\\"g reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:53:20.799694 5739 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 16:53:20.799749 5739 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 16:53:20.799756 5739 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 16:53:20.799785 5739 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 16:53:20.799832 5739 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0123 16:53:20.799841 5739 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0123 16:53:20.799842 5739 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 16:53:20.799850 5739 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 16:53:20.799862 5739 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 16:53:20.799866 5739 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 16:53:20.799873 5739 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 16:53:20.799886 5739 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 16:53:20.799912 5739 factory.go:656] Stopping watch factory\\\\nI0123 16:53:20.799929 5739 ovnkube.go:599] Stopped ovnkube\\\\nI0123 16:53:20.799957 5739 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0123 
16:53:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.103530 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.112685 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.112728 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.112739 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.112755 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.112765 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:23Z","lastTransitionTime":"2026-01-23T16:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:23 crc kubenswrapper[4606]: W0123 16:53:23.119772 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6cb934e2_af1b_473c_ae3a_061098141a23.slice/crio-2a115c1e667971bb848ff63ffc485f6c4f53320e47851ce69fe908eda0f6ac28 WatchSource:0}: Error finding container 2a115c1e667971bb848ff63ffc485f6c4f53320e47851ce69fe908eda0f6ac28: Status 404 returned error can't find the container with id 2a115c1e667971bb848ff63ffc485f6c4f53320e47851ce69fe908eda0f6ac28 Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.216206 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.216236 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.216247 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.216264 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.216276 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:23Z","lastTransitionTime":"2026-01-23T16:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.315523 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.315728 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:53:23 crc kubenswrapper[4606]: E0123 16:53:23.315779 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:53:39.315740952 +0000 UTC m=+53.315933598 (durationBeforeRetry 16s). 
Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.315523 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.315728 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 16:53:23 crc kubenswrapper[4606]: E0123 16:53:23.315779 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:53:39.315740952 +0000 UTC m=+53.315933598 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.315859 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.315925 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.315948 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 16:53:23 crc kubenswrapper[4606]: E0123 16:53:23.315948 4606 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 23 16:53:23 crc kubenswrapper[4606]: E0123 16:53:23.315980 4606 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 23 16:53:23 crc kubenswrapper[4606]: E0123 16:53:23.315996 4606 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 16:53:23 crc kubenswrapper[4606]: E0123 16:53:23.316003 4606 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 23 16:53:23 crc kubenswrapper[4606]: E0123 16:53:23.316039 4606 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
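
The projected.go:194 entries above aggregate one error per missing source (each ConfigMap or Secret backing the projected volume) into a single bracketed list. A minimal sketch of that aggregation pattern, assuming a hypothetical prepareProjected helper (this is not the kubelet API; names and inputs are illustrative):

package main

import (
	"fmt"
	"strings"
)

// prepareProjected collects one error per unregistered source and joins
// them in the bracketed "[e1, e2]" style seen in the log lines above.
func prepareProjected(sources []string, registered map[string]bool) error {
	var msgs []string
	for _, s := range sources {
		if !registered[s] {
			ns, name, _ := strings.Cut(s, "/")
			msgs = append(msgs, fmt.Sprintf("object %q/%q not registered", ns, name))
		}
	}
	if len(msgs) == 0 {
		return nil
	}
	return fmt.Errorf("[%s]", strings.Join(msgs, ", "))
}

func main() {
	registered := map[string]bool{} // nothing registered for this namespace yet
	err := prepareProjected([]string{
		"openshift-network-diagnostics/kube-root-ca.crt",
		"openshift-network-diagnostics/openshift-service-ca.crt",
	}, registered)
	fmt.Println(err) // prints the same bracketed aggregate shape as above
}
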
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 16:53:23 crc kubenswrapper[4606]: E0123 16:53:23.316085 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 16:53:39.316075199 +0000 UTC m=+53.316267605 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 16:53:23 crc kubenswrapper[4606]: E0123 16:53:23.316106 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 16:53:39.316095529 +0000 UTC m=+53.316287925 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:53:23 crc kubenswrapper[4606]: E0123 16:53:23.316111 4606 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 16:53:23 crc kubenswrapper[4606]: E0123 16:53:23.316146 4606 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 16:53:23 crc kubenswrapper[4606]: E0123 16:53:23.316159 4606 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:53:23 crc kubenswrapper[4606]: E0123 16:53:23.316214 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 16:53:39.316189171 +0000 UTC m=+53.316381577 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.318491 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.318533 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.318545 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.318565 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.318579 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:23Z","lastTransitionTime":"2026-01-23T16:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.351511 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 19:50:14.546036935 +0000 UTC Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.364854 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.364959 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:53:23 crc kubenswrapper[4606]: E0123 16:53:23.365013 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.364885 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:53:23 crc kubenswrapper[4606]: E0123 16:53:23.365165 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:53:23 crc kubenswrapper[4606]: E0123 16:53:23.365309 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.420962 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.421007 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.421018 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.421036 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.421048 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:23Z","lastTransitionTime":"2026-01-23T16:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.523790 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.523840 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.523849 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.523875 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.523897 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:23Z","lastTransitionTime":"2026-01-23T16:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.583216 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.599165 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.613200 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01472aab8c10837cc0e6f6952de409f201741170c85dd371b3148908a2b2dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.624502 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wc86k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa611a0a-e12f-403c-8cd6-666972207315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2cf4579ce544ab371e72864ef70d710c1323fe6ec7707b79d6aec19ee21eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xlsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wc86k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.626164 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.626191 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.626199 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.626215 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.626224 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:23Z","lastTransitionTime":"2026-01-23T16:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.635942 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.647532 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06af028d-1458-4d49-8cd4-dccbc84a469c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdd354c5bc11778b8c407aa7c9f996ee4c431839d6823dcdad92b8855d2a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c64d435f1b147fd7d914909988065557c2bd3309d0565391fe788a67f2d0de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7894a11c4c9859559472fbae0ec0fb74afbc11e3580e50c3d5c1465f7bb3177b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.663276 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.663336 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.663349 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.663371 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.663387 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:23Z","lastTransitionTime":"2026-01-23T16:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.663635 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.675842 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:23 crc kubenswrapper[4606]: E0123 16:53:23.675893 4606 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a4ca771-bbec-40fb-95a6-5cc937c9c7f8\\\",\\\"systemUUID\\\":\\\"843df5bf-5299-4274-8c79-9400b122a934\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.680210 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.680250 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.680263 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.680280 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.680293 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:23Z","lastTransitionTime":"2026-01-23T16:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.689210 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846abd3645d34fa7f1d6227fdd26fd48a7df9a6cbc57b85790b9e7e444041b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:23 crc kubenswrapper[4606]: E0123 16:53:23.692091 4606 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a4ca771-bbec-40fb-95a6-5cc937c9c7f8\\\",\\\"systemUUID\\\":\\\"843df5bf-5299-4274-8c79-9400b122a934\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.695520 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.695578 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.695616 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.695637 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.695652 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:23Z","lastTransitionTime":"2026-01-23T16:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.704305 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:23 crc kubenswrapper[4606]: E0123 16:53:23.706964 4606 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a4ca771-bbec-40fb-95a6-5cc937c9c7f8\\\",\\\"systemUUID\\\":\\\"843df5bf-5299-4274-8c79-9400b122a934\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.710751 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.710806 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.710824 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.710845 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.710859 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:23Z","lastTransitionTime":"2026-01-23T16:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.719233 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubel
et\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:23 crc kubenswrapper[4606]: E0123 16:53:23.725740 4606 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a4ca771-bbec-40fb-95a6-5cc937c9c7f8\\\",\\\"systemUUID\\\":\\\"843df5bf-5299-4274-8c79-9400b122a934\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.733052 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.733101 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.733113 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.733134 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.733150 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:23Z","lastTransitionTime":"2026-01-23T16:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.735957 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j9tqd_a0f37a36-f502-425c-a9e5-035bf475742b/ovnkube-controller/0.log" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.738668 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" event={"ID":"a0f37a36-f502-425c-a9e5-035bf475742b","Type":"ContainerStarted","Data":"88fcfccc6c7e01177a9f638bfe9387bfaeed50de2ba540b29910bf3d23046e51"} Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.739204 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.739373 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" event={"ID":"6cb934e2-af1b-473c-ae3a-061098141a23","Type":"ContainerStarted","Data":"2a115c1e667971bb848ff63ffc485f6c4f53320e47851ce69fe908eda0f6ac28"} Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.742394 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8543756de22726f432a356e51ef75dfddf33eaa
836c6dbcd44feaccbcbf368a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8543756de22726f432a356e51ef75dfddf33eaa836c6dbcd44feaccbcbf368a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"message\\\":\\\"g reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:53:20.799694 5739 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 16:53:20.799749 5739 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 16:53:20.799756 5739 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 16:53:20.799785 5739 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 16:53:20.799832 5739 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0123 16:53:20.799841 5739 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0123 16:53:20.799842 5739 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 16:53:20.799850 5739 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 16:53:20.799862 5739 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 16:53:20.799866 5739 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 16:53:20.799873 5739 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 16:53:20.799886 5739 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 16:53:20.799912 5739 factory.go:656] Stopping watch factory\\\\nI0123 16:53:20.799929 5739 ovnkube.go:599] Stopped ovnkube\\\\nI0123 16:53:20.799957 5739 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0123 
16:53:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:23 crc kubenswrapper[4606]: E0123 16:53:23.753243 4606 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a4ca771-bbec-40fb-95a6-5cc937c9c7f8\\\",\\\"systemUUID\\\":\\\"843df5bf-5299-4274-8c79-9400b122a934\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:23 crc kubenswrapper[4606]: E0123 16:53:23.753366 4606 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.754290 4606 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.755200 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.755258 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:23 crc kubenswrapper[4606]: 
I0123 16:53:23.755271 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.755291 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.755308 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:23Z","lastTransitionTime":"2026-01-23T16:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.768024 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.779997 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.791831 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb934e2-af1b-473c-ae3a-061098141a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nntkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.823857 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06af028d-1458-4d49-8cd4-dccbc84a469c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdd354c5bc11778b8c407aa7c9f996ee4c431839d6823dcdad92b8855d2a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c64d435f1b147fd7d914909988065557c2bd3309d0565391fe788a67f2d0de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"
startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7894a11c4c9859559472fbae0ec0fb74afbc11e3580e50c3d5c1465f7bb3177b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.841318 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.857861 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.857903 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.857912 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.857931 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.857943 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:23Z","lastTransitionTime":"2026-01-23T16:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.858050 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.873953 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846abd3645d34fa7f1d6227fdd26fd48a7df9a6cbc57b85790b9e7e444041b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.887560 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.901203 4606 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.921263 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-api
server-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.941547 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88fcfccc6c7e01177a9f638bfe9387bfaeed50de2ba540b29910bf3d23046e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8543756de22726f432a356e51ef75dfddf33eaa836c6dbcd44feaccbcbf368a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"message\\\":\\\"g reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:53:20.799694 5739 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 16:53:20.799749 5739 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 16:53:20.799756 5739 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 16:53:20.799785 5739 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 16:53:20.799832 5739 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0123 16:53:20.799841 5739 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0123 16:53:20.799842 5739 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 16:53:20.799850 5739 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 16:53:20.799862 5739 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 16:53:20.799866 5739 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 16:53:20.799873 5739 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 16:53:20.799886 5739 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 16:53:20.799912 5739 factory.go:656] Stopping watch factory\\\\nI0123 16:53:20.799929 5739 ovnkube.go:599] Stopped ovnkube\\\\nI0123 16:53:20.799957 5739 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0123 
16:53:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.958988 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.960786 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.960814 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.960824 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.960840 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.960874 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:23Z","lastTransitionTime":"2026-01-23T16:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.974075 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:23 crc kubenswrapper[4606]: I0123 16:53:23.987732 4606 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.002799 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb934e2-af1b-473c-ae3a-061098141a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nntkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.014743 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wc86k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa611a0a-e12f-403c-8cd6-666972207315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2cf4579ce544ab371e72864ef70d710c1323fe6ec7707b79d6aec19ee21eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xlsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wc86k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.030338 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.047738 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01472aab8c10837cc0e6f6952de409f201741170c85dd371b3148908a2b2dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.063435 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.063500 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:24 crc 
kubenswrapper[4606]: I0123 16:53:24.063512 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.063635 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.063650 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:24Z","lastTransitionTime":"2026-01-23T16:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.166610 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.166658 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.166669 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.166685 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.166696 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:24Z","lastTransitionTime":"2026-01-23T16:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.270270 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.270321 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.270332 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.270352 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.270367 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:24Z","lastTransitionTime":"2026-01-23T16:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.280876 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-qqwhl"] Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.281389 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:53:24 crc kubenswrapper[4606]: E0123 16:53:24.281496 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.297285 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb934e2-af1b-473c-ae3a-061098141a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nntkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.316850 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.337631 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01472aab8c10837cc0e6f6952de409f201741170c85dd371b3148908a2b2dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.348615 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wc86k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa611a0a-e12f-403c-8cd6-666972207315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2cf4579ce544ab371e72864ef70d710c1323fe6ec7707b79d6aec19ee21eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xlsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wc86k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.352462 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 18:12:12.764220504 +0000 UTC Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.363101 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.374192 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.374245 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.374258 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.374618 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.374650 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:24Z","lastTransitionTime":"2026-01-23T16:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.377225 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.390932 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846abd3645d34fa7f1d6227fdd26fd48a7df9a6cbc57b85790b9e7e444041b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.410702 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.425812 4606 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bae240c5-8926-4fc6-b0d9-fdf64bacecfe-metrics-certs\") pod \"network-metrics-daemon-qqwhl\" (UID: \"bae240c5-8926-4fc6-b0d9-fdf64bacecfe\") " pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.426269 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcnkr\" (UniqueName: \"kubernetes.io/projected/bae240c5-8926-4fc6-b0d9-fdf64bacecfe-kube-api-access-tcnkr\") pod \"network-metrics-daemon-qqwhl\" (UID: \"bae240c5-8926-4fc6-b0d9-fdf64bacecfe\") " pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.427925 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\
",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.444369 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6
355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 
16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.457881 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06af028d-1458-4d49-8cd4-dccbc84a469c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdd354c5bc11778b8c407aa7c9f996ee4c431839d6823dcdad92b8855d2a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c64d435f1b147fd7d914909988065557c2bd3309d0565391fe788a67f2d0de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7894a11c4c9859559472fbae0ec0fb74afbc11e3580e50c3d5c1465f7bb3177b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.477148 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.477191 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.477201 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.477217 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.477235 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:24Z","lastTransitionTime":"2026-01-23T16:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.480110 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88fcfccc6c7e01177a9f638bfe9387bfaeed50de2ba540b29910bf3d23046e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8543756de22726f432a356e51ef75dfddf33eaa836c6dbcd44feaccbcbf368a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"message\\\":\\\"g reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:53:20.799694 5739 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 16:53:20.799749 5739 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 16:53:20.799756 5739 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 16:53:20.799785 5739 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 16:53:20.799832 5739 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0123 16:53:20.799841 5739 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0123 16:53:20.799842 5739 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 16:53:20.799850 5739 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 16:53:20.799862 5739 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 16:53:20.799866 5739 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 16:53:20.799873 5739 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 16:53:20.799886 5739 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 16:53:20.799912 5739 factory.go:656] Stopping watch factory\\\\nI0123 16:53:20.799929 5739 ovnkube.go:599] Stopped ovnkube\\\\nI0123 16:53:20.799957 5739 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0123 
16:53:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.497932 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qqwhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae240c5-8926-4fc6-b0d9-fdf64bacecfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcnkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcnkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qqwhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.512784 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.527171 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bae240c5-8926-4fc6-b0d9-fdf64bacecfe-metrics-certs\") pod \"network-metrics-daemon-qqwhl\" (UID: \"bae240c5-8926-4fc6-b0d9-fdf64bacecfe\") " pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.527229 4606 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcnkr\" (UniqueName: \"kubernetes.io/projected/bae240c5-8926-4fc6-b0d9-fdf64bacecfe-kube-api-access-tcnkr\") pod \"network-metrics-daemon-qqwhl\" (UID: \"bae240c5-8926-4fc6-b0d9-fdf64bacecfe\") " pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:53:24 crc kubenswrapper[4606]: E0123 16:53:24.527629 4606 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 16:53:24 crc kubenswrapper[4606]: E0123 16:53:24.527673 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bae240c5-8926-4fc6-b0d9-fdf64bacecfe-metrics-certs podName:bae240c5-8926-4fc6-b0d9-fdf64bacecfe nodeName:}" failed. No retries permitted until 2026-01-23 16:53:25.027659976 +0000 UTC m=+39.027852382 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bae240c5-8926-4fc6-b0d9-fdf64bacecfe-metrics-certs") pod "network-metrics-daemon-qqwhl" (UID: "bae240c5-8926-4fc6-b0d9-fdf64bacecfe") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.534324 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.548266 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcnkr\" (UniqueName: \"kubernetes.io/projected/bae240c5-8926-4fc6-b0d9-fdf64bacecfe-kube-api-access-tcnkr\") pod \"network-metrics-daemon-qqwhl\" (UID: \"bae240c5-8926-4fc6-b0d9-fdf64bacecfe\") " pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.557432 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"
ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.579378 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.579410 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.579424 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.579440 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.579452 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:24Z","lastTransitionTime":"2026-01-23T16:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.681376 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.681422 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.681434 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.681451 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.681462 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:24Z","lastTransitionTime":"2026-01-23T16:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.744837 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" event={"ID":"6cb934e2-af1b-473c-ae3a-061098141a23","Type":"ContainerStarted","Data":"e55898c0677b92ca8dcd4e764ab3cf1c4b605a057574f47ddf2820773938458c"} Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.744888 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" event={"ID":"6cb934e2-af1b-473c-ae3a-061098141a23","Type":"ContainerStarted","Data":"76dd35929bfd277c644748bddc174d64f5fda7a08c0eb6b3c879c62d7d58ef39"} Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.746866 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j9tqd_a0f37a36-f502-425c-a9e5-035bf475742b/ovnkube-controller/1.log" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.747791 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j9tqd_a0f37a36-f502-425c-a9e5-035bf475742b/ovnkube-controller/0.log" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.755156 4606 generic.go:334] "Generic (PLEG): container finished" podID="a0f37a36-f502-425c-a9e5-035bf475742b" containerID="88fcfccc6c7e01177a9f638bfe9387bfaeed50de2ba540b29910bf3d23046e51" exitCode=1 Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.755209 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" event={"ID":"a0f37a36-f502-425c-a9e5-035bf475742b","Type":"ContainerDied","Data":"88fcfccc6c7e01177a9f638bfe9387bfaeed50de2ba540b29910bf3d23046e51"} Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.755249 4606 scope.go:117] "RemoveContainer" containerID="f8543756de22726f432a356e51ef75dfddf33eaa836c6dbcd44feaccbcbf368a" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.756056 4606 scope.go:117] "RemoveContainer" containerID="88fcfccc6c7e01177a9f638bfe9387bfaeed50de2ba540b29910bf3d23046e51" Jan 23 16:53:24 crc kubenswrapper[4606]: E0123 16:53:24.756331 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-j9tqd_openshift-ovn-kubernetes(a0f37a36-f502-425c-a9e5-035bf475742b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.762466 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb934e2-af1b-473c-ae3a-061098141a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76dd35929bfd277c644748bddc174d64f5fda7a08c0eb6b3c879c62d7d58ef39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55898c0677b92ca8dcd4e764ab3cf1c4b605a057574f47ddf2820773938458c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nntkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 
16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.776449 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.784374 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.784415 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.784425 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.784440 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.784449 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:24Z","lastTransitionTime":"2026-01-23T16:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.791869 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01472aab8c10837cc0e6f6952de409f201741170c85dd371b3148908a2b2dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.801371 4606 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-wc86k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa611a0a-e12f-403c-8cd6-666972207315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2cf4579ce544ab371e72864ef70d710c1323fe6ec7707b79d6aec19ee21eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xlsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wc86k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.815740 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.830189 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06af028d-1458-4d49-8cd4-dccbc84a469c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdd354c5bc11778b8c407aa7c9f996ee4c431839d6823dcdad92b8855d2a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c64d435f1b147fd7d914909988065557c2bd3309d0565391fe788a67f2d0de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7894a11c4c9859559472fbae0ec0fb74afbc11e3580e50c3d5c1465f7bb3177b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.844035 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.858023 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.871976 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846abd3645d34fa7f1d6227fdd26fd48a7df9a6cbc57b85790b9e7e444041b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.886730 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.886877 4606 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.886906 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.886916 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.886941 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.886953 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:24Z","lastTransitionTime":"2026-01-23T16:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.900942 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.920036 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88fcfccc6c7e01177a9f638bfe9387bfaeed50de
2ba540b29910bf3d23046e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8543756de22726f432a356e51ef75dfddf33eaa836c6dbcd44feaccbcbf368a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"message\\\":\\\"g reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:53:20.799694 5739 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 16:53:20.799749 5739 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 16:53:20.799756 5739 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 16:53:20.799785 5739 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 16:53:20.799832 5739 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0123 16:53:20.799841 5739 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0123 16:53:20.799842 5739 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 16:53:20.799850 5739 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 16:53:20.799862 5739 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 16:53:20.799866 5739 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 16:53:20.799873 5739 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 16:53:20.799886 5739 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 16:53:20.799912 5739 factory.go:656] Stopping watch factory\\\\nI0123 16:53:20.799929 5739 ovnkube.go:599] Stopped ovnkube\\\\nI0123 16:53:20.799957 5739 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0123 
16:53:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.931018 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qqwhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae240c5-8926-4fc6-b0d9-fdf64bacecfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcnkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcnkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qqwhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.943113 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.958144 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.969882 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.983766 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.988994 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.989033 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.989044 4606 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.989064 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.989076 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:24Z","lastTransitionTime":"2026-01-23T16:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:24 crc kubenswrapper[4606]: I0123 16:53:24.999635 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.013213 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.025967 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb934e2-af1b-473c-ae3a-061098141a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76dd35929bfd277c644748bddc174d64f5fda7a08c0eb6b3c879c62d7d58ef39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55898c0677b92ca8dcd4e764ab3cf1c4b605a057574f47ddf2820773938458c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:
22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nntkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.033025 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bae240c5-8926-4fc6-b0d9-fdf64bacecfe-metrics-certs\") pod \"network-metrics-daemon-qqwhl\" (UID: \"bae240c5-8926-4fc6-b0d9-fdf64bacecfe\") " pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:53:25 crc kubenswrapper[4606]: E0123 16:53:25.033145 4606 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 16:53:25 crc kubenswrapper[4606]: E0123 16:53:25.033192 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bae240c5-8926-4fc6-b0d9-fdf64bacecfe-metrics-certs podName:bae240c5-8926-4fc6-b0d9-fdf64bacecfe nodeName:}" failed. No retries permitted until 2026-01-23 16:53:26.033178649 +0000 UTC m=+40.033371055 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bae240c5-8926-4fc6-b0d9-fdf64bacecfe-metrics-certs") pod "network-metrics-daemon-qqwhl" (UID: "bae240c5-8926-4fc6-b0d9-fdf64bacecfe") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.042015 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.055145 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01472aab8c10837cc0e6f6952de409f201741170c85dd371b3148908a2b2dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.065823 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wc86k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa611a0a-e12f-403c-8cd6-666972207315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2cf4579ce544ab371e72864ef70d710c1323fe6ec7707b79d6aec19ee21eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xlsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wc86k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.079068 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.091635 4606 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.091665 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.091675 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.091691 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.091702 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:25Z","lastTransitionTime":"2026-01-23T16:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.095553 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.110068 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b
2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.123334 4606 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06af028d-1458-4d49-8cd4-dccbc84a469c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdd354c5bc11778b8c407aa7c9f996ee4c431839d6823dcdad92b8855d2a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c64d435f1b147fd7d914909988065557c2bd3309d0565391fe788a67f2d0de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7894a11c4c985955
9472fbae0ec0fb74afbc11e3580e50c3d5c1465f7bb3177b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.136973 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.150691 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.163943 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846abd3645d34fa7f1d6227fdd26fd48a7df9a6cbc57b85790b9e7e444041b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.183293 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88fcfccc6c7e01177a9f638bfe9387bfaeed50de2ba540b29910bf3d23046e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8543756de22726f432a356e51ef75dfddf33eaa836c6dbcd44feaccbcbf368a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"message\\\":\\\"g reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:53:20.799694 5739 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 16:53:20.799749 5739 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 16:53:20.799756 5739 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 16:53:20.799785 5739 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 16:53:20.799832 5739 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0123 16:53:20.799841 5739 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0123 16:53:20.799842 5739 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 16:53:20.799850 5739 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 16:53:20.799862 5739 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 16:53:20.799866 5739 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 16:53:20.799873 5739 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 16:53:20.799886 5739 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 16:53:20.799912 5739 factory.go:656] Stopping watch factory\\\\nI0123 16:53:20.799929 5739 ovnkube.go:599] Stopped ovnkube\\\\nI0123 16:53:20.799957 5739 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0123 16:53:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88fcfccc6c7e01177a9f638bfe9387bfaeed50de2ba540b29910bf3d23046e51\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"message\\\":\\\"is/informers/externalversions/factory.go:140\\\\nI0123 16:53:23.868514 5936 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 16:53:23.868561 5936 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:53:23.870618 5936 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 16:53:23.868972 5936 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:53:23.871310 5936 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 16:53:23.869037 5936 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:53:23.869059 5936 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:53:23.872058 5936 factory.go:656] Stopping watch factory\\\\nI0123 16:53:23.872068 5936 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.194730 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.194799 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.194811 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.194827 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.194837 4606 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:25Z","lastTransitionTime":"2026-01-23T16:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.199649 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qqwhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae240c5-8926-4fc6-b0d9-fdf64bacecfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcnkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcnkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qqwhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:53:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.298052 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.298109 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.298121 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.298141 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.298157 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:25Z","lastTransitionTime":"2026-01-23T16:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.353488 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 21:53:00.583827846 +0000 UTC Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.365026 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:53:25 crc kubenswrapper[4606]: E0123 16:53:25.365196 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.365706 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.365803 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:53:25 crc kubenswrapper[4606]: E0123 16:53:25.365822 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:53:25 crc kubenswrapper[4606]: E0123 16:53:25.365953 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.404636 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.404683 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.404694 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.404709 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.404722 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:25Z","lastTransitionTime":"2026-01-23T16:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.507554 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.507824 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.507833 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.507847 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.507858 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:25Z","lastTransitionTime":"2026-01-23T16:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.610889 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.610932 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.610942 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.610957 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.610969 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:25Z","lastTransitionTime":"2026-01-23T16:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.713280 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.713330 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.713343 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.713363 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.713377 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:25Z","lastTransitionTime":"2026-01-23T16:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.760870 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j9tqd_a0f37a36-f502-425c-a9e5-035bf475742b/ovnkube-controller/1.log" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.766435 4606 scope.go:117] "RemoveContainer" containerID="88fcfccc6c7e01177a9f638bfe9387bfaeed50de2ba540b29910bf3d23046e51" Jan 23 16:53:25 crc kubenswrapper[4606]: E0123 16:53:25.766698 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-j9tqd_openshift-ovn-kubernetes(a0f37a36-f502-425c-a9e5-035bf475742b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.789750 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01472aab8c10837cc0e6f6952de409f201741170c85dd371b3148908a2b2dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"term
inated\\\":{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://461
82fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.804692 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wc86k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa611a0a-e12f-403c-8cd6-666972207315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2cf4579ce544ab371e72864ef70d710c1323fe6ec7707b79d6aec19ee21eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xlsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wc86k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.815827 4606 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.815875 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.815889 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.815912 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.815927 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:25Z","lastTransitionTime":"2026-01-23T16:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.817675 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.833164 4606 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.850508 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06af028d-1458-4d49-8cd4-dccbc84a469c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdd354c5bc11778b8c407aa7c9f996ee4c431839d6823dcdad92b8855d2a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c64d435f1b147fd7d914909988065557c2bd3309d0565391fe788a67f2d0de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7894a11c4c9859559472fbae0ec0fb74afbc11e3580e50c3d5c1465f7bb3177b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:25Z is after 2025-08-24T17:21:41Z"
Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.865370 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.877378 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:25Z is after 2025-08-24T17:21:41Z"
Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.887967 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846abd3645d34fa7f1d6227fdd26fd48a7df9a6cbc57b85790b9e7e444041b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:25Z is after 2025-08-24T17:21:41Z"
Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.897900 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.908814 4606 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.918939 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.919146 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.919204 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.919267 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.919352 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:25Z","lastTransitionTime":"2026-01-23T16:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.921222 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qqwhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae240c5-8926-4fc6-b0d9-fdf64bacecfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcnkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcnkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qqwhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.940921 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88fcfccc6c7e01177a9f638bfe9387bfaeed50de
2ba540b29910bf3d23046e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88fcfccc6c7e01177a9f638bfe9387bfaeed50de2ba540b29910bf3d23046e51\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"message\\\":\\\"is/informers/externalversions/factory.go:140\\\\nI0123 16:53:23.868514 5936 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 16:53:23.868561 5936 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:53:23.870618 5936 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 16:53:23.868972 5936 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:53:23.871310 5936 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 16:53:23.869037 5936 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:53:23.869059 5936 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:53:23.872058 5936 factory.go:656] Stopping watch factory\\\\nI0123 16:53:23.872068 5936 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-j9tqd_openshift-ovn-kubernetes(a0f37a36-f502-425c-a9e5-035bf475742b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:25Z is after 2025-08-24T17:21:41Z"
Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.957535 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:25Z is after 2025-08-24T17:21:41Z"
Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.967757 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.980290 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:25 crc kubenswrapper[4606]: I0123 16:53:25.994178 
4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb934e2-af1b-473c-ae3a-061098141a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76dd35929bfd277c644748bddc174d64f5fda7a08c0eb6b3c879c62d7d58ef39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55898c0677b92ca8dcd4e764ab3cf1c4b605a057574f47ddf2820773938458c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nntkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:25Z is after 2025-08-24T17:21:41Z"
Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.021603 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.021644 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.021655 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.021670 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.021680 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:26Z","lastTransitionTime":"2026-01-23T16:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.042677 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bae240c5-8926-4fc6-b0d9-fdf64bacecfe-metrics-certs\") pod \"network-metrics-daemon-qqwhl\" (UID: \"bae240c5-8926-4fc6-b0d9-fdf64bacecfe\") " pod="openshift-multus/network-metrics-daemon-qqwhl"
Jan 23 16:53:26 crc kubenswrapper[4606]: E0123 16:53:26.042875 4606 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 23 16:53:26 crc kubenswrapper[4606]: E0123 16:53:26.042983 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bae240c5-8926-4fc6-b0d9-fdf64bacecfe-metrics-certs podName:bae240c5-8926-4fc6-b0d9-fdf64bacecfe nodeName:}" failed. No retries permitted until 2026-01-23 16:53:28.042961407 +0000 UTC m=+42.043153813 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bae240c5-8926-4fc6-b0d9-fdf64bacecfe-metrics-certs") pod "network-metrics-daemon-qqwhl" (UID: "bae240c5-8926-4fc6-b0d9-fdf64bacecfe") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.123619 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.123663 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.123676 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.123697 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.123712 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:26Z","lastTransitionTime":"2026-01-23T16:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.226263 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.226509 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.226576 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.226677 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.226739 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:26Z","lastTransitionTime":"2026-01-23T16:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.328508 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.328556 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.328564 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.328579 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.328616 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:26Z","lastTransitionTime":"2026-01-23T16:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.354344 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 16:29:14.176110345 +0000 UTC Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.364659 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:53:26 crc kubenswrapper[4606]: E0123 16:53:26.365110 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.381220 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:26Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.395766 4606 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:26Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.407216 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:26Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.419460 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb934e2-af1b-473c-ae3a-061098141a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76dd35929bfd277c644748bddc174d64f5fda7a08c0eb6b3c879c62d7d58ef39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55898c0677b92ca8dcd4e764ab3cf1c4b605a057574f47ddf2820773938458c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nntkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:26Z is after 2025-08-24T17:21:41Z" Jan 23 
16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.432500 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.432562 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.432574 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.432612 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.432626 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:26Z","lastTransitionTime":"2026-01-23T16:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.436732 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:26Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 
16:53:26.453821 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01472aab8c10837cc0e6f6952de409f201741170c85dd371b3148908a2b2dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfb
b085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\
\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:26Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.465248 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wc86k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa611a0a-e12f-403c-8cd6-666972207315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2cf4579ce544ab371e72864ef70d710c1323fe6ec7707b79d6aec19ee21eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xlsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wc86k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:26Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.475774 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846abd3645d34fa7f1d6227fdd26fd48a7df9a6cbc57b85790b9e7e444041b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:26Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.487066 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:26Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.502624 4606 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:26Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.521987 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-api
server-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:26Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.534627 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06af028d-1458-4d49-8cd4-dccbc84a469c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdd354c5bc11778b8c407aa7c9f996ee4c431839d6823dcdad92b8855d2a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c64d435f1b147fd7d914909988065557c2bd3309d0565391fe788a67f2d0de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7894a11c4c9859559472fbae0ec0fb74afbc11e3580e50c3d5c1465f7bb3177b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:26Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.535012 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.535046 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.535059 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.535076 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.535087 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:26Z","lastTransitionTime":"2026-01-23T16:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.547375 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:26Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.561216 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:26Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.578963 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88fcfccc6c7e01177a9f638bfe9387bfaeed50de
2ba540b29910bf3d23046e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88fcfccc6c7e01177a9f638bfe9387bfaeed50de2ba540b29910bf3d23046e51\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"message\\\":\\\"is/informers/externalversions/factory.go:140\\\\nI0123 16:53:23.868514 5936 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 16:53:23.868561 5936 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:53:23.870618 5936 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 16:53:23.868972 5936 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:53:23.871310 5936 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 16:53:23.869037 5936 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:53:23.869059 5936 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:53:23.872058 5936 factory.go:656] Stopping watch factory\\\\nI0123 16:53:23.872068 5936 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-j9tqd_openshift-ovn-kubernetes(a0f37a36-f502-425c-a9e5-035bf475742b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:26Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.598338 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qqwhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae240c5-8926-4fc6-b0d9-fdf64bacecfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcnkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcnkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qqwhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:26Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.637169 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.637217 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.637225 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.637265 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.637281 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:26Z","lastTransitionTime":"2026-01-23T16:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.740218 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.740257 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.740269 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.740284 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.740295 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:26Z","lastTransitionTime":"2026-01-23T16:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.842958 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.843030 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.843043 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.843091 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.843103 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:26Z","lastTransitionTime":"2026-01-23T16:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.945460 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.945504 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.945522 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.945541 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:26 crc kubenswrapper[4606]: I0123 16:53:26.945556 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:26Z","lastTransitionTime":"2026-01-23T16:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.047952 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.047993 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.048003 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.048018 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.048029 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:27Z","lastTransitionTime":"2026-01-23T16:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.150434 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.150495 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.150511 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.150538 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.150555 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:27Z","lastTransitionTime":"2026-01-23T16:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.252848 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.252904 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.252919 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.252941 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.252992 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:27Z","lastTransitionTime":"2026-01-23T16:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.354749 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 18:58:00.138027325 +0000 UTC Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.355542 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.355616 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.355635 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.355655 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.355672 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:27Z","lastTransitionTime":"2026-01-23T16:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.365158 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.365178 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.365229 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:53:27 crc kubenswrapper[4606]: E0123 16:53:27.365333 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:53:27 crc kubenswrapper[4606]: E0123 16:53:27.365536 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:53:27 crc kubenswrapper[4606]: E0123 16:53:27.365727 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.458886 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.459012 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.459038 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.459073 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.459099 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:27Z","lastTransitionTime":"2026-01-23T16:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.562099 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.562136 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.562144 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.562158 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.562168 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:27Z","lastTransitionTime":"2026-01-23T16:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.665049 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.665076 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.665083 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.665095 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.665103 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:27Z","lastTransitionTime":"2026-01-23T16:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.768253 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.768285 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.768312 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.768326 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.768334 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:27Z","lastTransitionTime":"2026-01-23T16:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.871139 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.871178 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.871213 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.871226 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.871235 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:27Z","lastTransitionTime":"2026-01-23T16:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.973824 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.973901 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.973924 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.973972 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:27 crc kubenswrapper[4606]: I0123 16:53:27.973989 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:27Z","lastTransitionTime":"2026-01-23T16:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.063746 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bae240c5-8926-4fc6-b0d9-fdf64bacecfe-metrics-certs\") pod \"network-metrics-daemon-qqwhl\" (UID: \"bae240c5-8926-4fc6-b0d9-fdf64bacecfe\") " pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:53:28 crc kubenswrapper[4606]: E0123 16:53:28.063923 4606 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 16:53:28 crc kubenswrapper[4606]: E0123 16:53:28.063985 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bae240c5-8926-4fc6-b0d9-fdf64bacecfe-metrics-certs podName:bae240c5-8926-4fc6-b0d9-fdf64bacecfe nodeName:}" failed. No retries permitted until 2026-01-23 16:53:32.063967018 +0000 UTC m=+46.064159424 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bae240c5-8926-4fc6-b0d9-fdf64bacecfe-metrics-certs") pod "network-metrics-daemon-qqwhl" (UID: "bae240c5-8926-4fc6-b0d9-fdf64bacecfe") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.076551 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.076651 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.076662 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.076682 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.076693 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:28Z","lastTransitionTime":"2026-01-23T16:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.179931 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.179964 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.179972 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.179985 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.179994 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:28Z","lastTransitionTime":"2026-01-23T16:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.282048 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.282359 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.282477 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.282571 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.282738 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:28Z","lastTransitionTime":"2026-01-23T16:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.355011 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 08:00:46.494826174 +0000 UTC Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.365492 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:53:28 crc kubenswrapper[4606]: E0123 16:53:28.365766 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe" Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.385728 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.385816 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.385833 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.385887 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.385906 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:28Z","lastTransitionTime":"2026-01-23T16:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.488553 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.488613 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.488625 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.488642 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.488653 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:28Z","lastTransitionTime":"2026-01-23T16:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.591762 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.591802 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.591812 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.591829 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.591842 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:28Z","lastTransitionTime":"2026-01-23T16:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.694066 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.694104 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.694115 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.694132 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.694145 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:28Z","lastTransitionTime":"2026-01-23T16:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.795843 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.795891 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.795904 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.795919 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.795930 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:28Z","lastTransitionTime":"2026-01-23T16:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.898367 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.899034 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.899111 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.899184 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:28 crc kubenswrapper[4606]: I0123 16:53:28.899241 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:28Z","lastTransitionTime":"2026-01-23T16:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.001810 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.001846 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.001857 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.001872 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.001882 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:29Z","lastTransitionTime":"2026-01-23T16:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.106694 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.106767 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.106794 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.106828 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.106861 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:29Z","lastTransitionTime":"2026-01-23T16:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.209390 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.209441 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.209456 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.209478 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.209505 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:29Z","lastTransitionTime":"2026-01-23T16:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.312342 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.312391 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.312406 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.312427 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.312478 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:29Z","lastTransitionTime":"2026-01-23T16:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.355578 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 11:06:55.959647483 +0000 UTC Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.365051 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:53:29 crc kubenswrapper[4606]: E0123 16:53:29.365217 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.365308 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.365445 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:53:29 crc kubenswrapper[4606]: E0123 16:53:29.365647 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:53:29 crc kubenswrapper[4606]: E0123 16:53:29.365852 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.414841 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.414887 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.414898 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.414920 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.414936 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:29Z","lastTransitionTime":"2026-01-23T16:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.516986 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.517216 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.517292 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.517375 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.517446 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:29Z","lastTransitionTime":"2026-01-23T16:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.619359 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.619400 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.619412 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.619427 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.619439 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:29Z","lastTransitionTime":"2026-01-23T16:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.722279 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.722323 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.722336 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.722387 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.722400 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:29Z","lastTransitionTime":"2026-01-23T16:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.824772 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.824812 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.824823 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.824839 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.824849 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:29Z","lastTransitionTime":"2026-01-23T16:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.928317 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.928356 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.928368 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.928384 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:29 crc kubenswrapper[4606]: I0123 16:53:29.928396 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:29Z","lastTransitionTime":"2026-01-23T16:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.031061 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.031120 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.031145 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.031174 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.031199 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:30Z","lastTransitionTime":"2026-01-23T16:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.134485 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.134523 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.134533 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.134547 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.134555 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:30Z","lastTransitionTime":"2026-01-23T16:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.237733 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.237981 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.238045 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.238108 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.238166 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:30Z","lastTransitionTime":"2026-01-23T16:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.340557 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.340869 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.340974 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.341057 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.341128 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:30Z","lastTransitionTime":"2026-01-23T16:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.356171 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 15:08:03.564654741 +0000 UTC Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.365784 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:53:30 crc kubenswrapper[4606]: E0123 16:53:30.366385 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.442888 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.443114 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.443172 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.443231 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.443285 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:30Z","lastTransitionTime":"2026-01-23T16:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.546278 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.546319 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.546334 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.546351 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.546361 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:30Z","lastTransitionTime":"2026-01-23T16:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.648722 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.648761 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.648770 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.648786 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.648796 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:30Z","lastTransitionTime":"2026-01-23T16:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.751298 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.751334 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.751343 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.751359 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.751371 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:30Z","lastTransitionTime":"2026-01-23T16:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.854011 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.854057 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.854069 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.854086 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.854098 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:30Z","lastTransitionTime":"2026-01-23T16:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.957088 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.957926 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.958029 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.958186 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:30 crc kubenswrapper[4606]: I0123 16:53:30.958314 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:30Z","lastTransitionTime":"2026-01-23T16:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.062880 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.062924 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.062934 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.062949 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.062959 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:31Z","lastTransitionTime":"2026-01-23T16:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.165392 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.165446 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.165463 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.165487 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.165504 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:31Z","lastTransitionTime":"2026-01-23T16:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.268509 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.268538 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.268547 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.268560 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.268569 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:31Z","lastTransitionTime":"2026-01-23T16:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.357070 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 17:58:15.53589508 +0000 UTC Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.365519 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.365548 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:53:31 crc kubenswrapper[4606]: E0123 16:53:31.365734 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:53:31 crc kubenswrapper[4606]: E0123 16:53:31.365828 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.365551 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:53:31 crc kubenswrapper[4606]: E0123 16:53:31.365952 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.371520 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.371631 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.371657 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.371686 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.371712 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:31Z","lastTransitionTime":"2026-01-23T16:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.473743 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.473809 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.473833 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.473869 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.473891 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:31Z","lastTransitionTime":"2026-01-23T16:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.576480 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.576523 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.576536 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.576551 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.576562 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:31Z","lastTransitionTime":"2026-01-23T16:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.678540 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.678609 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.678621 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.678654 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.678665 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:31Z","lastTransitionTime":"2026-01-23T16:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.782068 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.782146 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.782158 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.782173 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.782184 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:31Z","lastTransitionTime":"2026-01-23T16:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.885126 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.885162 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.885172 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.885189 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.885202 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:31Z","lastTransitionTime":"2026-01-23T16:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.987201 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.987236 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.987244 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.987258 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:31 crc kubenswrapper[4606]: I0123 16:53:31.987268 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:31Z","lastTransitionTime":"2026-01-23T16:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.090239 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.090337 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.090366 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.090401 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.090438 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:32Z","lastTransitionTime":"2026-01-23T16:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.121320 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bae240c5-8926-4fc6-b0d9-fdf64bacecfe-metrics-certs\") pod \"network-metrics-daemon-qqwhl\" (UID: \"bae240c5-8926-4fc6-b0d9-fdf64bacecfe\") " pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:53:32 crc kubenswrapper[4606]: E0123 16:53:32.121533 4606 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 16:53:32 crc kubenswrapper[4606]: E0123 16:53:32.121821 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bae240c5-8926-4fc6-b0d9-fdf64bacecfe-metrics-certs podName:bae240c5-8926-4fc6-b0d9-fdf64bacecfe nodeName:}" failed. No retries permitted until 2026-01-23 16:53:40.121780392 +0000 UTC m=+54.121972838 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bae240c5-8926-4fc6-b0d9-fdf64bacecfe-metrics-certs") pod "network-metrics-daemon-qqwhl" (UID: "bae240c5-8926-4fc6-b0d9-fdf64bacecfe") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.193769 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.193819 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.193831 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.193860 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.193871 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:32Z","lastTransitionTime":"2026-01-23T16:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.296293 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.296375 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.296411 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.296444 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.296467 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:32Z","lastTransitionTime":"2026-01-23T16:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.357207 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 09:42:50.181082007 +0000 UTC Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.365796 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:53:32 crc kubenswrapper[4606]: E0123 16:53:32.366082 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe" Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.399677 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.399709 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.399718 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.399732 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.399741 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:32Z","lastTransitionTime":"2026-01-23T16:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.502655 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.502710 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.502729 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.502748 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.502791 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:32Z","lastTransitionTime":"2026-01-23T16:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.605489 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.605544 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.605554 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.605572 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.605584 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:32Z","lastTransitionTime":"2026-01-23T16:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.708833 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.708885 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.708902 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.708923 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.708940 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:32Z","lastTransitionTime":"2026-01-23T16:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.811326 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.811370 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.811386 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.811407 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.811423 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:32Z","lastTransitionTime":"2026-01-23T16:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.913732 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.913786 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.913795 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.913808 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:32 crc kubenswrapper[4606]: I0123 16:53:32.913817 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:32Z","lastTransitionTime":"2026-01-23T16:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.016071 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.016119 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.016130 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.016148 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.016157 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:33Z","lastTransitionTime":"2026-01-23T16:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.119284 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.119352 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.119371 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.119396 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.119416 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:33Z","lastTransitionTime":"2026-01-23T16:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.222020 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.222054 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.222066 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.222082 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.222092 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:33Z","lastTransitionTime":"2026-01-23T16:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.327160 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.327221 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.327243 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.327272 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.327295 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:33Z","lastTransitionTime":"2026-01-23T16:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.358483 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 03:37:02.730423362 +0000 UTC Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.365692 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:53:33 crc kubenswrapper[4606]: E0123 16:53:33.365830 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.365708 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:53:33 crc kubenswrapper[4606]: E0123 16:53:33.365930 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.365701 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:53:33 crc kubenswrapper[4606]: E0123 16:53:33.365991 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.429681 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.429723 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.429735 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.429750 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.429761 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:33Z","lastTransitionTime":"2026-01-23T16:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.533170 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.533280 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.533302 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.533337 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.533356 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:33Z","lastTransitionTime":"2026-01-23T16:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.636691 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.636747 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.636765 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.636788 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.636806 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:33Z","lastTransitionTime":"2026-01-23T16:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.739124 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.739222 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.739239 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.739265 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.739287 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:33Z","lastTransitionTime":"2026-01-23T16:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.841162 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.841199 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.841208 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.841225 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.841238 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:33Z","lastTransitionTime":"2026-01-23T16:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.910136 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.910198 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.910207 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.910222 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.910231 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:33Z","lastTransitionTime":"2026-01-23T16:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:33 crc kubenswrapper[4606]: E0123 16:53:33.923366 4606 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a4ca771-bbec-40fb-95a6-5cc937c9c7f8\\\",\\\"systemUUID\\\":\\\"843df5bf-5299-4274-8c79-9400b122a934\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:33Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.927339 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.927400 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.927421 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.927450 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.927474 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:33Z","lastTransitionTime":"2026-01-23T16:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:33 crc kubenswrapper[4606]: E0123 16:53:33.946001 4606 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a4ca771-bbec-40fb-95a6-5cc937c9c7f8\\\",\\\"systemUUID\\\":\\\"843df5bf-5299-4274-8c79-9400b122a934\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:33Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.954909 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.954958 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
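[editor's annotation] The patch failures themselves are not a CNI problem: every retry dies because the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 serves a certificate that expired on 2025-08-24, long before the log's clock of 2026-01-23. A small Go sketch that inspects the certificate such an endpoint actually serves; the address comes from the error text, and verification is deliberately skipped so an expired certificate can still be read.

    // certcheck.go - dials the webhook endpoint named in the error above
    // and prints the validity window of the certificate it presents.
    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        addr := "127.0.0.1:9743" // webhook endpoint from the log
        // InsecureSkipVerify lets the handshake complete even though the
        // certificate is expired, so we can read its dates.
        conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        defer conn.Close()
        cert := conn.ConnectionState().PeerCertificates[0]
        fmt.Printf("serving cert NotBefore=%s NotAfter=%s\n", cert.NotBefore, cert.NotAfter)
        if time.Now().After(cert.NotAfter) {
            fmt.Println("certificate has expired, matching the x509 error in the log")
        }
    }

Run against a live endpoint this would print a NotAfter of 2025-08-24T17:21:41Z, consistent with the five consecutive retry failures recorded here.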
event="NodeHasNoDiskPressure" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.954970 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.954988 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.955000 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:33Z","lastTransitionTime":"2026-01-23T16:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:33 crc kubenswrapper[4606]: E0123 16:53:33.968498 4606 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a4ca771-bbec-40fb-95a6-5cc937c9c7f8\\\",\\\"systemUUID\\\":\\\"843df5bf-5299-4274-8c79-9400b122a934\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:33Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.972665 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.972704 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
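[editor's annotation] Each "Node became not ready" entry prints the node's Ready condition as inline JSON. A minimal Go sketch that decodes one of these condition objects; the struct fields mirror the keys visible in the log lines, and the struct itself is illustrative rather than kubelet source.

    // condition.go - decodes a Ready condition as printed by setters.go above.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    // NodeCondition mirrors the JSON keys shown in the log entries.
    type NodeCondition struct {
        Type               string `json:"type"`
        Status             string `json:"status"`
        LastHeartbeatTime  string `json:"lastHeartbeatTime"`
        LastTransitionTime string `json:"lastTransitionTime"`
        Reason             string `json:"reason"`
        Message            string `json:"message"`
    }

    func main() {
        // condition JSON copied verbatim from the log
        raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:33Z","lastTransitionTime":"2026-01-23T16:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
        var c NodeCondition
        if err := json.Unmarshal([]byte(raw), &c); err != nil {
            panic(err)
        }
        fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
    }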
event="NodeHasNoDiskPressure" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.972712 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.972726 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.972735 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:33Z","lastTransitionTime":"2026-01-23T16:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:33 crc kubenswrapper[4606]: E0123 16:53:33.983667 4606 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a4ca771-bbec-40fb-95a6-5cc937c9c7f8\\\",\\\"systemUUID\\\":\\\"843df5bf-5299-4274-8c79-9400b122a934\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:33Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.986883 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.986919 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.986929 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.986941 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.986949 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:33Z","lastTransitionTime":"2026-01-23T16:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:33 crc kubenswrapper[4606]: E0123 16:53:33.998294 4606 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a4ca771-bbec-40fb-95a6-5cc937c9c7f8\\\",\\\"systemUUID\\\":\\\"843df5bf-5299-4274-8c79-9400b122a934\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:33Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:33 crc kubenswrapper[4606]: E0123 16:53:33.998417 4606 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 16:53:33 crc kubenswrapper[4606]: I0123 16:53:33.999939 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
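[annotation] The node-status failures above share one root cause: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 serves a certificate that expired on 2025-08-24T17:21:41Z, long before the node clock of 2026-01-23, so every patch attempt dies in the TLS handshake until the kubelet gives up with "update node status exceeds retry count". A minimal Go sketch for confirming the expiry from the node follows; the address comes from the log, while the program itself is illustrative and not part of the kubelet or the webhook.

// certcheck.go - minimal sketch: fetch the webhook's serving certificate
// and compare its validity window against the local clock, mirroring the
// "x509: certificate has expired" failures recorded above.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Endpoint taken from the log. InsecureSkipVerify is deliberate here:
	// the goal is to inspect the expired certificate, not to trust it.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	now := time.Now()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%s notBefore=%s notAfter=%s expired=%v\n",
			cert.Subject,
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339),
			now.After(cert.NotAfter))
	}
}

[end annotation]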
event="NodeHasSufficientMemory" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.000068 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.000149 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.000217 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.000283 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:34Z","lastTransitionTime":"2026-01-23T16:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.102239 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.102289 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.102300 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.102317 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.102328 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:34Z","lastTransitionTime":"2026-01-23T16:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.204729 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.204772 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.204784 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.204799 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.204809 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:34Z","lastTransitionTime":"2026-01-23T16:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.307458 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.307514 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.307525 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.307540 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.307550 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:34Z","lastTransitionTime":"2026-01-23T16:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.359106 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 23:01:56.613258302 +0000 UTC Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.365680 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:53:34 crc kubenswrapper[4606]: E0123 16:53:34.365850 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.410254 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.410306 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.410318 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.410335 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.410347 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:34Z","lastTransitionTime":"2026-01-23T16:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.513090 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.513160 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.513183 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.513213 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.513241 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:34Z","lastTransitionTime":"2026-01-23T16:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.616775 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.616834 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.616850 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.616873 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.616890 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:34Z","lastTransitionTime":"2026-01-23T16:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.719658 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.719723 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.719746 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.719774 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.719797 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:34Z","lastTransitionTime":"2026-01-23T16:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.822389 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.822431 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.822444 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.822459 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.822470 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:34Z","lastTransitionTime":"2026-01-23T16:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.924985 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.925048 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.925060 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.925078 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:34 crc kubenswrapper[4606]: I0123 16:53:34.925088 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:34Z","lastTransitionTime":"2026-01-23T16:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.027323 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.027640 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.027768 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.027862 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.027955 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:35Z","lastTransitionTime":"2026-01-23T16:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.130876 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.130922 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.130931 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.130948 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.130961 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:35Z","lastTransitionTime":"2026-01-23T16:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.234255 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.234702 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.234827 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.234924 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.235013 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:35Z","lastTransitionTime":"2026-01-23T16:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.338819 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.338871 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.338883 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.338902 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.338918 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:35Z","lastTransitionTime":"2026-01-23T16:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.359853 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 04:16:11.777325015 +0000 UTC Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.365243 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:53:35 crc kubenswrapper[4606]: E0123 16:53:35.365357 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.365246 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.365541 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:53:35 crc kubenswrapper[4606]: E0123 16:53:35.365582 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:53:35 crc kubenswrapper[4606]: E0123 16:53:35.365885 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.441250 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.441296 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.441309 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.441327 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.441344 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:35Z","lastTransitionTime":"2026-01-23T16:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.543532 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.543709 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.543726 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.543742 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.543753 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:35Z","lastTransitionTime":"2026-01-23T16:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.652038 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.652139 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.652152 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.652177 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.652192 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:35Z","lastTransitionTime":"2026-01-23T16:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.755077 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.755128 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.755142 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.755164 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.755178 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:35Z","lastTransitionTime":"2026-01-23T16:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.857767 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.857802 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.857811 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.857825 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.857836 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:35Z","lastTransitionTime":"2026-01-23T16:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.960450 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.960715 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.960746 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.960761 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:35 crc kubenswrapper[4606]: I0123 16:53:35.960770 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:35Z","lastTransitionTime":"2026-01-23T16:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.063398 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.063450 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.063463 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.063480 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.063492 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:36Z","lastTransitionTime":"2026-01-23T16:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.165648 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.165699 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.165709 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.165724 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.165733 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:36Z","lastTransitionTime":"2026-01-23T16:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.268184 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.268283 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.268291 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.268317 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.268332 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:36Z","lastTransitionTime":"2026-01-23T16:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.360326 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 14:59:08.825130801 +0000 UTC Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.367645 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:53:36 crc kubenswrapper[4606]: E0123 16:53:36.367791 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
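[annotation] Every "NodeNotReady" heartbeat and "Error syncing pod" entry above carries the same message: the kubelet finds no CNI configuration in /etc/kubernetes/cni/net.d/, so the runtime reports NetworkReady=false until the network provider (here OVN-Kubernetes, whose own pods are failing on the expired webhook certificate) writes one. A small Go sketch that reproduces the emptiness check follows; the directory path is taken from the log message, and the extension filter approximates libcni's discovery rules rather than quoting them.

// cnicheck.go - minimal sketch: list candidate CNI network configs in the
// directory the kubelet complains about. An empty result corresponds to the
// "no CNI configuration file in /etc/kubernetes/cni/net.d/" condition above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // path taken from the log message
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	var found []string
	for _, e := range entries {
		// Extensions approximating what libcni scans for (assumption).
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			found = append(found, e.Name())
		}
	}
	if len(found) == 0 {
		fmt.Println("no CNI configuration files found; network plugin not ready")
		return
	}
	fmt.Println("CNI configs:", found)
}

[end annotation]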
pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.369766 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.369796 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.369805 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.369818 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.369828 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:36Z","lastTransitionTime":"2026-01-23T16:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.380511 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:36Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.392087 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:36Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.400441 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:36Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.409815 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb934e2-af1b-473c-ae3a-061098141a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76dd35929bfd277c644748bddc174d64f5fda7a08c0eb6b3c879c62d7d58ef39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55898c0677b92ca8dcd4e764ab3cf1c4b605a057574f47ddf2820773938458c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:
22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nntkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:36Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.424401 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:36Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.437810 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01472aab8c10837cc0e6f6952de409f201741170c85dd371b3148908a2b2dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:36Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.447808 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wc86k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa611a0a-e12f-403c-8cd6-666972207315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2cf4579ce544ab371e72864ef70d710c1323fe6ec7707b79d6aec19ee21eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xlsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wc86k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:36Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.457807 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846abd3645d34fa7f1d6227fdd26fd48a7df9a6cbc57b85790b9e7e444041b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:36Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.466896 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:36Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.471671 4606 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.471714 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.471724 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.471737 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.471746 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:36Z","lastTransitionTime":"2026-01-23T16:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.479551 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:36Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.492640 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b
2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:36Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.506565 4606 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06af028d-1458-4d49-8cd4-dccbc84a469c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdd354c5bc11778b8c407aa7c9f996ee4c431839d6823dcdad92b8855d2a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c64d435f1b147fd7d914909988065557c2bd3309d0565391fe788a67f2d0de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7894a11c4c985955
9472fbae0ec0fb74afbc11e3580e50c3d5c1465f7bb3177b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:36Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.527068 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:36Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.539066 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:36Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.556168 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88fcfccc6c7e01177a9f638bfe9387bfaeed50de
2ba540b29910bf3d23046e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88fcfccc6c7e01177a9f638bfe9387bfaeed50de2ba540b29910bf3d23046e51\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"message\\\":\\\"is/informers/externalversions/factory.go:140\\\\nI0123 16:53:23.868514 5936 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 16:53:23.868561 5936 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:53:23.870618 5936 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 16:53:23.868972 5936 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:53:23.871310 5936 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 16:53:23.869037 5936 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:53:23.869059 5936 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:53:23.872058 5936 factory.go:656] Stopping watch factory\\\\nI0123 16:53:23.872068 5936 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-j9tqd_openshift-ovn-kubernetes(a0f37a36-f502-425c-a9e5-035bf475742b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:36Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.565552 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qqwhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae240c5-8926-4fc6-b0d9-fdf64bacecfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcnkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcnkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qqwhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:36Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.574835 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.574900 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.574911 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.574925 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.574937 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:36Z","lastTransitionTime":"2026-01-23T16:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.677616 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.677971 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.678127 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.678259 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.678370 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:36Z","lastTransitionTime":"2026-01-23T16:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.782273 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.782314 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.782326 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.782343 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.782353 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:36Z","lastTransitionTime":"2026-01-23T16:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.884776 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.884813 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.884824 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.884840 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.884851 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:36Z","lastTransitionTime":"2026-01-23T16:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.987424 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.987473 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.987489 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.987512 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:36 crc kubenswrapper[4606]: I0123 16:53:36.987528 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:36Z","lastTransitionTime":"2026-01-23T16:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.118457 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.118527 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.118545 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.118568 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.118623 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:37Z","lastTransitionTime":"2026-01-23T16:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.221239 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.222293 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.222510 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.222780 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.222984 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:37Z","lastTransitionTime":"2026-01-23T16:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.325490 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.325654 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.325681 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.325710 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.325730 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:37Z","lastTransitionTime":"2026-01-23T16:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.360870 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 03:53:19.42529451 +0000 UTC Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.365417 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.365424 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:53:37 crc kubenswrapper[4606]: E0123 16:53:37.365670 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.365435 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:53:37 crc kubenswrapper[4606]: E0123 16:53:37.365844 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:53:37 crc kubenswrapper[4606]: E0123 16:53:37.365884 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.428368 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.428493 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.428552 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.428581 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.428644 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:37Z","lastTransitionTime":"2026-01-23T16:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.532092 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.532173 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.532191 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.532214 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.532262 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:37Z","lastTransitionTime":"2026-01-23T16:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.635359 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.635782 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.635957 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.636162 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.636308 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:37Z","lastTransitionTime":"2026-01-23T16:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.738626 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.738672 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.738696 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.738716 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.738729 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:37Z","lastTransitionTime":"2026-01-23T16:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.840465 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.840509 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.840551 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.840575 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.840638 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:37Z","lastTransitionTime":"2026-01-23T16:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.942797 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.943046 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.943194 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.943356 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:37 crc kubenswrapper[4606]: I0123 16:53:37.943499 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:37Z","lastTransitionTime":"2026-01-23T16:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.046441 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.046510 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.046532 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.046560 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.046581 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:38Z","lastTransitionTime":"2026-01-23T16:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.148833 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.148874 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.148883 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.148895 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.148903 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:38Z","lastTransitionTime":"2026-01-23T16:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.251474 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.251540 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.251561 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.251622 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.251646 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:38Z","lastTransitionTime":"2026-01-23T16:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.354071 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.354370 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.354522 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.354734 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.354977 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:38Z","lastTransitionTime":"2026-01-23T16:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.362381 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 06:09:50.45873356 +0000 UTC Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.364711 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:53:38 crc kubenswrapper[4606]: E0123 16:53:38.364868 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.457627 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.457660 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.457672 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.457687 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.457698 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:38Z","lastTransitionTime":"2026-01-23T16:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.542927 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.551407 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.558629 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb934e2-af1b-473c-ae3a-061098141a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76dd35929bfd277c644748bddc174d64f5fda7a08c0eb6b3c879c62d7d58ef39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55898c0677b92ca8dcd4e764ab3cf1c4b605a057574f47ddf2820773938458c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nntkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:38Z is after 2025-08-24T17:21:41Z" Jan 23 
16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.561305 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.561346 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.561359 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.561379 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.561400 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:38Z","lastTransitionTime":"2026-01-23T16:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.575300 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:38Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 
16:53:38.593330 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01472aab8c10837cc0e6f6952de409f201741170c85dd371b3148908a2b2dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfb
b085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\
\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:38Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.606060 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wc86k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa611a0a-e12f-403c-8cd6-666972207315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2cf4579ce544ab371e72864ef70d710c1323fe6ec7707b79d6aec19ee21eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xlsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wc86k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:38Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.623384 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:38Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.639226 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06af028d-1458-4d49-8cd4-dccbc84a469c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdd354c5bc11778b8c407aa7c9f996ee4c431839d6823dcdad92b8855d2a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c64d435f1b147fd7d914909988065557c2bd3309d0565391fe788a67f2d0de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7894a11c4c9859559472fbae0ec0fb74afbc11e3580e50c3d5c1465f7bb3177b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:38Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.655321 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:38Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.663889 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.663915 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.663924 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.663936 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.663947 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:38Z","lastTransitionTime":"2026-01-23T16:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.673313 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:38Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.689853 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846abd3645d34fa7f1d6227fdd26fd48a7df9a6cbc57b85790b9e7e444041b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:38Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.707434 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:38Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.724454 4606 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:38Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.754444 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88fcfccc6c7e01177a9f638bfe9387bfaeed50de2ba540b29910bf3d23046e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88fcfccc6c7e01177a9f638bfe9387bfaeed50de2ba540b29910bf3d23046e51\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"message\\\":\\\"is/informers/externalversions/factory.go:140\\\\nI0123 16:53:23.868514 5936 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 16:53:23.868561 5936 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:53:23.870618 5936 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 16:53:23.868972 5936 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:53:23.871310 5936 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 16:53:23.869037 5936 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:53:23.869059 5936 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:53:23.872058 5936 factory.go:656] Stopping watch factory\\\\nI0123 16:53:23.872068 5936 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-j9tqd_openshift-ovn-kubernetes(a0f37a36-f502-425c-a9e5-035bf475742b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:38Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.766960 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.767258 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.767364 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.767468 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.768215 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:38Z","lastTransitionTime":"2026-01-23T16:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.771044 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qqwhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae240c5-8926-4fc6-b0d9-fdf64bacecfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcnkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcnkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qqwhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:38Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.785732 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:38Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.797621 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:38Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.810087 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:38Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.871185 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.871226 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.871237 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.871252 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.871264 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:38Z","lastTransitionTime":"2026-01-23T16:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.974279 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.974610 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.974724 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.974824 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:38 crc kubenswrapper[4606]: I0123 16:53:38.974915 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:38Z","lastTransitionTime":"2026-01-23T16:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.077512 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.077546 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.077557 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.077571 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.077608 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:39Z","lastTransitionTime":"2026-01-23T16:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.180654 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.180704 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.180718 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.180737 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.180749 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:39Z","lastTransitionTime":"2026-01-23T16:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.283668 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.283731 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.283744 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.283762 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.283774 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:39Z","lastTransitionTime":"2026-01-23T16:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.335313 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.335433 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.335456 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.335484 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.335519 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 16:53:39 crc kubenswrapper[4606]: E0123 16:53:39.335633 4606 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 23 16:53:39 crc kubenswrapper[4606]: E0123 16:53:39.335680 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 16:54:11.335667161 +0000 UTC m=+85.335859567 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 23 16:53:39 crc kubenswrapper[4606]: E0123 16:53:39.335850 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:54:11.335841904 +0000 UTC m=+85.336034310 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:53:39 crc kubenswrapper[4606]: E0123 16:53:39.335918 4606 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 23 16:53:39 crc kubenswrapper[4606]: E0123 16:53:39.335947 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 16:54:11.335939476 +0000 UTC m=+85.336131882 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 23 16:53:39 crc kubenswrapper[4606]: E0123 16:53:39.336187 4606 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 23 16:53:39 crc kubenswrapper[4606]: E0123 16:53:39.336201 4606 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 23 16:53:39 crc kubenswrapper[4606]: E0123 16:53:39.336211 4606 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 16:53:39 crc kubenswrapper[4606]: E0123 16:53:39.336235 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 16:54:11.336228392 +0000 UTC m=+85.336420798 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 16:53:39 crc kubenswrapper[4606]: E0123 16:53:39.336748 4606 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 23 16:53:39 crc kubenswrapper[4606]: E0123 16:53:39.336762 4606 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 23 16:53:39 crc kubenswrapper[4606]: E0123 16:53:39.336771 4606 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 16:53:39 crc kubenswrapper[4606]: E0123 16:53:39.336813 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 16:54:11.336803294 +0000 UTC m=+85.336995700 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.363063 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 08:24:00.934956655 +0000 UTC
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.365384 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.365403 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 16:53:39 crc kubenswrapper[4606]: E0123 16:53:39.365513 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 16:53:39 crc kubenswrapper[4606]: E0123 16:53:39.365646 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.365403 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 16:53:39 crc kubenswrapper[4606]: E0123 16:53:39.365734 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.386303 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.386346 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.386357 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.386377 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.386388 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:39Z","lastTransitionTime":"2026-01-23T16:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.490117 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.490172 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.490184 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.490209 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.490226 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:39Z","lastTransitionTime":"2026-01-23T16:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.593653 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.593942 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.594005 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.594088 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.594148 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:39Z","lastTransitionTime":"2026-01-23T16:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.698141 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.698518 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.698803 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.699115 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.699317 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:39Z","lastTransitionTime":"2026-01-23T16:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.803187 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.803229 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.803240 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.803256 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.803267 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:39Z","lastTransitionTime":"2026-01-23T16:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.906051 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.906352 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.906421 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.906508 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:39 crc kubenswrapper[4606]: I0123 16:53:39.906576 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:39Z","lastTransitionTime":"2026-01-23T16:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.010513 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.010926 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.011131 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.011366 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.011572 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:40Z","lastTransitionTime":"2026-01-23T16:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.114617 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.114673 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.114689 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.114707 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.114721 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:40Z","lastTransitionTime":"2026-01-23T16:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.144088 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bae240c5-8926-4fc6-b0d9-fdf64bacecfe-metrics-certs\") pod \"network-metrics-daemon-qqwhl\" (UID: \"bae240c5-8926-4fc6-b0d9-fdf64bacecfe\") " pod="openshift-multus/network-metrics-daemon-qqwhl"
Jan 23 16:53:40 crc kubenswrapper[4606]: E0123 16:53:40.144260 4606 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 23 16:53:40 crc kubenswrapper[4606]: E0123 16:53:40.144356 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bae240c5-8926-4fc6-b0d9-fdf64bacecfe-metrics-certs podName:bae240c5-8926-4fc6-b0d9-fdf64bacecfe nodeName:}" failed. No retries permitted until 2026-01-23 16:53:56.144329776 +0000 UTC m=+70.144522232 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bae240c5-8926-4fc6-b0d9-fdf64bacecfe-metrics-certs") pod "network-metrics-daemon-qqwhl" (UID: "bae240c5-8926-4fc6-b0d9-fdf64bacecfe") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.217486 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.217543 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.217557 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.217603 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.217621 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:40Z","lastTransitionTime":"2026-01-23T16:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.320871 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.320921 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.320933 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.320953 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.320969 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:40Z","lastTransitionTime":"2026-01-23T16:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.363605 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 08:35:19.334575731 +0000 UTC
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.364994 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl"
Jan 23 16:53:40 crc kubenswrapper[4606]: E0123 16:53:40.365175 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.365935 4606 scope.go:117] "RemoveContainer" containerID="88fcfccc6c7e01177a9f638bfe9387bfaeed50de2ba540b29910bf3d23046e51"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.423526 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.423651 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.423679 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.423710 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.423735 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:40Z","lastTransitionTime":"2026-01-23T16:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.531682 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.531731 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.531741 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.531759 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.531770 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:40Z","lastTransitionTime":"2026-01-23T16:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.633782 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.633821 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.633830 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.633842 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.633851 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:40Z","lastTransitionTime":"2026-01-23T16:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.735766 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.735797 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.735806 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.735820 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.735831 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:40Z","lastTransitionTime":"2026-01-23T16:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.813702 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j9tqd_a0f37a36-f502-425c-a9e5-035bf475742b/ovnkube-controller/1.log"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.815835 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" event={"ID":"a0f37a36-f502-425c-a9e5-035bf475742b","Type":"ContainerStarted","Data":"5f5458ba33d5254918513e4394f86cbedb2092b552949a066a59b263cb92bb79"}
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.816488 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.838177 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.838387 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.838507 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.838624 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.838729 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:40Z","lastTransitionTime":"2026-01-23T16:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.839047 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:40Z is after 2025-08-24T17:21:41Z"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.852510 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:40Z is after 2025-08-24T17:21:41Z"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.864109 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:40Z is after 2025-08-24T17:21:41Z"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.876057 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f6002ad-cfb9-4ace-a3af-69d16e0a930d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcd9f0d6640f01bb9dd01eb0ae616c15475af5f5e4a8bf8d83f32950d92b5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://960decb6796863d62d6e8bbedddee45f9a1480ce1f7dc9cd17dd07a57837cae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d26f11973f8bbb904852bca424c7bcb04d9854f66290276e05ceed79aae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea981139b54887528cc0d397f970a88a6107e5176e9df8cb8a2f2137738f8057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea981139b54887528cc0d397f970a88a6107e5176e9df8cb8a2f2137738f8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:40Z is after 2025-08-24T17:21:41Z"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.887978 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb934e2-af1b-473c-ae3a-061098141a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76dd35929bfd277c644748bddc174d64f5fda7a08c0eb6b3c879c62d7d58ef39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55898c0677b92ca8dcd4e764ab3cf1c4b605a057574f47ddf2820773938458c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nntkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:40Z is after 2025-08-24T17:21:41Z"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.901833 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:40Z is after 2025-08-24T17:21:41Z"
Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.918545 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01472aab8c10837cc0e6f6952de409f201741170c85dd371b3148908a2b2dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:53:40Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.930082 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wc86k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa611a0a-e12f-403c-8cd6-666972207315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2cf4579ce544ab371e72864ef70d710c1323fe6ec7707b79d6aec19ee21eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xlsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wc86k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:40Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.941006 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.941049 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.941061 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.941076 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 
16:53:40.941086 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:40Z","lastTransitionTime":"2026-01-23T16:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.943918 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\
\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:40Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.961319 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:40Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.975288 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06af028d-1458-4d49-8cd4-dccbc84a469c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdd354c5bc11778b8c407aa7c9f996ee4c431839d6823dcdad92b8855d2a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c64d435f1b147fd7d914909988065557c2bd3309d0565391fe788a67f2d0de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7894a11c4c9859559472fbae0ec0fb74afbc11e3580e50c3d5c1465f7bb3177b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:40Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:40 crc kubenswrapper[4606]: I0123 16:53:40.989205 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:40Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.002455 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:41Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.013465 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846abd3645d34fa7f1d6227fdd26fd48a7df9a6cbc57b85790b9e7e444041b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:41Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.024937 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:41Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.042888 4606 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f5458ba33d5254918513e4394f86cbedb2092b552949a066a59b263cb92bb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88fcfccc6c7e01177a9f638bfe9387bfaeed50de2ba540b29910bf3d23046e51\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"message\\\":\\\"is/informers/externalversions/factory.go:140\\\\nI0123 16:53:23.868514 5936 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 16:53:23.868561 5936 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:53:23.870618 5936 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 16:53:23.868972 5936 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:53:23.871310 5936 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 16:53:23.869037 5936 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:53:23.869059 5936 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:53:23.872058 5936 factory.go:656] Stopping watch factory\\\\nI0123 16:53:23.872068 5936 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:41Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.043447 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.043493 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.043503 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.043520 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.043532 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:41Z","lastTransitionTime":"2026-01-23T16:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.056259 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qqwhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae240c5-8926-4fc6-b0d9-fdf64bacecfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcnkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcnkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qqwhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:41Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.145972 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.146262 4606 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.146337 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.146424 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.146510 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:41Z","lastTransitionTime":"2026-01-23T16:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.250112 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.250182 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.250198 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.250223 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.250240 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:41Z","lastTransitionTime":"2026-01-23T16:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.353688 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.355120 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.355322 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.355552 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.355845 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:41Z","lastTransitionTime":"2026-01-23T16:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.364322 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 03:16:17.47571337 +0000 UTC Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.364898 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.364915 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.364933 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:53:41 crc kubenswrapper[4606]: E0123 16:53:41.365042 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:53:41 crc kubenswrapper[4606]: E0123 16:53:41.365232 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:53:41 crc kubenswrapper[4606]: E0123 16:53:41.365334 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.458038 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.458077 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.458130 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.458151 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.458161 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:41Z","lastTransitionTime":"2026-01-23T16:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.561029 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.561080 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.561092 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.561108 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.561153 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:41Z","lastTransitionTime":"2026-01-23T16:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.664499 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.664550 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.664564 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.664610 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.664622 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:41Z","lastTransitionTime":"2026-01-23T16:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.766772 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.766810 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.766821 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.766840 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.766855 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:41Z","lastTransitionTime":"2026-01-23T16:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.821788 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j9tqd_a0f37a36-f502-425c-a9e5-035bf475742b/ovnkube-controller/2.log" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.822361 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j9tqd_a0f37a36-f502-425c-a9e5-035bf475742b/ovnkube-controller/1.log" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.825679 4606 generic.go:334] "Generic (PLEG): container finished" podID="a0f37a36-f502-425c-a9e5-035bf475742b" containerID="5f5458ba33d5254918513e4394f86cbedb2092b552949a066a59b263cb92bb79" exitCode=1 Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.825715 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" event={"ID":"a0f37a36-f502-425c-a9e5-035bf475742b","Type":"ContainerDied","Data":"5f5458ba33d5254918513e4394f86cbedb2092b552949a066a59b263cb92bb79"} Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.825747 4606 scope.go:117] "RemoveContainer" containerID="88fcfccc6c7e01177a9f638bfe9387bfaeed50de2ba540b29910bf3d23046e51" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.826352 4606 scope.go:117] "RemoveContainer" containerID="5f5458ba33d5254918513e4394f86cbedb2092b552949a066a59b263cb92bb79" Jan 23 16:53:41 crc kubenswrapper[4606]: E0123 16:53:41.826481 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-j9tqd_openshift-ovn-kubernetes(a0f37a36-f502-425c-a9e5-035bf475742b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.841933 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:41Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.857446 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:41Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.869060 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.869108 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.869117 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.869132 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.869144 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:41Z","lastTransitionTime":"2026-01-23T16:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
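The ovnkube-controller failure above ends in "back-off 20s restarting failed container", which is kubelet's crash-loop backoff. By default kubelet starts this backoff at 10s and doubles it on each failed restart, capped at 5 minutes, so 20s corresponds to the second attempt. The sketch below assumes those defaults and is not kubelet's actual implementation:

    package main

    import (
    	"fmt"
    	"time"
    )

    // backoff returns the assumed kubelet-style delay before restart attempt
    // n (0-based): 10s base, doubling per prior failure, capped at 5 minutes.
    func backoff(n int) time.Duration {
    	d := 10 * time.Second
    	for i := 0; i < n; i++ {
    		d *= 2
    		if d >= 5*time.Minute {
    			return 5 * time.Minute
    		}
    	}
    	return d
    }

    func main() {
    	for n := 0; n <= 6; n++ {
    		fmt.Printf("attempt %d -> back-off %s\n", n, backoff(n))
    	}
    	// attempt 1 -> back-off 20s matches the CrashLoopBackOff entry above.
    }

The backoff only resets after the container has run cleanly for some time, so as long as ovnkube-controller keeps exiting with code 1, the delays grow toward the cap.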
Has your network provider started?"} Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.875612 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846abd3645d34fa7f1d6227fdd26fd48a7df9a6cbc57b85790b9e7e444041b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:41Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.892820 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:41Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.905668 4606 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:41Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.922677 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-api
server-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:41Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.944250 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06af028d-1458-4d49-8cd4-dccbc84a469c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdd354c5bc11778b8c407aa7c9f996ee4c431839d6823dcdad92b8855d2a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c64d435f1b147fd7d914909988065557c2bd3309d0565391fe788a67f2d0de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7894a11c4c9859559472fbae0ec0fb74afbc11e3580e50c3d5c1465f7bb3177b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:41Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.963208 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f5458ba33d5254918513e4394f86cbedb2092b5
52949a066a59b263cb92bb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88fcfccc6c7e01177a9f638bfe9387bfaeed50de2ba540b29910bf3d23046e51\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"message\\\":\\\"is/informers/externalversions/factory.go:140\\\\nI0123 16:53:23.868514 5936 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 16:53:23.868561 5936 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:53:23.870618 5936 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 16:53:23.868972 5936 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:53:23.871310 5936 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 16:53:23.869037 5936 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 16:53:23.869059 5936 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 16:53:23.872058 5936 factory.go:656] Stopping watch factory\\\\nI0123 16:53:23.872068 5936 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f5458ba33d5254918513e4394f86cbedb2092b552949a066a59b263cb92bb79\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:53:41Z\\\",\\\"message\\\":\\\":map[10.217.4.233:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1dc899db-4498-4b7a-8437-861940b962e7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0123 16:53:41.211721 6141 services_controller.go:452] Built service openshift-marketplace/redhat-marketplace per-node LB for network=default: []services.LB{}\\\\nF0123 16:53:41.211731 6141 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2026-01-23T16:53:41Z is after 2025-08-24T17:21:41Z]\\\\nI0123 16:53:41.211740 6141 services_controller.go:453] Built service openshift-marketplace/redhat-marketplace template LB for network=default: []\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:41Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.971512 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.971560 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.971572 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.971608 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.971620 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:41Z","lastTransitionTime":"2026-01-23T16:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.974751 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qqwhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae240c5-8926-4fc6-b0d9-fdf64bacecfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcnkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcnkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qqwhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:41Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:41 crc kubenswrapper[4606]: I0123 16:53:41.987209 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:41Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.001127 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:41Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.012400 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:42Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.023637 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f6002ad-cfb9-4ace-a3af-69d16e0a930d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcd9f0d6640f01bb9dd01eb0ae616c15475af5f5e4a8bf8d83f32950d92b5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://960decb6796863d62d6e8bbedddee45f9a1480ce1f7dc9cd17dd07a57837cae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d26f11973f8bbb904852bca424c7bcb04d9854f66290276e05ceed79aae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea981139b54887528cc0d397f970a88a6107e5176e9df8cb8a2f2137738f8057\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea981139b54887528cc0d397f970a88a6107e5176e9df8cb8a2f2137738f8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:42Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.038376 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb934e2-af1b-473c-ae3a-061098141a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76dd35929bfd277c644748bddc174d64f5fda7a08c0eb6b3c879c62d7d58ef39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55898c0677b92ca8dcd4e764ab3cf1c4b605a057574f47ddf2820773938458c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nntkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:42Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.057357 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:42Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.072394 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01472aab8c10837cc0e6f6952de409f201741170c85dd371b3148908a2b2dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-23T16:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:53:42Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.074307 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.074339 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.074351 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.074367 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.074379 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:42Z","lastTransitionTime":"2026-01-23T16:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.086670 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wc86k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa611a0a-e12f-403c-8cd6-666972207315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2cf4579ce544ab371e72864ef70d710c1323fe6ec7707b79d6aec19ee21eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xlsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wc86k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:42Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.177807 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.177867 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.177884 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.177911 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.177930 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:42Z","lastTransitionTime":"2026-01-23T16:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.280610 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.280659 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.280671 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.280691 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.280705 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:42Z","lastTransitionTime":"2026-01-23T16:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.364722 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 21:04:26.975828601 +0000 UTC Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.364879 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:53:42 crc kubenswrapper[4606]: E0123 16:53:42.365036 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.382790 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.382836 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.382849 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.382869 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.382885 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:42Z","lastTransitionTime":"2026-01-23T16:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.485531 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.485647 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.485673 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.485704 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.485727 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:42Z","lastTransitionTime":"2026-01-23T16:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.588103 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.588163 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.588182 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.588206 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.588223 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:42Z","lastTransitionTime":"2026-01-23T16:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.690773 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.690807 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.690816 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.690832 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.690842 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:42Z","lastTransitionTime":"2026-01-23T16:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.793948 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.793989 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.793998 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.794012 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.794022 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:42Z","lastTransitionTime":"2026-01-23T16:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.830495 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j9tqd_a0f37a36-f502-425c-a9e5-035bf475742b/ovnkube-controller/2.log" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.834672 4606 scope.go:117] "RemoveContainer" containerID="5f5458ba33d5254918513e4394f86cbedb2092b552949a066a59b263cb92bb79" Jan 23 16:53:42 crc kubenswrapper[4606]: E0123 16:53:42.834949 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-j9tqd_openshift-ovn-kubernetes(a0f37a36-f502-425c-a9e5-035bf475742b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.853246 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f6002ad-cfb9-4ace-a3af-69d16e0a930d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcd9f0d6640f01bb9dd01eb0ae616c15475af5f5e4a8bf8d83f32950d92b5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://960decb6796863d62d6e8bbedddee45f9a1480ce1f7dc9cd17dd07a57837cae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d26f11973f8bbb904852bca424c7bcb04d9854f66290276e05ceed79aae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea981139b54887528cc0d397f970a88a6107e5176e9df8cb8a2f2137738f8057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea981139b54887528cc0d397f970a88a6107e5176e9df8cb8a2f2137738f8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:42Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.866281 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb934e2-af1b-473c-ae3a-061098141a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76dd35929bfd277c644748bddc174d64f5fda7a08c0eb6b3c879c62d7d58ef39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55898c0677b92ca8dcd4e764ab3cf1c4b605a057574f47ddf2820773938458c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nntkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:42Z is after 2025-08-24T17:21:41Z" Jan 23 
16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.880534 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:42Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.896392 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.896432 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.896443 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.896459 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.896470 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:42Z","lastTransitionTime":"2026-01-23T16:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.897096 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01472aab8c10837cc0e6f6952de409f201741170c85dd371b3148908a2b2dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:42Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.907636 4606 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-wc86k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa611a0a-e12f-403c-8cd6-666972207315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2cf4579ce544ab371e72864ef70d710c1323fe6ec7707b79d6aec19ee21eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xlsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wc86k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:42Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.925307 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:42Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.940417 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06af028d-1458-4d49-8cd4-dccbc84a469c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdd354c5bc11778b8c407aa7c9f996ee4c431839d6823dcdad92b8855d2a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c64d435f1b147fd7d914909988065557c2bd3309d0565391fe788a67f2d0de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7894a11c4c9859559472fbae0ec0fb74afbc11e3580e50c3d5c1465f7bb3177b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:42Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.952069 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:42Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.966246 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:42Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.976583 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846abd3645d34fa7f1d6227fdd26fd48a7df9a6cbc57b85790b9e7e444041b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:42Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.986854 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:42Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.998507 4606 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.998541 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.998550 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.998563 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.998572 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:42Z","lastTransitionTime":"2026-01-23T16:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:42 crc kubenswrapper[4606]: I0123 16:53:42.999740 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:42Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:43 crc kubenswrapper[4606]: I0123 16:53:43.018083 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f5458ba33d5254918513e4394f86cbedb2092b5
52949a066a59b263cb92bb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f5458ba33d5254918513e4394f86cbedb2092b552949a066a59b263cb92bb79\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:53:41Z\\\",\\\"message\\\":\\\":map[10.217.4.233:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1dc899db-4498-4b7a-8437-861940b962e7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0123 16:53:41.211721 6141 services_controller.go:452] Built service openshift-marketplace/redhat-marketplace per-node LB for network=default: []services.LB{}\\\\nF0123 16:53:41.211731 6141 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:41Z is after 2025-08-24T17:21:41Z]\\\\nI0123 16:53:41.211740 6141 services_controller.go:453] Built service openshift-marketplace/redhat-marketplace template LB for network=default: []\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-j9tqd_openshift-ovn-kubernetes(a0f37a36-f502-425c-a9e5-035bf475742b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:43Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:43 crc kubenswrapper[4606]: I0123 16:53:43.031394 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qqwhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae240c5-8926-4fc6-b0d9-fdf64bacecfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcnkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcnkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qqwhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:43Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:43 crc kubenswrapper[4606]: I0123 16:53:43.044344 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:43Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:43 crc kubenswrapper[4606]: I0123 16:53:43.056375 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:43Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:43 crc kubenswrapper[4606]: I0123 16:53:43.070407 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:43Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:43 crc kubenswrapper[4606]: I0123 16:53:43.100497 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:43 crc kubenswrapper[4606]: I0123 16:53:43.100541 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:43 crc kubenswrapper[4606]: I0123 16:53:43.100552 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:43 crc kubenswrapper[4606]: I0123 16:53:43.100569 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:43 crc kubenswrapper[4606]: I0123 16:53:43.100580 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:43Z","lastTransitionTime":"2026-01-23T16:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:43 crc kubenswrapper[4606]: I0123 16:53:43.203666 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:43 crc kubenswrapper[4606]: I0123 16:53:43.203717 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:43 crc kubenswrapper[4606]: I0123 16:53:43.203729 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:43 crc kubenswrapper[4606]: I0123 16:53:43.203749 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:43 crc kubenswrapper[4606]: I0123 16:53:43.203762 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:43Z","lastTransitionTime":"2026-01-23T16:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:43 crc kubenswrapper[4606]: I0123 16:53:43.364971 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:53:43 crc kubenswrapper[4606]: I0123 16:53:43.365066 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 00:20:03.93393605 +0000 UTC Jan 23 16:53:43 crc kubenswrapper[4606]: I0123 16:53:43.365002 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:53:43 crc kubenswrapper[4606]: I0123 16:53:43.364991 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:53:43 crc kubenswrapper[4606]: E0123 16:53:43.365157 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:53:43 crc kubenswrapper[4606]: E0123 16:53:43.365248 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:53:43 crc kubenswrapper[4606]: E0123 16:53:43.365506 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:53:43 crc kubenswrapper[4606]: I0123 16:53:43.954777 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:43 crc kubenswrapper[4606]: I0123 16:53:43.955155 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:43 crc kubenswrapper[4606]: I0123 16:53:43.955333 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:43 crc kubenswrapper[4606]: I0123 16:53:43.955537 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:43 crc kubenswrapper[4606]: I0123 16:53:43.955835 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:43Z","lastTransitionTime":"2026-01-23T16:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.058067 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.058120 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.058132 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.058148 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.058160 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:44Z","lastTransitionTime":"2026-01-23T16:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.161631 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.161701 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.161720 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.161748 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.161768 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:44Z","lastTransitionTime":"2026-01-23T16:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.264424 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.264452 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.264460 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.264472 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.264481 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:44Z","lastTransitionTime":"2026-01-23T16:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.305184 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.305257 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.305277 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.305300 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.305319 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:44Z","lastTransitionTime":"2026-01-23T16:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:44 crc kubenswrapper[4606]: E0123 16:53:44.321312 4606 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a4ca771-bbec-40fb-95a6-5cc937c9c7f8\\\",\\\"systemUUID\\\":\\\"843df5bf-5299-4274-8c79-9400b122a934\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:44Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.325205 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.325252 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.325262 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.325274 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.325283 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:44Z","lastTransitionTime":"2026-01-23T16:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:44 crc kubenswrapper[4606]: E0123 16:53:44.336486 4606 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a4ca771-bbec-40fb-95a6-5cc937c9c7f8\\\",\\\"systemUUID\\\":\\\"843df5bf-5299-4274-8c79-9400b122a934\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:44Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.340176 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.340203 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.340230 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.340243 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.340252 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:44Z","lastTransitionTime":"2026-01-23T16:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:44 crc kubenswrapper[4606]: E0123 16:53:44.352671 4606 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a4ca771-bbec-40fb-95a6-5cc937c9c7f8\\\",\\\"systemUUID\\\":\\\"843df5bf-5299-4274-8c79-9400b122a934\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:44Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.355830 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.355851 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.355860 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.355876 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.355910 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:44Z","lastTransitionTime":"2026-01-23T16:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.365197 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:53:44 crc kubenswrapper[4606]: E0123 16:53:44.365301 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.365512 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 03:26:06.695654942 +0000 UTC Jan 23 16:53:44 crc kubenswrapper[4606]: E0123 16:53:44.367702 4606 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeByt
es\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a4ca771-bbec-40fb-95a6-5cc937c9c7f8\\\",\\\"systemUUID\\\":\\\"843df5bf-5299-4274-8c79-9400b122a934\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:44Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.372233 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.372258 4606 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.372267 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.372277 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.372285 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:44Z","lastTransitionTime":"2026-01-23T16:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:44 crc kubenswrapper[4606]: E0123 16:53:44.387677 4606 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a4ca771-bbec-40fb-95a6-5cc937c9c7f8\\\",\\\"systemUUID\\\":\\\"843df5bf-5299-4274-8c79-9400b122a934\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:44Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:44 crc kubenswrapper[4606]: E0123 16:53:44.387821 4606 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.389617 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.389645 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.389656 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.389674 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.389685 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:44Z","lastTransitionTime":"2026-01-23T16:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.491446 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.491850 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.491876 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.491895 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.491905 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:44Z","lastTransitionTime":"2026-01-23T16:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.594399 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.594462 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.594473 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.594488 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.594496 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:44Z","lastTransitionTime":"2026-01-23T16:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.697071 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.697153 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.697166 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.697183 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.697196 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:44Z","lastTransitionTime":"2026-01-23T16:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.799262 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.799292 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.799300 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.799315 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.799324 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:44Z","lastTransitionTime":"2026-01-23T16:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.901876 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.901914 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.901922 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.901934 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:44 crc kubenswrapper[4606]: I0123 16:53:44.901943 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:44Z","lastTransitionTime":"2026-01-23T16:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.003900 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.003931 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.003939 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.003978 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.003987 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:45Z","lastTransitionTime":"2026-01-23T16:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.106770 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.106839 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.106854 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.106886 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.106900 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:45Z","lastTransitionTime":"2026-01-23T16:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.209857 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.209906 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.209917 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.209935 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.209948 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:45Z","lastTransitionTime":"2026-01-23T16:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.312520 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.312640 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.312658 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.312679 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.312694 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:45Z","lastTransitionTime":"2026-01-23T16:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.364728 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.364785 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:53:45 crc kubenswrapper[4606]: E0123 16:53:45.364849 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.364906 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:53:45 crc kubenswrapper[4606]: E0123 16:53:45.364916 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:53:45 crc kubenswrapper[4606]: E0123 16:53:45.364945 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.365660 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 09:02:57.361566612 +0000 UTC Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.415876 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.415933 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.415950 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.415976 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.415994 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:45Z","lastTransitionTime":"2026-01-23T16:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.519026 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.519055 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.519063 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.519077 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.519086 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:45Z","lastTransitionTime":"2026-01-23T16:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.621223 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.621266 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.621278 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.621294 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.621304 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:45Z","lastTransitionTime":"2026-01-23T16:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.724116 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.724551 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.724733 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.724869 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.725012 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:45Z","lastTransitionTime":"2026-01-23T16:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.827795 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.827853 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.827861 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.827875 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.827884 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:45Z","lastTransitionTime":"2026-01-23T16:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.930045 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.930089 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.930106 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.930131 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:45 crc kubenswrapper[4606]: I0123 16:53:45.930148 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:45Z","lastTransitionTime":"2026-01-23T16:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.032532 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.032567 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.032576 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.032622 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.032638 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:46Z","lastTransitionTime":"2026-01-23T16:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.134747 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.134808 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.134819 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.134832 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.134840 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:46Z","lastTransitionTime":"2026-01-23T16:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.237890 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.238187 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.238421 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.238740 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.238921 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:46Z","lastTransitionTime":"2026-01-23T16:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.342275 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.342308 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.342318 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.342331 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.342342 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:46Z","lastTransitionTime":"2026-01-23T16:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.364825 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:53:46 crc kubenswrapper[4606]: E0123 16:53:46.364957 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.365960 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 21:42:46.939966231 +0000 UTC Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.387519 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-23T16:53:46Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.403197 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:46Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.425108 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status 
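
Each status_manager.go "Failed to update status for pod" entry embeds its strategic-merge patch as a Go-quoted string, so the JSON carries one extra layer of escaping. A minimal sketch for recovering a readable patch body, using a fragment shortened from the network-node-identity-vrzqb entry above:

package main

import (
	"encoding/json"
	"fmt"
	"strconv"
)

func main() {
	// Shortened excerpt of an escaped patch body as it appears in the
	// status_manager.go entries (one level of Go string quoting was
	// added by klog when the err value was printed).
	escaped := `"{\"metadata\":{\"uid\":\"ef543e1b-8068-4ea3-b32a-61027b32e95d\"}}"`

	// strconv.Unquote strips that quoting layer and yields plain JSON.
	raw, err := strconv.Unquote(escaped)
	if err != nil {
		panic(err)
	}

	var patch map[string]any
	if err := json.Unmarshal([]byte(raw), &patch); err != nil {
		panic(err)
	}
	pretty, _ := json.MarshalIndent(patch, "", "  ")
	fmt.Println(string(pretty))
}
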
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:46Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.445027 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f6002ad-cfb9-4ace-a3af-69d16e0a930d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcd9f0d6640f01bb9dd01eb0ae616c15475af5f5e4a8bf8d83f32950d92b5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://960decb6796863d62d6e8bbedddee45f9a1480ce1f7dc9cd17dd07a57837cae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d26f11973f8bbb904852bca424c7bcb04d9854f66290276e05ceed79aae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea981139b54887528cc0d397f970a88a6107e5176e9df8cb8a2f2137738f8057\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea981139b54887528cc0d397f970a88a6107e5176e9df8cb8a2f2137738f8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:46Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.445724 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.445776 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.445789 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.445805 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.445817 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:46Z","lastTransitionTime":"2026-01-23T16:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.459565 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb934e2-af1b-473c-ae3a-061098141a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76dd35929bfd277c644748bddc174d64f5fda7a08c0eb6b3c879c62d7d58ef39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55898c0677b92ca8dcd4e764ab3cf1c4b605a057574f47ddf2820773938458c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nntkz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:46Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.473636 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:46Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.488384 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01472aab8c10837cc0e6f6952de409f201741170c85dd371b3148908a2b2dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:46Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.503824 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wc86k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa611a0a-e12f-403c-8cd6-666972207315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2cf4579ce544ab371e72864ef70d710c1323fe6ec7707b79d6aec19ee21eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xlsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wc86k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:46Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.524068 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:46Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.545689 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846abd3645d34fa7f1d6227fdd26fd48a7df9a6cbc57b85790b9e7e444041b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:46Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.552796 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.552853 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.552870 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.552893 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.552911 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:46Z","lastTransitionTime":"2026-01-23T16:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.562021 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:46Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.577064 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"
name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:46Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.595474 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:46Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.610034 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06af028d-1458-4d49-8cd4-dccbc84a469c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdd354c5bc11778b8c407aa7c9f996ee4c431839d6823dcdad92b8855d2a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c64d435f1b147fd7d914909988065557c2bd3309d0565391fe788a67f2d0de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7894a11c4c9859559472fbae0ec0fb74afbc11e3580e50c3d5c1465f7bb3177b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:46Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.622746 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:46Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.640962 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f5458ba33d5254918513e4394f86cbedb2092b5
52949a066a59b263cb92bb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f5458ba33d5254918513e4394f86cbedb2092b552949a066a59b263cb92bb79\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:53:41Z\\\",\\\"message\\\":\\\":map[10.217.4.233:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1dc899db-4498-4b7a-8437-861940b962e7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0123 16:53:41.211721 6141 services_controller.go:452] Built service openshift-marketplace/redhat-marketplace per-node LB for network=default: []services.LB{}\\\\nF0123 16:53:41.211731 6141 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:41Z is after 2025-08-24T17:21:41Z]\\\\nI0123 16:53:41.211740 6141 services_controller.go:453] Built service openshift-marketplace/redhat-marketplace template LB for network=default: []\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-j9tqd_openshift-ovn-kubernetes(a0f37a36-f502-425c-a9e5-035bf475742b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:46Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.655600 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qqwhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae240c5-8926-4fc6-b0d9-fdf64bacecfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcnkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcnkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qqwhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:46Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.656138 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.656251 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.656361 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.656482 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.656611 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:46Z","lastTransitionTime":"2026-01-23T16:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.759285 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.759345 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.759359 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.759380 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.759392 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:46Z","lastTransitionTime":"2026-01-23T16:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.862312 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.862732 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.862745 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.862760 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.862771 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:46Z","lastTransitionTime":"2026-01-23T16:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.964545 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.964605 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.964614 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.964628 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:46 crc kubenswrapper[4606]: I0123 16:53:46.964637 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:46Z","lastTransitionTime":"2026-01-23T16:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.067012 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.067049 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.067058 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.067073 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.067082 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:47Z","lastTransitionTime":"2026-01-23T16:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.170473 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.170517 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.170528 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.170544 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.170556 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:47Z","lastTransitionTime":"2026-01-23T16:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.272936 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.272972 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.272984 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.272996 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.273004 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:47Z","lastTransitionTime":"2026-01-23T16:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.365167 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.365176 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:53:47 crc kubenswrapper[4606]: E0123 16:53:47.365512 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:53:47 crc kubenswrapper[4606]: E0123 16:53:47.365337 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.365197 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:53:47 crc kubenswrapper[4606]: E0123 16:53:47.365654 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.366151 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 11:10:40.167884593 +0000 UTC Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.375508 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.375566 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.375621 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.375647 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.375667 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:47Z","lastTransitionTime":"2026-01-23T16:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.480120 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.480147 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.480156 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.480169 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.480178 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:47Z","lastTransitionTime":"2026-01-23T16:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.583534 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.583626 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.583649 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.583677 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.583702 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:47Z","lastTransitionTime":"2026-01-23T16:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.686206 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.686275 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.686294 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.686320 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.686342 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:47Z","lastTransitionTime":"2026-01-23T16:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.792839 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.793111 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.793181 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.793249 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.793309 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:47Z","lastTransitionTime":"2026-01-23T16:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.896766 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.896819 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.896837 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.896860 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.896877 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:47Z","lastTransitionTime":"2026-01-23T16:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.999461 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.999518 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.999537 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.999567 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:47 crc kubenswrapper[4606]: I0123 16:53:47.999620 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:47Z","lastTransitionTime":"2026-01-23T16:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.102056 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.102125 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.102151 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.102180 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.102201 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:48Z","lastTransitionTime":"2026-01-23T16:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.205062 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.205114 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.205129 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.205147 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.205159 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:48Z","lastTransitionTime":"2026-01-23T16:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.308371 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.308454 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.308479 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.308517 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.308543 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:48Z","lastTransitionTime":"2026-01-23T16:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.364722 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:53:48 crc kubenswrapper[4606]: E0123 16:53:48.364917 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe" Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.366866 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 01:18:09.244684488 +0000 UTC Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.411340 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.411386 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.411399 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.411419 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.411433 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:48Z","lastTransitionTime":"2026-01-23T16:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.513524 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.513579 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.513613 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.513631 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.513644 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:48Z","lastTransitionTime":"2026-01-23T16:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.616104 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.616144 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.616155 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.616170 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.616179 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:48Z","lastTransitionTime":"2026-01-23T16:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.718227 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.718265 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.718273 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.718287 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.718298 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:48Z","lastTransitionTime":"2026-01-23T16:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.821462 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.821542 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.821570 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.821637 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.821664 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:48Z","lastTransitionTime":"2026-01-23T16:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.924454 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.924512 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.924527 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.924548 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:48 crc kubenswrapper[4606]: I0123 16:53:48.924564 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:48Z","lastTransitionTime":"2026-01-23T16:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.027308 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.027346 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.027354 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.027369 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.027397 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:49Z","lastTransitionTime":"2026-01-23T16:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.130236 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.130283 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.130295 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.130314 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.130329 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:49Z","lastTransitionTime":"2026-01-23T16:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.233408 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.233461 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.233479 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.233502 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.233520 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:49Z","lastTransitionTime":"2026-01-23T16:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.336569 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.336705 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.336720 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.336747 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.336761 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:49Z","lastTransitionTime":"2026-01-23T16:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.364850 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.365034 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 16:53:49 crc kubenswrapper[4606]: E0123 16:53:49.365195 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.364862 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 16:53:49 crc kubenswrapper[4606]: E0123 16:53:49.365466 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 16:53:49 crc kubenswrapper[4606]: E0123 16:53:49.365058 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.368010 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 20:37:01.492028426 +0000 UTC
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.440197 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.440257 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.440269 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.440287 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.440299 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:49Z","lastTransitionTime":"2026-01-23T16:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.542566 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.542674 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.542685 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.542738 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.542752 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:49Z","lastTransitionTime":"2026-01-23T16:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.645407 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.645785 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.645932 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.646006 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.646075 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:49Z","lastTransitionTime":"2026-01-23T16:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.749653 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.749708 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.749719 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.749737 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.749750 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:49Z","lastTransitionTime":"2026-01-23T16:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.852842 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.852891 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.852908 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.852932 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.852948 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:49Z","lastTransitionTime":"2026-01-23T16:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.956160 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.956458 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.956521 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.956629 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:49 crc kubenswrapper[4606]: I0123 16:53:49.956695 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:49Z","lastTransitionTime":"2026-01-23T16:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.059393 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.059770 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.059889 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.059963 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.060035 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:50Z","lastTransitionTime":"2026-01-23T16:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.162276 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.162312 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.162320 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.162335 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.162347 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:50Z","lastTransitionTime":"2026-01-23T16:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.265897 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.265948 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.265960 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.265986 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.266004 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:50Z","lastTransitionTime":"2026-01-23T16:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.365323 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl"
Jan 23 16:53:50 crc kubenswrapper[4606]: E0123 16:53:50.365465 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.368356 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 13:53:37.584907076 +0000 UTC
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.369092 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.369138 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.369149 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.369166 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.369177 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:50Z","lastTransitionTime":"2026-01-23T16:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.471726 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.471773 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.471790 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.471820 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.471838 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:50Z","lastTransitionTime":"2026-01-23T16:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.582197 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.582242 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.582254 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.582270 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.582280 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:50Z","lastTransitionTime":"2026-01-23T16:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.684997 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.685037 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.685047 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.685061 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.685070 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:50Z","lastTransitionTime":"2026-01-23T16:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.787392 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.787434 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.787444 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.787460 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.787472 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:50Z","lastTransitionTime":"2026-01-23T16:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.891409 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.891455 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.891464 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.891482 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.891492 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:50Z","lastTransitionTime":"2026-01-23T16:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.995859 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.995900 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.995909 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.995924 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:50 crc kubenswrapper[4606]: I0123 16:53:50.995933 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:50Z","lastTransitionTime":"2026-01-23T16:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.099647 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.099700 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.099711 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.099733 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.099745 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:51Z","lastTransitionTime":"2026-01-23T16:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.203223 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.203271 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.203305 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.203323 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.203336 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:51Z","lastTransitionTime":"2026-01-23T16:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.307908 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.307988 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.308003 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.308025 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.308039 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:51Z","lastTransitionTime":"2026-01-23T16:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.365837 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.372096 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 09:42:11.377006447 +0000 UTC
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.373687 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 16:53:51 crc kubenswrapper[4606]: E0123 16:53:51.373803 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.377015 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 16:53:51 crc kubenswrapper[4606]: E0123 16:53:51.377110 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 16:53:51 crc kubenswrapper[4606]: E0123 16:53:51.377164 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.411750 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.412301 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.412448 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.412549 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.412685 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:51Z","lastTransitionTime":"2026-01-23T16:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.515637 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.516002 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.516079 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.516157 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.516243 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:51Z","lastTransitionTime":"2026-01-23T16:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.619698 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.619746 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.619756 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.619776 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.619788 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:51Z","lastTransitionTime":"2026-01-23T16:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.723245 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.723313 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.723335 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.723367 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.723388 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:51Z","lastTransitionTime":"2026-01-23T16:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.825272 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.825319 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.825331 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.825349 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.825360 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:51Z","lastTransitionTime":"2026-01-23T16:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.927675 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.927716 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.927728 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.927742 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:51 crc kubenswrapper[4606]: I0123 16:53:51.927750 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:51Z","lastTransitionTime":"2026-01-23T16:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.029955 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.030019 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.030043 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.030062 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.030074 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:52Z","lastTransitionTime":"2026-01-23T16:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.132376 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.132449 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.132461 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.132478 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.132491 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:52Z","lastTransitionTime":"2026-01-23T16:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.234833 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.234865 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.234877 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.234894 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.234905 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:52Z","lastTransitionTime":"2026-01-23T16:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.336914 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.336954 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.336964 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.336978 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.336986 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:52Z","lastTransitionTime":"2026-01-23T16:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.365332 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl"
Jan 23 16:53:52 crc kubenswrapper[4606]: E0123 16:53:52.365471 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.372694 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 14:17:33.265520214 +0000 UTC
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.439498 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.439525 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.439534 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.439547 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.439555 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:52Z","lastTransitionTime":"2026-01-23T16:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.541489 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.541529 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.541540 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.541559 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.541571 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:52Z","lastTransitionTime":"2026-01-23T16:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.643748 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.643812 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.643824 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.643839 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.643848 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:52Z","lastTransitionTime":"2026-01-23T16:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.745975 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.746013 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.746024 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.746040 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.746054 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:52Z","lastTransitionTime":"2026-01-23T16:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.848367 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.848409 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.848419 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.848434 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.848447 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:52Z","lastTransitionTime":"2026-01-23T16:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.950748 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.950791 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.950800 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.950820 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:52 crc kubenswrapper[4606]: I0123 16:53:52.950829 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:52Z","lastTransitionTime":"2026-01-23T16:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.052618 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.052652 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.052663 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.052677 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.052686 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:53Z","lastTransitionTime":"2026-01-23T16:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.154612 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.154671 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.154687 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.154700 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.154709 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:53Z","lastTransitionTime":"2026-01-23T16:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.256950 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.256998 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.257007 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.257022 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.257033 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:53Z","lastTransitionTime":"2026-01-23T16:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.358990 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.359031 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.359044 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.359060 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.359071 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:53Z","lastTransitionTime":"2026-01-23T16:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.365336 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.365552 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.365718 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 16:53:53 crc kubenswrapper[4606]: E0123 16:53:53.365707 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 16:53:53 crc kubenswrapper[4606]: E0123 16:53:53.365895 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 16:53:53 crc kubenswrapper[4606]: E0123 16:53:53.365840 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.373056 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 14:37:18.637615902 +0000 UTC
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.461280 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.461618 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.461754 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.461860 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.461951 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:53Z","lastTransitionTime":"2026-01-23T16:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.564940 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.565000 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.565021 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.565039 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.565048 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:53Z","lastTransitionTime":"2026-01-23T16:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.667185 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.667225 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.667235 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.667248 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.667257 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:53Z","lastTransitionTime":"2026-01-23T16:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.769286 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.769326 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.769334 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.769348 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.769357 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:53Z","lastTransitionTime":"2026-01-23T16:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.871372 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.871404 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.871413 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.871425 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.871437 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:53Z","lastTransitionTime":"2026-01-23T16:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.974127 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.974159 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.974167 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.974182 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:53 crc kubenswrapper[4606]: I0123 16:53:53.974191 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:53Z","lastTransitionTime":"2026-01-23T16:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.076638 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.076683 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.076694 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.076710 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.076719 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:54Z","lastTransitionTime":"2026-01-23T16:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.178701 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.178745 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.178755 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.178772 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.178784 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:54Z","lastTransitionTime":"2026-01-23T16:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.280662 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.280697 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.280705 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.280718 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.280726 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:54Z","lastTransitionTime":"2026-01-23T16:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.364899 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:53:54 crc kubenswrapper[4606]: E0123 16:53:54.365403 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.365717 4606 scope.go:117] "RemoveContainer" containerID="5f5458ba33d5254918513e4394f86cbedb2092b552949a066a59b263cb92bb79" Jan 23 16:53:54 crc kubenswrapper[4606]: E0123 16:53:54.365884 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-j9tqd_openshift-ovn-kubernetes(a0f37a36-f502-425c-a9e5-035bf475742b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.373639 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 15:34:06.208852069 +0000 UTC Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.383852 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.383910 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.383923 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.383942 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.383953 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:54Z","lastTransitionTime":"2026-01-23T16:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.486213 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.486252 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.486262 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.486277 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.486290 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:54Z","lastTransitionTime":"2026-01-23T16:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.588764 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.588834 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.588844 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.588913 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.588936 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:54Z","lastTransitionTime":"2026-01-23T16:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.691810 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.691875 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.691889 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.691905 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.691973 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:54Z","lastTransitionTime":"2026-01-23T16:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.735322 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.735375 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.735387 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.735403 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.735415 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:54Z","lastTransitionTime":"2026-01-23T16:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:54 crc kubenswrapper[4606]: E0123 16:53:54.752164 4606 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a4ca771-bbec-40fb-95a6-5cc937c9c7f8\\\",\\\"systemUUID\\\":\\\"843df5bf-5299-4274-8c79-9400b122a934\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:54Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.755632 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.755672 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.755682 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.755697 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.755708 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:54Z","lastTransitionTime":"2026-01-23T16:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:54 crc kubenswrapper[4606]: E0123 16:53:54.767730 4606 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a4ca771-bbec-40fb-95a6-5cc937c9c7f8\\\",\\\"systemUUID\\\":\\\"843df5bf-5299-4274-8c79-9400b122a934\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:54Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.770767 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.770987 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.770997 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.771008 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.771016 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:54Z","lastTransitionTime":"2026-01-23T16:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:54 crc kubenswrapper[4606]: E0123 16:53:54.781155 4606 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a4ca771-bbec-40fb-95a6-5cc937c9c7f8\\\",\\\"systemUUID\\\":\\\"843df5bf-5299-4274-8c79-9400b122a934\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:54Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.784061 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.784094 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.784105 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.784121 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.784134 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:54Z","lastTransitionTime":"2026-01-23T16:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:54 crc kubenswrapper[4606]: E0123 16:53:54.794872 4606 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a4ca771-bbec-40fb-95a6-5cc937c9c7f8\\\",\\\"systemUUID\\\":\\\"843df5bf-5299-4274-8c79-9400b122a934\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:54Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.803623 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.803664 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.803675 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.803689 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.803698 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:54Z","lastTransitionTime":"2026-01-23T16:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:54 crc kubenswrapper[4606]: E0123 16:53:54.814090 4606 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a4ca771-bbec-40fb-95a6-5cc937c9c7f8\\\",\\\"systemUUID\\\":\\\"843df5bf-5299-4274-8c79-9400b122a934\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:54Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:54 crc kubenswrapper[4606]: E0123 16:53:54.814250 4606 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.815882 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.815937 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.815950 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.815967 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.815981 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:54Z","lastTransitionTime":"2026-01-23T16:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.917918 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.917956 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.917965 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.917980 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:54 crc kubenswrapper[4606]: I0123 16:53:54.917989 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:54Z","lastTransitionTime":"2026-01-23T16:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.020173 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.020217 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.020226 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.020245 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.020256 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:55Z","lastTransitionTime":"2026-01-23T16:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.121877 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.121909 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.121919 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.121931 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.121939 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:55Z","lastTransitionTime":"2026-01-23T16:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.224433 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.224474 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.224485 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.224501 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.224512 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:55Z","lastTransitionTime":"2026-01-23T16:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.327033 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.327081 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.327094 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.327109 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.327119 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:55Z","lastTransitionTime":"2026-01-23T16:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.365446 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.365601 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:53:55 crc kubenswrapper[4606]: E0123 16:53:55.365719 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.365781 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:53:55 crc kubenswrapper[4606]: E0123 16:53:55.365868 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:53:55 crc kubenswrapper[4606]: E0123 16:53:55.366001 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.373952 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 00:26:17.943368851 +0000 UTC Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.429758 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.429789 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.429817 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.429830 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.429839 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:55Z","lastTransitionTime":"2026-01-23T16:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.531765 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.531815 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.531830 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.531849 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.531860 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:55Z","lastTransitionTime":"2026-01-23T16:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.634726 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.634794 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.634808 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.634828 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.634839 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:55Z","lastTransitionTime":"2026-01-23T16:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.737100 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.737135 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.737143 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.737154 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.737164 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:55Z","lastTransitionTime":"2026-01-23T16:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.839413 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.839447 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.839457 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.839470 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.839479 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:55Z","lastTransitionTime":"2026-01-23T16:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.941935 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.941982 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.941991 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.942009 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:55 crc kubenswrapper[4606]: I0123 16:53:55.942019 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:55Z","lastTransitionTime":"2026-01-23T16:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.044239 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.044278 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.044287 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.044301 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.044311 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:56Z","lastTransitionTime":"2026-01-23T16:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.147148 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.147184 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.147192 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.147205 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.147213 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:56Z","lastTransitionTime":"2026-01-23T16:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.209126 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bae240c5-8926-4fc6-b0d9-fdf64bacecfe-metrics-certs\") pod \"network-metrics-daemon-qqwhl\" (UID: \"bae240c5-8926-4fc6-b0d9-fdf64bacecfe\") " pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:53:56 crc kubenswrapper[4606]: E0123 16:53:56.209313 4606 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 16:53:56 crc kubenswrapper[4606]: E0123 16:53:56.209381 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bae240c5-8926-4fc6-b0d9-fdf64bacecfe-metrics-certs podName:bae240c5-8926-4fc6-b0d9-fdf64bacecfe nodeName:}" failed. No retries permitted until 2026-01-23 16:54:28.209359301 +0000 UTC m=+102.209551717 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bae240c5-8926-4fc6-b0d9-fdf64bacecfe-metrics-certs") pod "network-metrics-daemon-qqwhl" (UID: "bae240c5-8926-4fc6-b0d9-fdf64bacecfe") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.250382 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.250423 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.250434 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.250451 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.250462 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:56Z","lastTransitionTime":"2026-01-23T16:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.353444 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.353487 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.353500 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.353517 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.353528 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:56Z","lastTransitionTime":"2026-01-23T16:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.365005 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:53:56 crc kubenswrapper[4606]: E0123 16:53:56.365176 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.375722 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 18:24:10.864366485 +0000 UTC Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.387425 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:56Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.403848 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01472aab8c10837cc0e6f6952de409f201741170c85dd371b3148908a2b2dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:56Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.413014 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wc86k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa611a0a-e12f-403c-8cd6-666972207315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2cf4579ce544ab371e72864ef70d710c1323fe6ec7707b79d6aec19ee21eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xlsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wc86k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:56Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.425958 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:56Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.438202 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:56Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.450571 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06af028d-1458-4d49-8cd4-dccbc84a469c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdd354c5bc11778b8c407aa7c9f996ee4c431839d6823dcdad92b8855d2a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c64d435f1b147fd7d914909988065557c2bd3309d0565391fe788a67f2d0de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7894a11c4c9859559472fbae0ec0fb74afbc11e3580e50c3d5c1465f7bb3177b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:56Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.455274 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.455311 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.455320 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.455338 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.455347 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:56Z","lastTransitionTime":"2026-01-23T16:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.462638 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:56Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.476533 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:56Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.487675 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846abd3645d34fa7f1d6227fdd26fd48a7df9a6cbc57b85790b9e7e444041b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:56Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.499812 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:56Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.518887 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics
-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f5458ba33d5254918513e4394f86cbedb2092b552949a066a59b263cb92bb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f5458ba33d5254918513e4394f86cbedb2092b552949a066a59b263cb92bb79\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:53:41Z\\\",\\\"message\\\":\\\":map[10.217.4.233:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1dc899db-4498-4b7a-8437-861940b962e7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0123 16:53:41.211721 6141 services_controller.go:452] Built service openshift-marketplace/redhat-marketplace per-node LB for network=default: []services.LB{}\\\\nF0123 16:53:41.211731 6141 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:41Z is after 2025-08-24T17:21:41Z]\\\\nI0123 16:53:41.211740 6141 services_controller.go:453] Built service openshift-marketplace/redhat-marketplace template LB for 
network=default: []\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-j9tqd_openshift-ovn-kubernetes(a0f37a36-f502-425c-a9e5-035bf475742b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:56Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.537537 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qqwhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae240c5-8926-4fc6-b0d9-fdf64bacecfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcnkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcnkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qqwhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:56Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.552239 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:56Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.559155 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.559444 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.559519 4606 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.559615 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.559683 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:56Z","lastTransitionTime":"2026-01-23T16:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.567359 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:56Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.578766 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:56Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.593170 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f6002ad-cfb9-4ace-a3af-69d16e0a930d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcd9f0d6640f01bb9dd01eb0ae616c15475af5f5e4a8bf8d83f32950d92b5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://960decb6796863d62d6e8bbedddee45f9a1480ce1f7dc9cd17dd07a57837cae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d26f11973f8bbb904852bca424c7bcb04d9854f66290276e05ceed79aae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea981139b54887528cc0d397f970a88a6107e5176e9df8cb8a2f2137738f8057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea981139b54887528cc0d397f970a88a6107e5176e9df8cb8a2f2137738f8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:56Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.662046 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.662098 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.662110 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.662131 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.662144 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:56Z","lastTransitionTime":"2026-01-23T16:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.669301 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb934e2-af1b-473c-ae3a-061098141a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76dd35929bfd277c644748bddc174d64f5fda7a08c0eb6b3c879c62d7d58ef39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55898c0677b92ca8dcd4e764ab3cf1c4b605a057574f47ddf2820773938458c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nntkz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:56Z is after 2025-08-24T17:21:41Z" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.764809 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.764875 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.764894 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.764918 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.764935 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:56Z","lastTransitionTime":"2026-01-23T16:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.868208 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.868250 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.868261 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.868278 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.868289 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:56Z","lastTransitionTime":"2026-01-23T16:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.971324 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.971574 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.971684 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.971761 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:56 crc kubenswrapper[4606]: I0123 16:53:56.971784 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:56Z","lastTransitionTime":"2026-01-23T16:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.073902 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.073941 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.073953 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.073968 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.073978 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:57Z","lastTransitionTime":"2026-01-23T16:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.176315 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.176341 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.176349 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.176362 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.176370 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:57Z","lastTransitionTime":"2026-01-23T16:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.284190 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.284221 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.284229 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.284243 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.284252 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:57Z","lastTransitionTime":"2026-01-23T16:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.364806 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.364886 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.364816 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:53:57 crc kubenswrapper[4606]: E0123 16:53:57.364949 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:53:57 crc kubenswrapper[4606]: E0123 16:53:57.365080 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:53:57 crc kubenswrapper[4606]: E0123 16:53:57.365153 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.375957 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 22:11:52.971297518 +0000 UTC Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.386219 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.386293 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.386303 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.386316 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.386324 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:57Z","lastTransitionTime":"2026-01-23T16:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.488536 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.488571 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.488583 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.488617 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.488627 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:57Z","lastTransitionTime":"2026-01-23T16:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.590909 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.590936 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.590946 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.590961 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.590973 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:57Z","lastTransitionTime":"2026-01-23T16:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.694000 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.694032 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.694041 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.694054 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.694063 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:57Z","lastTransitionTime":"2026-01-23T16:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.796450 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.796475 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.796482 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.796495 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.796503 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:57Z","lastTransitionTime":"2026-01-23T16:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.898776 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.898815 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.898827 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.898842 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:57 crc kubenswrapper[4606]: I0123 16:53:57.898852 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:57Z","lastTransitionTime":"2026-01-23T16:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.000740 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.000800 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.000814 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.000832 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.000843 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:58Z","lastTransitionTime":"2026-01-23T16:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.103115 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.103152 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.103161 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.103174 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.103182 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:58Z","lastTransitionTime":"2026-01-23T16:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.205468 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.205502 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.205511 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.205523 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.205533 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:58Z","lastTransitionTime":"2026-01-23T16:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.308030 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.308067 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.308075 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.308091 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.308100 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:58Z","lastTransitionTime":"2026-01-23T16:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.364816 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:53:58 crc kubenswrapper[4606]: E0123 16:53:58.364998 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.376215 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 19:27:30.469344285 +0000 UTC Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.410052 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.410086 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.410097 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.410112 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.410123 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:58Z","lastTransitionTime":"2026-01-23T16:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.512527 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.512555 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.512563 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.512577 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.512605 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:58Z","lastTransitionTime":"2026-01-23T16:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.615109 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.615147 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.615156 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.615168 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.615176 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:58Z","lastTransitionTime":"2026-01-23T16:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.717403 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.717441 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.717451 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.717467 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.717479 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:58Z","lastTransitionTime":"2026-01-23T16:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.819418 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.819473 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.819489 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.819512 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.819528 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:58Z","lastTransitionTime":"2026-01-23T16:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.922999 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.923064 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.923074 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.923087 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:58 crc kubenswrapper[4606]: I0123 16:53:58.923097 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:58Z","lastTransitionTime":"2026-01-23T16:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.026189 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.026249 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.026259 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.026275 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.026288 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:59Z","lastTransitionTime":"2026-01-23T16:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.128828 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.128909 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.128923 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.128940 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.128954 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:59Z","lastTransitionTime":"2026-01-23T16:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.231135 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.231480 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.231571 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.231684 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.231780 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:59Z","lastTransitionTime":"2026-01-23T16:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.333739 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.333783 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.333793 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.333807 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.333817 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:59Z","lastTransitionTime":"2026-01-23T16:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.365772 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.365797 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:53:59 crc kubenswrapper[4606]: E0123 16:53:59.365964 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:53:59 crc kubenswrapper[4606]: E0123 16:53:59.366094 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.366539 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:53:59 crc kubenswrapper[4606]: E0123 16:53:59.366823 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.377233 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 08:49:21.67600951 +0000 UTC Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.436473 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.436510 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.436519 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.436534 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.436544 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:59Z","lastTransitionTime":"2026-01-23T16:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.538492 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.538536 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.538544 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.538558 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.538568 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:59Z","lastTransitionTime":"2026-01-23T16:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.640685 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.640924 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.641014 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.641080 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.641151 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:59Z","lastTransitionTime":"2026-01-23T16:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.743193 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.743394 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.743457 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.743549 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.743628 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:59Z","lastTransitionTime":"2026-01-23T16:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.845298 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.845604 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.845733 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.845813 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.845876 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:59Z","lastTransitionTime":"2026-01-23T16:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.948070 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.948374 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.948455 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.948546 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:53:59 crc kubenswrapper[4606]: I0123 16:53:59.948629 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:53:59Z","lastTransitionTime":"2026-01-23T16:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.002223 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ftp8l_2692a882-403f-4842-b756-53ed7a38fa7f/kube-multus/0.log" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.002267 4606 generic.go:334] "Generic (PLEG): container finished" podID="2692a882-403f-4842-b756-53ed7a38fa7f" containerID="7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b" exitCode=1 Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.002296 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ftp8l" event={"ID":"2692a882-403f-4842-b756-53ed7a38fa7f","Type":"ContainerDied","Data":"7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b"} Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.002739 4606 scope.go:117] "RemoveContainer" containerID="7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.017598 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.029580 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06af028d-1458-4d49-8cd4-dccbc84a469c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdd354c5bc11778b8c407aa7c9f996ee4c431839d6823dcdad92b8855d2a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c64d435f1b147fd7d914909988065557c2bd3309d0565391fe788a67f2d0de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7894a11c4c9859559472fbae0ec0fb74afbc11e3580e50c3d5c1465f7bb3177b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.048039 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:00Z is after 2025-08-24T17:21:41Z"
Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.051001 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.051039 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.051047 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.051059 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.051067 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:00Z","lastTransitionTime":"2026-01-23T16:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.063974 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.075170 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846abd3645d34fa7f1d6227fdd26fd48a7df9a6cbc57b85790b9e7e444041b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.096111 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.110273 4606 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:53:59Z\\\",\\\"message\\\":\\\"2026-01-23T16:53:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_550d174a-1f43-4687-9028-f98e8740880a\\\\n2026-01-23T16:53:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_550d174a-1f43-4687-9028-f98e8740880a to /host/opt/cni/bin/\\\\n2026-01-23T16:53:14Z [verbose] multus-daemon started\\\\n2026-01-23T16:53:14Z [verbose] Readiness Indicator file check\\\\n2026-01-23T16:53:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.130268 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f5458ba33d5254918513e4394f86cbedb2092b552949a066a59b263cb92bb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f5458ba33d5254918513e4394f86cbedb2092b552949a066a59b263cb92bb79\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:53:41Z\\\",\\\"message\\\":\\\":map[10.217.4.233:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1dc899db-4498-4b7a-8437-861940b962e7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0123 16:53:41.211721 6141 services_controller.go:452] Built service openshift-marketplace/redhat-marketplace per-node LB for network=default: []services.LB{}\\\\nF0123 16:53:41.211731 6141 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:41Z is after 2025-08-24T17:21:41Z]\\\\nI0123 16:53:41.211740 6141 services_controller.go:453] Built service openshift-marketplace/redhat-marketplace template LB for network=default: []\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-j9tqd_openshift-ovn-kubernetes(a0f37a36-f502-425c-a9e5-035bf475742b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.152031 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qqwhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae240c5-8926-4fc6-b0d9-fdf64bacecfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcnkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcnkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qqwhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:00Z is after 2025-08-24T17:21:41Z"
Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.154272 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.154316 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.154329 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.154349 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.154366 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:00Z","lastTransitionTime":"2026-01-23T16:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.165930 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.179708 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.194331 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.210400 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f6002ad-cfb9-4ace-a3af-69d16e0a930d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcd9f0d6640f01bb9dd01eb0ae616c15475af5f5e4a8bf8d83f32950d92b5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://960decb6796863d62d6e8bbedddee45f9a1480ce1f7dc9cd17dd07a57837cae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d26f11973f8bbb904852bca424c7bcb04d9854f66290276e05ceed79aae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea981139b54887528cc0d397f970a88a6107e5176e9df8cb8a2f2137738f8057\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea981139b54887528cc0d397f970a88a6107e5176e9df8cb8a2f2137738f8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.224679 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb934e2-af1b-473c-ae3a-061098141a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76dd35929bfd277c644748bddc174d64f5fda7a08c0eb6b3c879c62d7d58ef39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55898c0677b92ca8dcd4e764ab3cf1c4b605a057574f47ddf2820773938458c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nntkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.239571 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.258268 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01472aab8c10837cc0e6f6952de409f201741170c85dd371b3148908a2b2dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-23T16:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:54:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.258976 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.258996 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.259003 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.259016 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.259024 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:00Z","lastTransitionTime":"2026-01-23T16:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.274676 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wc86k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa611a0a-e12f-403c-8cd6-666972207315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2cf4579ce544ab371e72864ef70d710c1323fe6ec7707b79d6aec19ee21eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xlsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wc86k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:00Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.360758 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.360804 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.360815 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.360832 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.360845 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:00Z","lastTransitionTime":"2026-01-23T16:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.365237 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:54:00 crc kubenswrapper[4606]: E0123 16:54:00.365374 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.378345 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 12:47:52.066829535 +0000 UTC Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.463598 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.463652 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.463662 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.463676 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.463685 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:00Z","lastTransitionTime":"2026-01-23T16:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.566098 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.566137 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.566150 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.566168 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.566180 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:00Z","lastTransitionTime":"2026-01-23T16:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.668480 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.668510 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.668520 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.668532 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.668540 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:00Z","lastTransitionTime":"2026-01-23T16:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.771768 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.771820 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.771840 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.771863 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.771880 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:00Z","lastTransitionTime":"2026-01-23T16:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.873946 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.873973 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.873981 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.873994 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.874003 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:00Z","lastTransitionTime":"2026-01-23T16:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.976658 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.976701 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.976716 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.976736 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:00 crc kubenswrapper[4606]: I0123 16:54:00.976752 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:00Z","lastTransitionTime":"2026-01-23T16:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.010158 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ftp8l_2692a882-403f-4842-b756-53ed7a38fa7f/kube-multus/0.log" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.010212 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ftp8l" event={"ID":"2692a882-403f-4842-b756-53ed7a38fa7f","Type":"ContainerStarted","Data":"e1c017382ddcc3e55e81de6fed924558f181ebf81f3e1cdf9323f5c7e89401f1"} Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.032717 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:01Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.050511 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06af028d-1458-4d49-8cd4-dccbc84a469c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdd354c5bc11778b8c407aa7c9f996ee4c431839d6823dcdad92b8855d2a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c64d435f1b147fd7d914909988065557c2bd3309d0565391fe788a67f2d0de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7894a11c4c9859559472fbae0ec0fb74afbc11e3580e50c3d5c1465f7bb3177b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:01Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.065647 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:01Z is after 2025-08-24T17:21:41Z"
Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.079315 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.079356 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.079371 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.079390 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.079407 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:01Z","lastTransitionTime":"2026-01-23T16:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.080304 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:01Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.092471 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846abd3645d34fa7f1d6227fdd26fd48a7df9a6cbc57b85790b9e7e444041b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:01Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.103795 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:01Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.118919 4606 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1c017382ddcc3e55e81de6fed924558f181ebf81f3e1cdf9323f5c7e89401f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:53:59Z\\\",\\\"message\\\":\\\"2026-01-23T16:53:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_550d174a-1f43-4687-9028-f98e8740880a\\\\n2026-01-23T16:53:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_550d174a-1f43-4687-9028-f98e8740880a to /host/opt/cni/bin/\\\\n2026-01-23T16:53:14Z [verbose] multus-daemon started\\\\n2026-01-23T16:53:14Z [verbose] Readiness Indicator file check\\\\n2026-01-23T16:53:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:01Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.129468 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qqwhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae240c5-8926-4fc6-b0d9-fdf64bacecfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcnkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcnkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qqwhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:01Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.147311 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f5458ba33d5254918513e4394f86cbedb2092b5
52949a066a59b263cb92bb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f5458ba33d5254918513e4394f86cbedb2092b552949a066a59b263cb92bb79\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:53:41Z\\\",\\\"message\\\":\\\":map[10.217.4.233:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1dc899db-4498-4b7a-8437-861940b962e7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0123 16:53:41.211721 6141 services_controller.go:452] Built service openshift-marketplace/redhat-marketplace per-node LB for network=default: []services.LB{}\\\\nF0123 16:53:41.211731 6141 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:41Z is after 2025-08-24T17:21:41Z]\\\\nI0123 16:53:41.211740 6141 services_controller.go:453] Built service openshift-marketplace/redhat-marketplace template LB for network=default: []\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-j9tqd_openshift-ovn-kubernetes(a0f37a36-f502-425c-a9e5-035bf475742b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:01Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.160568 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:01Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.171685 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:01Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.181699 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.181729 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.181738 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.181754 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.181766 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:01Z","lastTransitionTime":"2026-01-23T16:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.185054 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1
74f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:01Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.196973 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f6002ad-cfb9-4ace-a3af-69d16e0a930d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcd9f0d6640f01bb9dd01eb0ae616c15475af5f5e4a8bf8d83f32950d92b5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://960decb6796863d62d6e8bbedddee45f9a1480ce1f7dc9cd17dd07a57837cae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d26f11973f8bbb904852bca424c7bcb04d9854f66290276e05ceed79aae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea981139b54887528cc0d397f970a88a6107e5176e9df8cb8a2f2137738f8057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea981139b54887528cc0d397f970a88a6107e5176e9df8cb8a2f2137738f8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:01Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.208077 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb934e2-af1b-473c-ae3a-061098141a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76dd35929bfd277c644748bddc174d64f5fda7a08c0eb6b3c879c62d7d58ef39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55898c0677b92ca8dcd4e764ab3cf1c4b605a057574f47ddf2820773938458c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nntkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:01Z is after 2025-08-24T17:21:41Z" Jan 23 
16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.222861 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01472aab8c10837cc0e6f6952de409f201741170c85dd371b3148908a2b2dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:01Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.232986 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wc86k" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa611a0a-e12f-403c-8cd6-666972207315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2cf4579ce544ab371e72864ef70d710c1323fe6ec7707b79d6aec19ee21eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xlsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wc86k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:01Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.246626 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:01Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.285676 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.285722 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.285733 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.285752 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.285765 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:01Z","lastTransitionTime":"2026-01-23T16:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.365559 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.365610 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:54:01 crc kubenswrapper[4606]: E0123 16:54:01.365705 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.365749 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:54:01 crc kubenswrapper[4606]: E0123 16:54:01.365854 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:54:01 crc kubenswrapper[4606]: E0123 16:54:01.365934 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.379233 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 03:10:49.099085721 +0000 UTC Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.388144 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.388187 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.388202 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.388220 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.388236 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:01Z","lastTransitionTime":"2026-01-23T16:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.490450 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.490491 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.490502 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.490516 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.490526 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:01Z","lastTransitionTime":"2026-01-23T16:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.592480 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.592519 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.592529 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.592544 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.592556 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:01Z","lastTransitionTime":"2026-01-23T16:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.695618 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.695655 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.695665 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.695681 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.695692 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:01Z","lastTransitionTime":"2026-01-23T16:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.800886 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.800930 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.800944 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.800969 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.800981 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:01Z","lastTransitionTime":"2026-01-23T16:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.903645 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.903729 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.903753 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.903783 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:01 crc kubenswrapper[4606]: I0123 16:54:01.903845 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:01Z","lastTransitionTime":"2026-01-23T16:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.006551 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.006627 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.006643 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.006664 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.006678 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:02Z","lastTransitionTime":"2026-01-23T16:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.110125 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.110180 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.110202 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.110231 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.110252 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:02Z","lastTransitionTime":"2026-01-23T16:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.212526 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.212583 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.212624 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.212647 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.212664 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:02Z","lastTransitionTime":"2026-01-23T16:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.315560 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.315665 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.315692 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.315719 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.315739 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:02Z","lastTransitionTime":"2026-01-23T16:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.365393 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:54:02 crc kubenswrapper[4606]: E0123 16:54:02.365565 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.379672 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 03:07:52.634839937 +0000 UTC Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.419199 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.419236 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.419244 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.419258 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.419266 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:02Z","lastTransitionTime":"2026-01-23T16:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.521861 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.521899 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.521910 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.521925 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.521936 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:02Z","lastTransitionTime":"2026-01-23T16:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.625078 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.625354 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.625417 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.625477 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.625569 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:02Z","lastTransitionTime":"2026-01-23T16:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.727298 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.727331 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.727339 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.727351 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.727359 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:02Z","lastTransitionTime":"2026-01-23T16:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.829533 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.829561 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.829568 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.829582 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.829619 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:02Z","lastTransitionTime":"2026-01-23T16:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.932024 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.932427 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.932666 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.932923 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:02 crc kubenswrapper[4606]: I0123 16:54:02.933133 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:02Z","lastTransitionTime":"2026-01-23T16:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.036859 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.036979 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.036999 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.037022 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.037039 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:03Z","lastTransitionTime":"2026-01-23T16:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.140961 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.141056 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.141083 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.141113 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.141134 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:03Z","lastTransitionTime":"2026-01-23T16:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.244485 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.244543 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.244581 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.244668 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.244694 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:03Z","lastTransitionTime":"2026-01-23T16:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.347644 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.348044 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.348237 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.348448 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.348580 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:03Z","lastTransitionTime":"2026-01-23T16:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.365102 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.365172 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:54:03 crc kubenswrapper[4606]: E0123 16:54:03.365273 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:54:03 crc kubenswrapper[4606]: E0123 16:54:03.365525 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.365808 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:54:03 crc kubenswrapper[4606]: E0123 16:54:03.365928 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.379835 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 19:05:03.17367392 +0000 UTC Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.452028 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.452097 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.452120 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.452146 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.452165 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:03Z","lastTransitionTime":"2026-01-23T16:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.554844 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.554896 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.554914 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.554937 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.554954 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:03Z","lastTransitionTime":"2026-01-23T16:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.658122 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.658626 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.658922 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.659136 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.659346 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:03Z","lastTransitionTime":"2026-01-23T16:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.762048 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.762271 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.762364 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.762426 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.762490 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:03Z","lastTransitionTime":"2026-01-23T16:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.864918 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.864947 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.864954 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.864966 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.864975 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:03Z","lastTransitionTime":"2026-01-23T16:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.967739 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.967796 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.967812 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.967836 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:03 crc kubenswrapper[4606]: I0123 16:54:03.967854 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:03Z","lastTransitionTime":"2026-01-23T16:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.070160 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.070199 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.070211 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.070227 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.070237 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:04Z","lastTransitionTime":"2026-01-23T16:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.172494 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.173061 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.173285 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.173475 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.173695 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:04Z","lastTransitionTime":"2026-01-23T16:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.276159 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.276208 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.276217 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.276229 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.276284 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:04Z","lastTransitionTime":"2026-01-23T16:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.364681 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:54:04 crc kubenswrapper[4606]: E0123 16:54:04.364880 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.378629 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.378678 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.378695 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.378717 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.378733 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:04Z","lastTransitionTime":"2026-01-23T16:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.380794 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 13:10:53.513077024 +0000 UTC Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.488874 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.488939 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.488957 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.488980 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.488999 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:04Z","lastTransitionTime":"2026-01-23T16:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.591724 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.591759 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.591767 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.591782 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.591791 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:04Z","lastTransitionTime":"2026-01-23T16:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.694396 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.694428 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.694436 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.694448 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.694456 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:04Z","lastTransitionTime":"2026-01-23T16:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.796392 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.796444 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.796456 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.796473 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.796485 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:04Z","lastTransitionTime":"2026-01-23T16:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.880112 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.880141 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.880149 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.880160 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.880169 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:04Z","lastTransitionTime":"2026-01-23T16:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:04 crc kubenswrapper[4606]: E0123 16:54:04.892039 4606 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a4ca771-bbec-40fb-95a6-5cc937c9c7f8\\\",\\\"systemUUID\\\":\\\"843df5bf-5299-4274-8c79-9400b122a934\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:04Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.895544 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.895572 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.895604 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.895621 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.895633 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:04Z","lastTransitionTime":"2026-01-23T16:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:04 crc kubenswrapper[4606]: E0123 16:54:04.911638 4606 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a4ca771-bbec-40fb-95a6-5cc937c9c7f8\\\",\\\"systemUUID\\\":\\\"843df5bf-5299-4274-8c79-9400b122a934\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:04Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.915560 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.915620 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.915633 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.915651 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.915664 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:04Z","lastTransitionTime":"2026-01-23T16:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:04 crc kubenswrapper[4606]: E0123 16:54:04.928268 4606 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a4ca771-bbec-40fb-95a6-5cc937c9c7f8\\\",\\\"systemUUID\\\":\\\"843df5bf-5299-4274-8c79-9400b122a934\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:04Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.931472 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.931506 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.931517 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.931532 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.931544 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:04Z","lastTransitionTime":"2026-01-23T16:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:04 crc kubenswrapper[4606]: E0123 16:54:04.944569 4606 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a4ca771-bbec-40fb-95a6-5cc937c9c7f8\\\",\\\"systemUUID\\\":\\\"843df5bf-5299-4274-8c79-9400b122a934\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:04Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.947766 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.947802 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.947811 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.947823 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.947832 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:04Z","lastTransitionTime":"2026-01-23T16:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:04 crc kubenswrapper[4606]: E0123 16:54:04.963951 4606 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a4ca771-bbec-40fb-95a6-5cc937c9c7f8\\\",\\\"systemUUID\\\":\\\"843df5bf-5299-4274-8c79-9400b122a934\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:04Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:04 crc kubenswrapper[4606]: E0123 16:54:04.964077 4606 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.965026 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.965076 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.965085 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.965096 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:04 crc kubenswrapper[4606]: I0123 16:54:04.965105 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:04Z","lastTransitionTime":"2026-01-23T16:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.067341 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.067378 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.067392 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.067407 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.067419 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:05Z","lastTransitionTime":"2026-01-23T16:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.170379 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.170743 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.170937 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.171171 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.171433 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:05Z","lastTransitionTime":"2026-01-23T16:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.273818 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.274049 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.274110 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.274176 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.274232 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:05Z","lastTransitionTime":"2026-01-23T16:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.364692 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.364791 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:54:05 crc kubenswrapper[4606]: E0123 16:54:05.364980 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:54:05 crc kubenswrapper[4606]: E0123 16:54:05.364797 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.365023 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:54:05 crc kubenswrapper[4606]: E0123 16:54:05.365139 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.376334 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.376485 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.376572 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.376701 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.376802 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:05Z","lastTransitionTime":"2026-01-23T16:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.381535 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 08:03:18.452039981 +0000 UTC Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.479807 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.479843 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.479851 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.479866 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.479877 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:05Z","lastTransitionTime":"2026-01-23T16:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.582486 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.582760 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.582860 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.582934 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.583016 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:05Z","lastTransitionTime":"2026-01-23T16:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.685752 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.686048 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.686112 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.686180 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.686235 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:05Z","lastTransitionTime":"2026-01-23T16:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.789150 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.789459 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.789552 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.789652 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.789738 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:05Z","lastTransitionTime":"2026-01-23T16:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.892265 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.892301 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.892311 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.892326 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.892337 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:05Z","lastTransitionTime":"2026-01-23T16:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.994705 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.994752 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.994768 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.994789 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:05 crc kubenswrapper[4606]: I0123 16:54:05.994804 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:05Z","lastTransitionTime":"2026-01-23T16:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.097068 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.097172 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.097181 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.097196 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.097205 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:06Z","lastTransitionTime":"2026-01-23T16:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.199267 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.199331 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.199351 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.199376 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.199395 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:06Z","lastTransitionTime":"2026-01-23T16:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.301985 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.302023 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.302032 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.302045 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.302056 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:06Z","lastTransitionTime":"2026-01-23T16:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.365100 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:54:06 crc kubenswrapper[4606]: E0123 16:54:06.365219 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.378890 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f6002ad-cfb9-4ace-a3af-69d16e0a930d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcd9f0d6640f01bb9dd01eb0ae616c15475af5f5e4a8bf8d83f32950d92b5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://960decb6796863d62d6e8bbedddee45f9a1480ce1f7dc9cd17dd07a57837cae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d26f11973f8bbb904852bca424c7bcb04d9854f66290276e05ceed79aae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea981139b54887528cc0d397f970a88a6107e5176e9df8cb8a2f2137738f8057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea981139b54887528cc0d397f970a88a6107e5176e9df8cb8a2f2137738f8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.381840 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 12:40:35.022253486 +0000 UTC Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.390880 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb934e2-af1b-473c-ae3a-061098141a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76dd35929bfd277c644748bddc174d64f5fda7a08c0eb6b3c879c62d7d58ef39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55898c0677b92ca8dcd4e764ab3cf1c4b605a057574f47ddf2820773938458c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nntkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:06Z is after 2025-08-24T17:21:41Z" Jan 23 
16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.404396 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.404681 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.404799 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.404907 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.405021 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:06Z","lastTransitionTime":"2026-01-23T16:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.405340 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 
16:54:06.419263 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01472aab8c10837cc0e6f6952de409f201741170c85dd371b3148908a2b2dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfb
b085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\
\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.430084 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wc86k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa611a0a-e12f-403c-8cd6-666972207315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2cf4579ce544ab371e72864ef70d710c1323fe6ec7707b79d6aec19ee21eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xlsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wc86k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.441050 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846abd3645d34fa7f1d6227fdd26fd48a7df9a6cbc57b85790b9e7e444041b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.452129 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.466127 4606 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1c017382ddcc3e55e81de6fed924558f181ebf81f3e1cdf9323f5c7e89401f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:53:59Z\\\",\\\"message\\\":\\\"2026-01-23T16:53:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_550d174a-1f43-4687-9028-f98e8740880a\\\\n2026-01-23T16:53:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_550d174a-1f43-4687-9028-f98e8740880a to /host/opt/cni/bin/\\\\n2026-01-23T16:53:14Z [verbose] multus-daemon started\\\\n2026-01-23T16:53:14Z [verbose] Readiness Indicator file check\\\\n2026-01-23T16:53:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.478511 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.489710 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06af028d-1458-4d49-8cd4-dccbc84a469c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdd354c5bc11778b8c407aa7c9f996ee4c431839d6823dcdad92b8855d2a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c64d435f1b147fd7d914909988065557c2bd3309d0565391fe788a67f2d0de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7894a11c4c9859559472fbae0ec0fb74afbc11e3580e50c3d5c1465f7bb3177b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.499710 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.507850 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.507886 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.507896 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.507908 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.507918 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:06Z","lastTransitionTime":"2026-01-23T16:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.511238 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.531815 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f5458ba33d5254918513e4394f86cbedb2092b552949a066a59b263cb92bb79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f5458ba33d5254918513e4394f86cbedb2092b552949a066a59b263cb92bb79\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:53:41Z\\\",\\\"message\\\":\\\":map[10.217.4.233:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1dc899db-4498-4b7a-8437-861940b962e7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0123 16:53:41.211721 6141 services_controller.go:452] Built service openshift-marketplace/redhat-marketplace per-node LB for network=default: []services.LB{}\\\\nF0123 16:53:41.211731 6141 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:41Z is after 2025-08-24T17:21:41Z]\\\\nI0123 16:53:41.211740 6141 services_controller.go:453] Built service openshift-marketplace/redhat-marketplace template LB for network=default: []\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-j9tqd_openshift-ovn-kubernetes(a0f37a36-f502-425c-a9e5-035bf475742b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.543772 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qqwhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae240c5-8926-4fc6-b0d9-fdf64bacecfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcnkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcnkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qqwhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.558337 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.576509 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.585951 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:06Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.610098 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.610139 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.610153 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.610173 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.610187 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:06Z","lastTransitionTime":"2026-01-23T16:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.713892 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.713941 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.713957 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.713980 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.713996 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:06Z","lastTransitionTime":"2026-01-23T16:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.815857 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.815898 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.815909 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.815923 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
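Each "Node became not ready" entry above prints the exact Ready condition the kubelet is about to write into the Node object's status: type and status plus reason, message, and the heartbeat/transition timestamps. A minimal sketch of decoding that payload, using a hand-rolled struct that stands in for k8s.io/api/core/v1.NodeCondition (the field mapping is an assumption for illustration, not the real API type):

package main

import (
	"encoding/json"
	"fmt"
)

// nodeCondition is a hand-rolled stand-in for the fields of the Node
// Ready condition that appear in the log payload above.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Payload copied verbatim from the "Node became not ready" entry above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:06Z","lastTransitionTime":"2026-01-23T16:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("node Ready=%s reason=%s\n", c.Status, c.Reason)
}

Run against the payload above, this prints "node Ready=False reason=KubeletNotReady", which is exactly what the repeated NodeNotReady events below keep recording.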
Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.815931 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:06Z","lastTransitionTime":"2026-01-23T16:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.919140 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.919474 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.919577 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.919711 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:06 crc kubenswrapper[4606]: I0123 16:54:06.919800 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:06Z","lastTransitionTime":"2026-01-23T16:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.022865 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.022918 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.022929 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.022949 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.022959 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:07Z","lastTransitionTime":"2026-01-23T16:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.125817 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.125930 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.125957 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.125982 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
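Every "Failed to update status for pod" entry earlier in this log dies the same way: the API server must consult the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24 while the node clock reads 2026-01-23, so the TLS handshake fails before any patch is admitted. A minimal sketch of the validity-window check that is failing here, assuming a PEM copy of the webhook certificate in webhook-cert.pem (hypothetical path, for illustration only):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path; the real serving cert lives in the
	// network-node-identity webhook's cert volume.
	data, err := os.ReadFile("webhook-cert.pem")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now()
	// The same window check the TLS handshake enforces: a certificate is
	// only valid between NotBefore and NotAfter.
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("invalid: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
		return
	}
	fmt.Println("certificate is within its validity window")
}

Until that certificate is regenerated (or the clock skew is resolved), every status update routed through the webhook will keep failing with the same x509 error seen above.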
Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.125999 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:07Z","lastTransitionTime":"2026-01-23T16:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.229034 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.229118 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.229143 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.229175 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.229200 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:07Z","lastTransitionTime":"2026-01-23T16:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.331872 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.331902 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.331913 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.331928 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.331940 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:07Z","lastTransitionTime":"2026-01-23T16:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.364933 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 16:54:07 crc kubenswrapper[4606]: E0123 16:54:07.365078 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
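The "No sandbox for pod can be found" / "Error syncing pod" pairs follow directly from the NotReady condition: creating a new sandbox needs a working network plugin, and the runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ contains no CNI configuration file. The component that would write it, ovnkube-controller, is crash-looping on the same expired webhook certificate (see its termination message earlier in this log). A conceptual stand-in for that readiness check, not the real CRI-O/libcni logic, which also validates file contents:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether any CNI configuration file exists in dir.
// A conceptual stand-in for the runtime's network-readiness check.
func hasCNIConfig(dir string) (bool, error) {
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			return false, err
		}
		if len(matches) > 0 {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	dir := "/etc/kubernetes/cni/net.d"
	ok, err := hasCNIConfig(dir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if !ok {
		fmt.Printf("no CNI configuration file in %s. Has your network provider started?\n", dir)
	}
}

On this node the directory is empty, so the check fails on every sync and the pods below stay in ContainerCreating.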
Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.367333 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 16:54:07 crc kubenswrapper[4606]: E0123 16:54:07.371036 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.375861 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 16:54:07 crc kubenswrapper[4606]: E0123 16:54:07.376088 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.379319 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"]
Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.382025 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 23:36:44.70089049 +0000 UTC
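The certificate_manager.go line is client-go's certificate manager reporting on the kubelet-serving certificate: it schedules rotation at a jittered point roughly 70-90% of the way through the certificate's validity window, and here the computed deadline (2025-11-07) already lies behind the node clock (2026-01-23), so rotation is due on the next sync. A sketch of that deadline computation; the notBefore value is an assumption backed out from the logged expiration and deadline, and the exact upstream jitter policy may differ:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a point a jittered 70-90% of the way through the
// certificate's validity window, in the spirit of client-go's certificate
// manager.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jitter := 0.7 + 0.2*rand.Float64() // in [0.7, 0.9)
	return notBefore.Add(time.Duration(float64(total) * jitter))
}

func main() {
	notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC) // assumed issue time
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)  // from the log line above
	deadline := rotationDeadline(notBefore, notAfter)
	// The printed deadline varies per run because of the jitter.
	fmt.Printf("Certificate expiration is %s, rotation deadline is %s\n",
		notAfter, deadline)
}

With a one-year certificate issued 2025-02-24, a jitter factor near 0.70 lands the deadline in early November 2025, which matches the logged value.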
Has your network provider started?"} Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.536036 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.536073 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.536081 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.536094 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.536103 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:07Z","lastTransitionTime":"2026-01-23T16:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.638942 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.638988 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.638999 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.639041 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.639055 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:07Z","lastTransitionTime":"2026-01-23T16:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.742020 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.742067 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.742082 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.742104 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.742120 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:07Z","lastTransitionTime":"2026-01-23T16:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.844718 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.844772 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.844786 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.844806 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.844820 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:07Z","lastTransitionTime":"2026-01-23T16:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.947320 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.947391 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.947415 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.947444 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:07 crc kubenswrapper[4606]: I0123 16:54:07.947464 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:07Z","lastTransitionTime":"2026-01-23T16:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.049656 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.049694 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.049705 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.049720 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.049731 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:08Z","lastTransitionTime":"2026-01-23T16:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.151446 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.151485 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.151495 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.151510 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.151520 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:08Z","lastTransitionTime":"2026-01-23T16:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.254364 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.254533 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.254563 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.254629 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.254655 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:08Z","lastTransitionTime":"2026-01-23T16:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.357649 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.357707 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.357723 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.357746 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.357762 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:08Z","lastTransitionTime":"2026-01-23T16:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.364941 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:54:08 crc kubenswrapper[4606]: E0123 16:54:08.365215 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.377829 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.383000 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 21:19:06.896841013 +0000 UTC Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.460139 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.460204 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.460216 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.460232 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.460243 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:08Z","lastTransitionTime":"2026-01-23T16:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.563018 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.563057 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.563066 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.563079 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.563088 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:08Z","lastTransitionTime":"2026-01-23T16:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.666150 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.666231 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.666245 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.666260 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.666269 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:08Z","lastTransitionTime":"2026-01-23T16:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.769175 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.769232 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.769248 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.769264 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.769275 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:08Z","lastTransitionTime":"2026-01-23T16:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.871847 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.871901 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.871912 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.871925 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.871936 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:08Z","lastTransitionTime":"2026-01-23T16:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.974539 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.974575 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.974599 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.974617 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:08 crc kubenswrapper[4606]: I0123 16:54:08.974626 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:08Z","lastTransitionTime":"2026-01-23T16:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.077205 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.077251 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.077268 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.077290 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.077304 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:09Z","lastTransitionTime":"2026-01-23T16:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.180156 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.180201 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.180217 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.180238 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.180254 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:09Z","lastTransitionTime":"2026-01-23T16:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.284103 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.284150 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.284166 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.284184 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.284196 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:09Z","lastTransitionTime":"2026-01-23T16:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.365571 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.365722 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:54:09 crc kubenswrapper[4606]: E0123 16:54:09.365743 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:54:09 crc kubenswrapper[4606]: E0123 16:54:09.365884 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.365901 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:54:09 crc kubenswrapper[4606]: E0123 16:54:09.366438 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.367014 4606 scope.go:117] "RemoveContainer" containerID="5f5458ba33d5254918513e4394f86cbedb2092b552949a066a59b263cb92bb79" Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.383544 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 02:21:04.761737764 +0000 UTC Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.387000 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.387040 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.387065 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.387088 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.387106 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:09Z","lastTransitionTime":"2026-01-23T16:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.492229 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.492295 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.492339 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.492376 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.492403 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:09Z","lastTransitionTime":"2026-01-23T16:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.595265 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.595285 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.595292 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.595306 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.595316 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:09Z","lastTransitionTime":"2026-01-23T16:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.697938 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.698008 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.698020 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.698057 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.698069 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:09Z","lastTransitionTime":"2026-01-23T16:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.800687 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.800733 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.800741 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.800772 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.800782 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:09Z","lastTransitionTime":"2026-01-23T16:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.902825 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.902866 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.902877 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.902912 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:09 crc kubenswrapper[4606]: I0123 16:54:09.902925 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:09Z","lastTransitionTime":"2026-01-23T16:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.004964 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.005002 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.005010 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.005024 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.005033 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:10Z","lastTransitionTime":"2026-01-23T16:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.041072 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j9tqd_a0f37a36-f502-425c-a9e5-035bf475742b/ovnkube-controller/2.log" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.043602 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" event={"ID":"a0f37a36-f502-425c-a9e5-035bf475742b","Type":"ContainerStarted","Data":"7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8"} Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.043992 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.067373 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19103493-8702-4122-9a81-136edb70d5d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e337358163bb5cedf5cfffdc2242c893467a7b1a84c375ce484384c25bcebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a20bc4ad715cef44b4f6d990a9a46abe569139af760cd8e3fe37a4372b7c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cer
ts\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1fd297902bda7230b9dad3475cae4d39dca1101f4ef2a4141d84ddea1cb0a67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e90b17e922b85243d8fdf76ec4f62a55261da6f90403c2693677984f71ef76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4af49e939899edf31373245c5f88b0d777b89d9d3f72d712d9cbbb562d0db745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://799492280bfbab8fb0517ef31344110dd0bf6c6f9d094af7d6fa3b8593fd9eea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://799492280bfbab8fb0517ef31344110dd0bf6c6f9d094af7d6fa3b8593fd9eea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02319858c43d679416d466fd3985679703b5991cce323851add570c2b34d56a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02319858c43d679416d466fd3985679703b5991cce323851add570c2b34d56a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9692da94421a68c6b4d5ca442a917d639f233c8dd3ea5099aae555469f334c2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9692da94421a68c6b4d5ca442a917d639f233c8dd3ea5099aae555469f334c2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.079564 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f6002ad-cfb9-4ace-a3af-69d16e0a930d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcd9f0d6640f01bb9dd01eb0ae616c15475af5f5e4a8bf8d83f32950d92b5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://960decb6796863d62d6e8bbedddee45f9a1480ce1f7dc9cd17dd07a57837cae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d26f11973f8bbb904852bca424c7bcb04d9854f66290276e05ceed79aae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea981139b54887528cc0d397f970a88a6107e5176e9df8cb8a2f2137738f8057\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea981139b54887528cc0d397f970a88a6107e5176e9df8cb8a2f2137738f8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.093055 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb934e2-af1b-473c-ae3a-061098141a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76dd35929bfd277c644748bddc174d64f5fda7a08c0eb6b3c879c62d7d58ef39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55898c0677b92ca8dcd4e764ab3cf1c4b605a057574f47ddf2820773938458c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nntkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.106360 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.107445 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.107490 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.107499 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.107513 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.107521 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:10Z","lastTransitionTime":"2026-01-23T16:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.119884 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01472aab8c10837cc0e6f6952de409f201741170c85dd371b3148908a2b2dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:
53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.130104 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wc86k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa611a0a-e12f-403c-8cd6-666972207315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2cf4579ce544ab371e72864ef70d710c1323fe6ec7707b79d6aec19ee21eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xlsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wc86k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.143220 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\
\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.155157 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1c017382ddcc3e55e81de6fed924558f181ebf81f3e1cdf9323f5c7e89401f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:53:59Z\\\",\\\"message\\\":\\\"2026-01-23T16:53:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_550d174a-1f43-4687-9028-f98e8740880a\\\\n2026-01-23T16:53:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_550d174a-1f43-4687-9028-f98e8740880a to /host/opt/cni/bin/\\\\n2026-01-23T16:53:14Z [verbose] multus-daemon started\\\\n2026-01-23T16:53:14Z [verbose] Readiness Indicator file check\\\\n2026-01-23T16:53:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.165250 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abf87f8-d984-4fed-bc33-adcff49d2fb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e945ec452bb84c6d6836d023451cb4b4ac41dfc3dd5ae4bf872ad0ed52c91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abd0eb5bd02acec896a810cd8d81345f15c22fdfd133e5a62742d708f34cdfe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abd0eb5bd02acec896a810cd8d81345f15c22fdfd133e5a62742d708f34cdfe0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.177621 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.189528 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06af028d-1458-4d49-8cd4-dccbc84a469c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdd354c5bc11778b8c407aa7c9f996ee4c431839d6823dcdad92b8855d2a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c64d435f1b147fd7d914909988065557c2bd3309d0565391fe788a67f2d0de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7894a11c4c9859559472fbae0ec0fb74afbc11e3580e50c3d5c1465f7bb3177b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.201952 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.209733 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.209772 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.209783 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.209800 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.209812 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:10Z","lastTransitionTime":"2026-01-23T16:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.213998 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.228818 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846abd3645d34fa7f1d6227fdd26fd48a7df9a6cbc57b85790b9e7e444041b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.249160 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f5fa59a84706d708509334251d7e40206df0007
e5cb94d73146a025ea2e2af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f5458ba33d5254918513e4394f86cbedb2092b552949a066a59b263cb92bb79\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:53:41Z\\\",\\\"message\\\":\\\":map[10.217.4.233:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1dc899db-4498-4b7a-8437-861940b962e7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0123 16:53:41.211721 6141 services_controller.go:452] Built service openshift-marketplace/redhat-marketplace per-node LB for network=default: []services.LB{}\\\\nF0123 16:53:41.211731 6141 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:41Z is after 2025-08-24T17:21:41Z]\\\\nI0123 16:53:41.211740 6141 services_controller.go:453] Built service openshift-marketplace/redhat-marketplace template LB for network=default: 
[]\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.262356 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qqwhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae240c5-8926-4fc6-b0d9-fdf64bacecfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcnkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcnkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qqwhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.276117 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.287772 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.299226 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:10Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.311873 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.311908 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.311920 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.311934 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.311947 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:10Z","lastTransitionTime":"2026-01-23T16:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.365522 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:54:10 crc kubenswrapper[4606]: E0123 16:54:10.365663 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.384715 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 14:06:48.736938429 +0000 UTC Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.414836 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.414873 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.414884 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.414899 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.414912 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:10Z","lastTransitionTime":"2026-01-23T16:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.518070 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.518128 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.518143 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.518163 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.518178 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:10Z","lastTransitionTime":"2026-01-23T16:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.620733 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.620774 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.620787 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.620802 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.620814 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:10Z","lastTransitionTime":"2026-01-23T16:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.723544 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.723613 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.723623 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.723638 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.723647 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:10Z","lastTransitionTime":"2026-01-23T16:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.825928 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.826166 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.826177 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.826191 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.826203 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:10Z","lastTransitionTime":"2026-01-23T16:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.929860 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.929931 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.929952 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.929983 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:10 crc kubenswrapper[4606]: I0123 16:54:10.930004 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:10Z","lastTransitionTime":"2026-01-23T16:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.032268 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.032318 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.032334 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.032358 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.032380 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:11Z","lastTransitionTime":"2026-01-23T16:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.050402 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j9tqd_a0f37a36-f502-425c-a9e5-035bf475742b/ovnkube-controller/3.log" Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.051036 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j9tqd_a0f37a36-f502-425c-a9e5-035bf475742b/ovnkube-controller/2.log" Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.054413 4606 generic.go:334] "Generic (PLEG): container finished" podID="a0f37a36-f502-425c-a9e5-035bf475742b" containerID="7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8" exitCode=1 Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.054448 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" event={"ID":"a0f37a36-f502-425c-a9e5-035bf475742b","Type":"ContainerDied","Data":"7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8"} Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.054487 4606 scope.go:117] "RemoveContainer" containerID="5f5458ba33d5254918513e4394f86cbedb2092b552949a066a59b263cb92bb79" Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.055218 4606 scope.go:117] "RemoveContainer" containerID="7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8" Jan 23 16:54:11 crc kubenswrapper[4606]: E0123 16:54:11.055439 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-j9tqd_openshift-ovn-kubernetes(a0f37a36-f502-425c-a9e5-035bf475742b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.081989 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19103493-8702-4122-9a81-136edb70d5d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e337358163bb5cedf5cfffdc2242c893467a7b1a84c375ce484384c25bcebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a20bc4ad715cef44b4f6d990a9a46abe569139af760cd8e3fe37a4372b7c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1fd297902bda7230b9dad3475cae4d39dca1101f4ef2a4141d84ddea1cb0a67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e90b17e922b85243d8fdf76ec4f62a55261da6
f90403c2693677984f71ef76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4af49e939899edf31373245c5f88b0d777b89d9d3f72d712d9cbbb562d0db745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://799492280bfbab8fb0517ef31344110dd0bf6c6f9d094af7d6fa3b8593fd9eea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://799492280bfbab8fb0517ef31344110dd0bf6c6f9d094af7d6fa3b8593fd9eea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02319858c43d679416d466fd3985679703b5991cce323851add570c2b34d56a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02319858c43d679416d466fd3985679703b5991cce323851add570c2b34d56a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9692da94421a68c6b4d5ca442a917d639f233c8dd3ea5099aae555469f334c2e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9692da94421a68c6b4d5ca442a917d639f233c8dd3ea5099aae555469f334c2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.094322 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f6002ad-cfb9-4ace-a3af-69d16e0a930d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcd9f0d6640f01bb9dd01eb0ae616c15475af5f5e4a8bf8d83f32950d92b5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://960decb6796863d62d6e8bbedddee45f9a1480ce1f7dc9cd17dd07a57837cae2\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d26f11973f8bbb904852bca424c7bcb04d9854f66290276e05ceed79aae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea981139b54887528cc0d397f970a88a6107e5176e9df8cb8a2f2137738f8057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea981139b54887528cc0d397f970a88a6107e5176e9df8cb8a2f2137738f8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.108957 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb934e2-af1b-473c-ae3a-061098141a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76dd35929bfd277c644748bddc174d64f5fda7a08c0eb6b3c879c62d7d58ef39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55898c0677b92ca8dcd4e764ab3cf1c4b605a057574f47ddf2820773938458c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nntkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:11Z is after 2025-08-24T17:21:41Z" Jan 23 
16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.125735 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.137082 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.137129 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.137141 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.137157 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.137168 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:11Z","lastTransitionTime":"2026-01-23T16:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.143310 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01472aab8c10837cc0e6f6952de409f201741170c85dd371b3148908a2b2dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.156193 4606 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-wc86k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa611a0a-e12f-403c-8cd6-666972207315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2cf4579ce544ab371e72864ef70d710c1323fe6ec7707b79d6aec19ee21eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xlsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wc86k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.168499 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.179565 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846abd3645d34fa7f1d6227fdd26fd48a7df9a6cbc57b85790b9e7e444041b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.190799 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.205530 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1c017382ddcc3e55e81de6fed924558f181ebf81f3e1cdf9323f5c7e89401f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:53:59Z\\\",\\\"message\\\":\\\"2026-01-23T16:53:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_550d174a-1f43-4687-9028-f98e8740880a\\\\n2026-01-23T16:53:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_550d174a-1f43-4687-9028-f98e8740880a to /host/opt/cni/bin/\\\\n2026-01-23T16:53:14Z [verbose] multus-daemon started\\\\n2026-01-23T16:53:14Z [verbose] Readiness Indicator file check\\\\n2026-01-23T16:53:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.221280 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abf87f8-d984-4fed-bc33-adcff49d2fb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e945ec452bb84c6d6836d023451cb4b4ac41dfc3dd5ae4bf872ad0ed52c91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abd0eb5bd02acec896a810cd8d81345f15c22fdfd133e5a62742d708f34cdfe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abd0eb5bd02acec896a810cd8d81345f15c22fdfd133e5a62742d708f34cdfe0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.239180 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.239846 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.239885 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.239896 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.239915 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.239927 4606 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:11Z","lastTransitionTime":"2026-01-23T16:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.253236 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06af028d-1458-4d49-8cd4-dccbc84a469c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdd354c5bc11778b8c407aa7c9f996ee4c431839d6823dcdad92b8855d2a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c64d435f1b147fd7d914909988065557c2bd3309d0565391fe788a67f2d0de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7894a11c4c9859559472fbae0ec0fb74afbc11e3580e50c3d5c1465f7bb3177b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.265909 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.283556 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f5fa59a84706d708509334251d7e40206df0007
e5cb94d73146a025ea2e2af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f5458ba33d5254918513e4394f86cbedb2092b552949a066a59b263cb92bb79\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:53:41Z\\\",\\\"message\\\":\\\":map[10.217.4.233:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1dc899db-4498-4b7a-8437-861940b962e7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0123 16:53:41.211721 6141 services_controller.go:452] Built service openshift-marketplace/redhat-marketplace per-node LB for network=default: []services.LB{}\\\\nF0123 16:53:41.211731 6141 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:53:41Z is after 2025-08-24T17:21:41Z]\\\\nI0123 16:53:41.211740 6141 services_controller.go:453] Built service openshift-marketplace/redhat-marketplace template LB for network=default: []\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:54:10Z\\\",\\\"message\\\":\\\"ns:[]Condition{},},}\\\\nI0123 16:54:10.291273 6540 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0123 16:54:10.291652 6540 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-qqwhl\\\\nI0123 16:54:10.291655 6540 lb_config.go:1031] Cluster endpoints for openshift-config-operator/metrics for network=default are: map[]\\\\nI0123 16:54:10.291660 6540 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-multus/network-metrics-daemon-qqwhl\\\\nI0123 16:54:10.291667 6540 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-qqwhl in node crc\\\\nF0123 16:54:10.291298 6540 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.295013 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qqwhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae240c5-8926-4fc6-b0d9-fdf64bacecfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcnkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcnkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qqwhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.308805 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.320883 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.331424 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:11Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.342034 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.342074 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.342083 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.342097 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.342105 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:11Z","lastTransitionTime":"2026-01-23T16:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.364815 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 16:54:11 crc kubenswrapper[4606]: E0123 16:54:11.364939 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.365098 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 16:54:11 crc kubenswrapper[4606]: E0123 16:54:11.365140 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.365235 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 16:54:11 crc kubenswrapper[4606]: E0123 16:54:11.365285 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.374329 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.374413 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.374453 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.374476 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.374491 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 16:54:11 crc kubenswrapper[4606]: E0123 16:54:11.374636 4606 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 23 16:54:11 crc kubenswrapper[4606]: E0123 16:54:11.374654 4606 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 23 16:54:11 crc kubenswrapper[4606]: E0123 16:54:11.374664 4606 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 16:54:11 crc kubenswrapper[4606]: E0123 16:54:11.374704 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 16:55:15.374692119 +0000 UTC m=+149.374884525 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 16:54:11 crc kubenswrapper[4606]: E0123 16:54:11.374853 4606 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 23 16:54:11 crc kubenswrapper[4606]: E0123 16:54:11.374910 4606 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 23 16:54:11 crc kubenswrapper[4606]: E0123 16:54:11.374852 4606 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 23 16:54:11 crc kubenswrapper[4606]: E0123 16:54:11.374990 4606 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 23 16:54:11 crc kubenswrapper[4606]: E0123 16:54:11.375006 4606 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 16:54:11 crc kubenswrapper[4606]: E0123 16:54:11.374905 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:15.374871895 +0000 UTC m=+149.375064301 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:54:11 crc kubenswrapper[4606]: E0123 16:54:11.375034 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 16:55:15.375024929 +0000 UTC m=+149.375217335 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 23 16:54:11 crc kubenswrapper[4606]: E0123 16:54:11.375050 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 16:55:15.375041859 +0000 UTC m=+149.375234395 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 16:54:11 crc kubenswrapper[4606]: E0123 16:54:11.375073 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 16:55:15.37506388 +0000 UTC m=+149.375256416 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.385362 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 15:27:31.241138553 +0000 UTC
Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.443829 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.443871 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.443883 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.443898 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.443911 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:11Z","lastTransitionTime":"2026-01-23T16:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.546634 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.546706 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.546731 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.546754 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.546772 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:11Z","lastTransitionTime":"2026-01-23T16:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.648891 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.648927 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.648938 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.648953 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.648964 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:11Z","lastTransitionTime":"2026-01-23T16:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.751525 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.751550 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.751559 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.751571 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.751579 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:11Z","lastTransitionTime":"2026-01-23T16:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.854411 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.854444 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.854455 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.854469 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.854481 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:11Z","lastTransitionTime":"2026-01-23T16:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.957228 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.957267 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.957277 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.957291 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:11 crc kubenswrapper[4606]: I0123 16:54:11.957299 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:11Z","lastTransitionTime":"2026-01-23T16:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.058737 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.058880 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.058898 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.058920 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.058937 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:12Z","lastTransitionTime":"2026-01-23T16:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.059994 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j9tqd_a0f37a36-f502-425c-a9e5-035bf475742b/ovnkube-controller/3.log" Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.063508 4606 scope.go:117] "RemoveContainer" containerID="7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8" Jan 23 16:54:12 crc kubenswrapper[4606]: E0123 16:54:12.063724 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-j9tqd_openshift-ovn-kubernetes(a0f37a36-f502-425c-a9e5-035bf475742b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.074836 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abf87f8-d984-4fed-bc33-adcff49d2fb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e945ec452bb84c6d6836d023451cb4b4ac41dfc3dd5ae4bf872ad0ed52c91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abd0eb5bd02acec896a810cd8d81345f15c22fdfd133e5a62742d708f34cdfe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abd0eb5bd02acec896a810cd8d81345f15c22fdfd133e5a62742d708f34cdfe0\\\",\\\"exitCode\\\":0,\
\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.090350 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.102075 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06af028d-1458-4d49-8cd4-dccbc84a469c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdd354c5bc11778b8c407aa7c9f996ee4c431839d6823dcdad92b8855d2a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c64d435f1b147fd7d914909988065557c2bd3309d0565391fe788a67f2d0de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7894a11c4c9859559472fbae0ec0fb74afbc11e3580e50c3d5c1465f7bb3177b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.114978 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.127195 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.139031 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846abd3645d34fa7f1d6227fdd26fd48a7df9a6cbc57b85790b9e7e444041b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.149705 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.161442 4606 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.161474 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.161482 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.161494 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.161502 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:12Z","lastTransitionTime":"2026-01-23T16:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.167452 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1c017382ddcc3e55e81de6fed924558f181ebf81f3e1cdf9323f5c7e89401f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:53:59Z\\\",\\\"message\\\":\\\"2026-01-23T16:53:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_550d174a-1f43-4687-9028-f98e8740880a\\\\n2026-01-23T16:53:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_550d174a-1f43-4687-9028-f98e8740880a to /host/opt/cni/bin/\\\\n2026-01-23T16:53:14Z [verbose] multus-daemon started\\\\n2026-01-23T16:53:14Z [verbose] Readiness Indicator file check\\\\n2026-01-23T16:53:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.185317 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:54:10Z\\\",\\\"message\\\":\\\"ns:[]Condition{},},}\\\\nI0123 16:54:10.291273 6540 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0123 16:54:10.291652 6540 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-qqwhl\\\\nI0123 16:54:10.291655 6540 lb_config.go:1031] Cluster endpoints for openshift-config-operator/metrics for network=default are: map[]\\\\nI0123 16:54:10.291660 6540 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-qqwhl\\\\nI0123 16:54:10.291667 6540 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-qqwhl in node crc\\\\nF0123 16:54:10.291298 6540 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:54:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-j9tqd_openshift-ovn-kubernetes(a0f37a36-f502-425c-a9e5-035bf475742b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.194539 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qqwhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae240c5-8926-4fc6-b0d9-fdf64bacecfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcnkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcnkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qqwhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.206529 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.216726 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.225449 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.245329 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19103493-8702-4122-9a81-136edb70d5d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e337358163bb5cedf5cfffdc2242c893467a7b1a84c375ce484384c25bcebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a20bc4ad715cef44b4f6d990a9a46abe569139af760cd8e3fe37a4372b7c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1fd297902bda7230b9dad3475cae4d39dca1101f4ef2a4141d84ddea1cb0a67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e90b17e922b85243d8fdf76ec4f62a55261da6
f90403c2693677984f71ef76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4af49e939899edf31373245c5f88b0d777b89d9d3f72d712d9cbbb562d0db745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://799492280bfbab8fb0517ef31344110dd0bf6c6f9d094af7d6fa3b8593fd9eea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://799492280bfbab8fb0517ef31344110dd0bf6c6f9d094af7d6fa3b8593fd9eea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02319858c43d679416d466fd3985679703b5991cce323851add570c2b34d56a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02319858c43d679416d466fd3985679703b5991cce323851add570c2b34d56a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9692da94421a68c6b4d5ca442a917d639f233c8dd3ea5099aae555469f334c2e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9692da94421a68c6b4d5ca442a917d639f233c8dd3ea5099aae555469f334c2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.255834 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f6002ad-cfb9-4ace-a3af-69d16e0a930d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcd9f0d6640f01bb9dd01eb0ae616c15475af5f5e4a8bf8d83f32950d92b5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://960decb6796863d62d6e8bbedddee45f9a1480ce1f7dc9cd17dd07a57837cae2\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d26f11973f8bbb904852bca424c7bcb04d9854f66290276e05ceed79aae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea981139b54887528cc0d397f970a88a6107e5176e9df8cb8a2f2137738f8057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea981139b54887528cc0d397f970a88a6107e5176e9df8cb8a2f2137738f8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.264093 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.264121 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.264131 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 
16:54:12.264143 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.264153 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:12Z","lastTransitionTime":"2026-01-23T16:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.270224 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb934e2-af1b-473c-ae3a-061098141a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76dd35929bfd277c644748bddc174d64f5fda7a08c0eb6b3c879c62d7d58ef39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55898c0677b92ca8dcd4e764ab3cf1c4b605a057574f47ddf2820773938458c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountP
ath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nntkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.282809 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.294083 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01472aab8c10837cc0e6f6952de409f201741170c85dd371b3148908a2b2dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.302507 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wc86k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa611a0a-e12f-403c-8cd6-666972207315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2cf4579ce544ab371e72864ef70d710c1323fe6ec7707b79d6aec19ee21eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xlsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wc86k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:12Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.365307 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:54:12 crc kubenswrapper[4606]: E0123 16:54:12.365928 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe"
Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.366446 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.366497 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.366533 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.366565 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.366635 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:12Z","lastTransitionTime":"2026-01-23T16:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.386757 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 21:51:25.6609749 +0000 UTC
Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.468939 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.468975 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.468985 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.468997 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.469006 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:12Z","lastTransitionTime":"2026-01-23T16:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
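
[annotation] The pod-status patch failures above all come down to one TLS handshake step: Go's crypto/x509 rejects the webhook's serving certificate because the wall clock (2026-01-23T16:54:12Z) is past the certificate's NotAfter (2025-08-24T17:21:41Z). A minimal sketch of just that validity check, separate from the rest of the handshake; the PEM path is a placeholder for illustration, not a path taken from this log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Placeholder path; point this at the webhook's serving certificate.
	data, err := os.ReadFile("/tmp/webhook-serving.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now()
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("certificate not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
	case now.After(cert.NotAfter):
		// This is the branch the kubelet is hitting above:
		// "x509: certificate has expired or is not yet valid".
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	default:
		fmt.Println("certificate is within its validity window")
	}
}

Until that certificate is rotated (or the clock skew is resolved), every status patch routed through the pod.network-node-identity.openshift.io webhook will keep failing the same way.
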
Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.570721 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.570968 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.571053 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.571169 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.571250 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:12Z","lastTransitionTime":"2026-01-23T16:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.673425 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.673716 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.673866 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.674013 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.674141 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:12Z","lastTransitionTime":"2026-01-23T16:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.777699 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.777746 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.777760 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.777779 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.777792 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:12Z","lastTransitionTime":"2026-01-23T16:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.880430 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.880679 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.880690 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.880707 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.880718 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:12Z","lastTransitionTime":"2026-01-23T16:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.983709 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.983749 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.983763 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.983784 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:12 crc kubenswrapper[4606]: I0123 16:54:12.983798 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:12Z","lastTransitionTime":"2026-01-23T16:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.085662 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.085709 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.085721 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.085737 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.085748 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:13Z","lastTransitionTime":"2026-01-23T16:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.187824 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.187859 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.187868 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.187883 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.187894 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:13Z","lastTransitionTime":"2026-01-23T16:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.290451 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.290692 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.290800 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.290865 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.290931 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:13Z","lastTransitionTime":"2026-01-23T16:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.365697 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.365697 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.365888 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:54:13 crc kubenswrapper[4606]: E0123 16:54:13.365978 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 16:54:13 crc kubenswrapper[4606]: E0123 16:54:13.366052 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 16:54:13 crc kubenswrapper[4606]: E0123 16:54:13.366212 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.387759 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 23:40:44.843088639 +0000 UTC
Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.393747 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.393958 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.394705 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.394740 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.394752 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:13Z","lastTransitionTime":"2026-01-23T16:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
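
[annotation] Note that the certificate_manager.go:356 records report the same fixed expiration (2026-02-24 05:53:03 UTC) but a different rotation deadline on every pass (2025-12-07, then 2025-12-21, then 2025-12-31 below). That pattern is consistent with client-go's certificate manager re-drawing a jittered deadline somewhere in the later part of the certificate's lifetime each time it evaluates; the 70%-90% window in this sketch is an assumption about the upstream default, not something stated in this log, and the one-year lifetime is likewise assumed:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a uniformly random point between 70% and 90% of
// the certificate's total lifetime. Assumed fractions, for illustration.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter, _ := time.Parse("2006-01-02 15:04:05", "2026-02-24 05:53:03")
	notBefore := notAfter.Add(-365 * 24 * time.Hour) // assumed one-year lifetime
	for i := 0; i < 3; i++ {
		// Each evaluation yields a different deadline, matching the way
		// the logged "rotation deadline is ..." value drifts run to run.
		fmt.Println(rotationDeadline(notBefore, notAfter))
	}
}
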
Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.497343 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.497379 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.497387 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.497418 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.497427 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:13Z","lastTransitionTime":"2026-01-23T16:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.600411 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.600715 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.600779 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.600840 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.600910 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:13Z","lastTransitionTime":"2026-01-23T16:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.702436 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.702466 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.702475 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.702489 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.702500 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:13Z","lastTransitionTime":"2026-01-23T16:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.805489 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.805536 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.805553 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.805576 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.805625 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:13Z","lastTransitionTime":"2026-01-23T16:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.907955 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.908200 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.908309 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.908431 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:13 crc kubenswrapper[4606]: I0123 16:54:13.908515 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:13Z","lastTransitionTime":"2026-01-23T16:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.010606 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.010849 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.010911 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.010974 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.011027 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:14Z","lastTransitionTime":"2026-01-23T16:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.113350 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.113406 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.113425 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.113449 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.113471 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:14Z","lastTransitionTime":"2026-01-23T16:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.215464 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.215741 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.215826 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.215907 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.215991 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:14Z","lastTransitionTime":"2026-01-23T16:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.319228 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.319740 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.319952 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.320134 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.320324 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:14Z","lastTransitionTime":"2026-01-23T16:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.365120 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl"
Jan 23 16:54:14 crc kubenswrapper[4606]: E0123 16:54:14.365298 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe"
Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.388207 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 17:08:34.717475814 +0000 UTC
Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.423476 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.423541 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.423563 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.423666 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.423695 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:14Z","lastTransitionTime":"2026-01-23T16:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.526378 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.526428 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.526465 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.526485 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.526497 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:14Z","lastTransitionTime":"2026-01-23T16:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
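
[annotation] Every NetworkReady=false record in this stretch reduces to the same filesystem fact: nothing that looks like a CNI network configuration exists under /etc/kubernetes/cni/net.d/, so the runtime reports the network plugin as not ready and pod sandbox creation is deferred. A rough illustration of the kind of presence check involved; this is a sketch of the idea, not CRI-O's actual implementation, and real runtimes also parse and validate whatever files they find:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains at least one plausible CNI
// network configuration file (.conf, .conflist or .json) — roughly the
// condition behind the NetworkReady status. Illustrative only.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil {
		fmt.Println("cannot read CNI config dir:", err)
		return
	}
	if !ok {
		// Mirrors the kubelet's message: "no CNI configuration file in
		// /etc/kubernetes/cni/net.d/. Has your network provider started?"
		fmt.Println("no CNI configuration file found; network not ready")
	}
}

The directory stays empty here because the network operator's pods are themselves blocked behind the expired webhook certificate, which is why the node keeps flapping between recording readiness events and reporting KubeletNotReady.
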
Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.632477 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.632533 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.632550 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.632572 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.632677 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:14Z","lastTransitionTime":"2026-01-23T16:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.734299 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.734446 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.734613 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.734769 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.734910 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:14Z","lastTransitionTime":"2026-01-23T16:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.844160 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.844218 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.844237 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.844258 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.844276 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:14Z","lastTransitionTime":"2026-01-23T16:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.947749 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.947786 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.947794 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.947807 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:14 crc kubenswrapper[4606]: I0123 16:54:14.947817 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:14Z","lastTransitionTime":"2026-01-23T16:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.050367 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.050429 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.050448 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.050472 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.050489 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:15Z","lastTransitionTime":"2026-01-23T16:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.153681 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.153744 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.153759 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.153777 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.153808 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:15Z","lastTransitionTime":"2026-01-23T16:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.214864 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.214921 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.214936 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.214956 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.214968 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:15Z","lastTransitionTime":"2026-01-23T16:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:15 crc kubenswrapper[4606]: E0123 16:54:15.231388 4606 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a4ca771-bbec-40fb-95a6-5cc937c9c7f8\\\",\\\"systemUUID\\\":\\\"843df5bf-5299-4274-8c79-9400b122a934\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:15Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.236303 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.236346 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.236358 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.236375 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.236387 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:15Z","lastTransitionTime":"2026-01-23T16:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:15 crc kubenswrapper[4606]: E0123 16:54:15.254552 4606 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a4ca771-bbec-40fb-95a6-5cc937c9c7f8\\\",\\\"systemUUID\\\":\\\"843df5bf-5299-4274-8c79-9400b122a934\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:15Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.259527 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.259634 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.259653 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.259705 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.259720 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:15Z","lastTransitionTime":"2026-01-23T16:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:15 crc kubenswrapper[4606]: E0123 16:54:15.278229 4606 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a4ca771-bbec-40fb-95a6-5cc937c9c7f8\\\",\\\"systemUUID\\\":\\\"843df5bf-5299-4274-8c79-9400b122a934\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:15Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.282898 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.282969 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.282995 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.283027 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.283051 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:15Z","lastTransitionTime":"2026-01-23T16:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:15 crc kubenswrapper[4606]: E0123 16:54:15.301930 4606 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a4ca771-bbec-40fb-95a6-5cc937c9c7f8\\\",\\\"systemUUID\\\":\\\"843df5bf-5299-4274-8c79-9400b122a934\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:15Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.306719 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.306762 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.306770 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.306785 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.306796 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:15Z","lastTransitionTime":"2026-01-23T16:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:15 crc kubenswrapper[4606]: E0123 16:54:15.324333 4606 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a4ca771-bbec-40fb-95a6-5cc937c9c7f8\\\",\\\"systemUUID\\\":\\\"843df5bf-5299-4274-8c79-9400b122a934\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:15Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:15 crc kubenswrapper[4606]: E0123 16:54:15.324579 4606 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.326491 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.326557 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.326577 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.326646 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.326671 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:15Z","lastTransitionTime":"2026-01-23T16:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.364935 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.364940 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:54:15 crc kubenswrapper[4606]: E0123 16:54:15.365151 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.364941 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:54:15 crc kubenswrapper[4606]: E0123 16:54:15.365291 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:54:15 crc kubenswrapper[4606]: E0123 16:54:15.365359 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.389269 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 06:20:51.33196523 +0000 UTC Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.429082 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.429297 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.429435 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.429539 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.429688 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:15Z","lastTransitionTime":"2026-01-23T16:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.532328 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.532395 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.532414 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.532439 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.532457 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:15Z","lastTransitionTime":"2026-01-23T16:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.635298 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.635564 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.635694 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.635817 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.635905 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:15Z","lastTransitionTime":"2026-01-23T16:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.738893 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.738940 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.738951 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.738967 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.738979 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:15Z","lastTransitionTime":"2026-01-23T16:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.842410 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.842457 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.842467 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.842484 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.842493 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:15Z","lastTransitionTime":"2026-01-23T16:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.945342 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.945408 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.945424 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.945447 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:15 crc kubenswrapper[4606]: I0123 16:54:15.945465 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:15Z","lastTransitionTime":"2026-01-23T16:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.048017 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.048087 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.048108 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.048132 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.048152 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:16Z","lastTransitionTime":"2026-01-23T16:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.150323 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.150356 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.150365 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.150378 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.150386 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:16Z","lastTransitionTime":"2026-01-23T16:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.253215 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.253280 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.253288 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.253301 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.253310 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:16Z","lastTransitionTime":"2026-01-23T16:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.355987 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.356024 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.356035 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.356055 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.356065 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:16Z","lastTransitionTime":"2026-01-23T16:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.365978 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:54:16 crc kubenswrapper[4606]: E0123 16:54:16.366275 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.376704 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.390144 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 01:15:17.200323986 +0000 UTC Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.392216 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.403876 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.415015 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb934e2-af1b-473c-ae3a-061098141a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76dd35929bfd277c644748bddc174d64f5fda7a08c0eb6b3c879c62d7d58ef39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55898c0677b92ca8dcd4e764ab3cf1c4b605a057574f47ddf2820773938458c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nntkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:16Z is after 2025-08-24T17:21:41Z" Jan 23 
16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.437783 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19103493-8702-4122-9a81-136edb70d5d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e337358163bb5cedf5cfffdc2242c893467a7b1a84c375ce484384c25bcebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a20bc4ad715cef44b4f6d990a9a46abe569139af760cd8e3fe37a4372b7c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1fd297902bda7230b9dad3475cae4d39dca1101f4ef2a4141d84ddea1cb0a67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e90b17e922b85243d8fdf76ec4f62a55261da6f90403c2693677984f71ef76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4af49e939899edf31373245c5f88b0d777b89d9d3f72d712d9cbbb562d0db745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://799492280bfbab8fb0517ef31344110dd0bf6c6f9d094af7d6fa3b8593fd9eea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://799492280bfbab8fb0517ef31344110dd0bf6c6f9d094af7d6fa3b8593fd9eea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02319858c43d679416d466fd3985679703b5991cce323851add570c2b34d56a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02319858c43d679416d466fd3985679703b5991cce323851add570c2b34d56a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9692da94421a68c6b4d5ca442a917d639f233c8dd3ea5099aae555469f334c2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9692da94421a68c6b4d5ca442a917d639f233c8dd3ea5099aae555469f334c2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.447804 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f6002ad-cfb9-4ace-a3af-69d16e0a930d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcd9f0d6640f01bb9dd01eb0ae616c15475af5f5e4a8bf8d83f32950d92b5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://960decb6796863d62d6e8bbedddee45f9a1480ce1f7dc9cd17dd07a57837cae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d26f11973f8bbb904852bca424c7bcb04d9854f66290276e05ceed79aae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea981139b54887528cc0d397f970a88a6107e5176e9df8cb8a2f2137738f8057\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea981139b54887528cc0d397f970a88a6107e5176e9df8cb8a2f2137738f8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.456840 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wc86k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa611a0a-e12f-403c-8cd6-666972207315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2cf4579ce544ab371e72864ef70d710c1323fe6ec7707b79d6aec19ee21eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xlsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}]
,\\\"startTime\\\":\\\"2026-01-23T16:53:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wc86k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.459204 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.459229 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.459238 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.459252 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.459260 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:16Z","lastTransitionTime":"2026-01-23T16:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.469225 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.481569 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01472aab8c10837cc0e6f6952de409f201741170c85dd371b3148908a2b2dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"c
ontainerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.494452 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06af028d-1458-4d49-8cd4-dccbc84a469c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdd354c5bc11778b8c407aa7c9f996ee4c431839d6823dcdad92b8855d2a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c64d435f1b147fd7d914909988065557c2bd3309d0565391fe788a67f2d0de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"sta
rtedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7894a11c4c9859559472fbae0ec0fb74afbc11e3580e50c3d5c1465f7bb3177b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.508085 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.521319 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.534455 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846abd3645d34fa7f1d6227fdd26fd48a7df9a6cbc57b85790b9e7e444041b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.547657 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.561725 4606 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1c017382ddcc3e55e81de6fed924558f181ebf81f3e1cdf9323f5c7e89401f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:53:59Z\\\",\\\"message\\\":\\\"2026-01-23T16:53:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_550d174a-1f43-4687-9028-f98e8740880a\\\\n2026-01-23T16:53:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_550d174a-1f43-4687-9028-f98e8740880a to /host/opt/cni/bin/\\\\n2026-01-23T16:53:14Z [verbose] multus-daemon started\\\\n2026-01-23T16:53:14Z [verbose] Readiness Indicator file check\\\\n2026-01-23T16:53:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.562176 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.562214 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.562223 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.562237 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.562248 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:16Z","lastTransitionTime":"2026-01-23T16:54:16Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.571934 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abf87f8-d984-4fed-bc33-adcff49d2fb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e945ec452bb84c6d6836d023451cb4b4ac41dfc3dd5ae4bf872ad0ed52c91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abd0eb5bd02acec896a810cd8d81345f15c22fdfd133e5a62742d708f34cdfe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abd0eb5bd02acec896a810cd8d81345f15c22fdfd133e5a62742d708f34cdfe0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T16:54:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.584710 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name
\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.608703 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:54:10Z\\\",\\\"message\\\":\\\"ns:[]Condition{},},}\\\\nI0123 16:54:10.291273 6540 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0123 16:54:10.291652 6540 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-qqwhl\\\\nI0123 16:54:10.291655 6540 lb_config.go:1031] Cluster endpoints for openshift-config-operator/metrics for network=default are: map[]\\\\nI0123 16:54:10.291660 6540 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-qqwhl\\\\nI0123 16:54:10.291667 6540 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-qqwhl in node crc\\\\nF0123 16:54:10.291298 6540 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:54:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-j9tqd_openshift-ovn-kubernetes(a0f37a36-f502-425c-a9e5-035bf475742b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.621887 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qqwhl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae240c5-8926-4fc6-b0d9-fdf64bacecfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcnkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcnkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qqwhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:16Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.665143 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.665189 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.665197 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.665212 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.665221 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:16Z","lastTransitionTime":"2026-01-23T16:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.767357 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.767394 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.767405 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.767419 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.767432 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:16Z","lastTransitionTime":"2026-01-23T16:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.869778 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.870170 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.870183 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.870200 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.870213 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:16Z","lastTransitionTime":"2026-01-23T16:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.973571 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.973695 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.973744 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.973779 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:16 crc kubenswrapper[4606]: I0123 16:54:16.973823 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:16Z","lastTransitionTime":"2026-01-23T16:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.076905 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.076976 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.076998 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.077026 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.077047 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:17Z","lastTransitionTime":"2026-01-23T16:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.179838 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.179908 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.179929 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.180459 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.180540 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:17Z","lastTransitionTime":"2026-01-23T16:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.282809 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.282848 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.282864 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.282878 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.282888 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:17Z","lastTransitionTime":"2026-01-23T16:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.364726 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.364784 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.364726 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:54:17 crc kubenswrapper[4606]: E0123 16:54:17.364869 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:54:17 crc kubenswrapper[4606]: E0123 16:54:17.364987 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:54:17 crc kubenswrapper[4606]: E0123 16:54:17.365081 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.385521 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.385552 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.385562 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.385575 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.385614 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:17Z","lastTransitionTime":"2026-01-23T16:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.390986 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 05:59:37.206533712 +0000 UTC Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.488574 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.488632 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.488643 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.488660 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.488671 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:17Z","lastTransitionTime":"2026-01-23T16:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.591434 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.591500 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.591513 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.591533 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.591545 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:17Z","lastTransitionTime":"2026-01-23T16:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.694488 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.694554 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.694567 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.694609 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.694626 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:17Z","lastTransitionTime":"2026-01-23T16:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.797807 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.797883 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.797898 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.797918 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.797935 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:17Z","lastTransitionTime":"2026-01-23T16:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.905544 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.906478 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.906714 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.906895 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:17 crc kubenswrapper[4606]: I0123 16:54:17.907028 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:17Z","lastTransitionTime":"2026-01-23T16:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.010514 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.010562 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.010578 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.010624 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.010642 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:18Z","lastTransitionTime":"2026-01-23T16:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.115109 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.115162 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.115184 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.115212 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.115234 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:18Z","lastTransitionTime":"2026-01-23T16:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.218053 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.218094 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.218108 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.218127 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.218140 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:18Z","lastTransitionTime":"2026-01-23T16:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.321797 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.322692 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.322857 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.323067 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.323223 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:18Z","lastTransitionTime":"2026-01-23T16:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.364771 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:54:18 crc kubenswrapper[4606]: E0123 16:54:18.365373 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.391815 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 22:23:45.504485779 +0000 UTC Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.425825 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.425905 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.425923 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.425948 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.425967 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:18Z","lastTransitionTime":"2026-01-23T16:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.528739 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.528810 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.528833 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.528861 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.528878 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:18Z","lastTransitionTime":"2026-01-23T16:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.631642 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.631711 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.631732 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.631755 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.631771 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:18Z","lastTransitionTime":"2026-01-23T16:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.735168 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.735521 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.735544 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.735569 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.735619 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:18Z","lastTransitionTime":"2026-01-23T16:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.838889 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.838967 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.838979 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.839001 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.839014 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:18Z","lastTransitionTime":"2026-01-23T16:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.941169 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.941219 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.941232 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.941250 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:18 crc kubenswrapper[4606]: I0123 16:54:18.941262 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:18Z","lastTransitionTime":"2026-01-23T16:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.044635 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.044725 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.044740 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.044755 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.044767 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:19Z","lastTransitionTime":"2026-01-23T16:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.148218 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.148257 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.148268 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.148283 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.148294 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:19Z","lastTransitionTime":"2026-01-23T16:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.251616 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.251667 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.251678 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.251696 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.251711 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:19Z","lastTransitionTime":"2026-01-23T16:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.354951 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.355017 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.355035 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.355058 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.355075 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:19Z","lastTransitionTime":"2026-01-23T16:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.365647 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.365741 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.365644 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:54:19 crc kubenswrapper[4606]: E0123 16:54:19.365799 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:54:19 crc kubenswrapper[4606]: E0123 16:54:19.365911 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:54:19 crc kubenswrapper[4606]: E0123 16:54:19.366065 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.392254 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 11:51:45.280853099 +0000 UTC Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.457904 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.457951 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.457967 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.457990 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.458007 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:19Z","lastTransitionTime":"2026-01-23T16:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.560984 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.561078 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.561113 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.561145 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.561182 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:19Z","lastTransitionTime":"2026-01-23T16:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.663884 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.663929 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.663938 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.663952 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.663986 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:19Z","lastTransitionTime":"2026-01-23T16:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.765943 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.765984 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.766020 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.766038 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.766049 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:19Z","lastTransitionTime":"2026-01-23T16:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.868019 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.868064 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.868077 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.868091 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.868101 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:19Z","lastTransitionTime":"2026-01-23T16:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.971550 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.971680 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.971697 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.972287 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:19 crc kubenswrapper[4606]: I0123 16:54:19.972355 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:19Z","lastTransitionTime":"2026-01-23T16:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.075990 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.076043 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.076055 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.076073 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.076084 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:20Z","lastTransitionTime":"2026-01-23T16:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.178303 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.178348 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.178367 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.178384 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.178395 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:20Z","lastTransitionTime":"2026-01-23T16:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.280630 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.280699 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.280721 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.280749 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.280771 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:20Z","lastTransitionTime":"2026-01-23T16:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.364995 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:54:20 crc kubenswrapper[4606]: E0123 16:54:20.365203 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe" Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.383528 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.383571 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.383584 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.383628 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.383639 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:20Z","lastTransitionTime":"2026-01-23T16:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.392759 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 19:36:26.575453246 +0000 UTC Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.486262 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.486299 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.486309 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.486330 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.486341 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:20Z","lastTransitionTime":"2026-01-23T16:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.588963 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.588996 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.589004 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.589016 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.589024 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:20Z","lastTransitionTime":"2026-01-23T16:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.691071 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.691138 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.691160 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.691188 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.691210 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:20Z","lastTransitionTime":"2026-01-23T16:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.794779 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.794842 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.794858 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.794883 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.794900 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:20Z","lastTransitionTime":"2026-01-23T16:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.897465 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.897545 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.897572 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.897642 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:20 crc kubenswrapper[4606]: I0123 16:54:20.897666 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:20Z","lastTransitionTime":"2026-01-23T16:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.000681 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.000745 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.000760 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.000786 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.000804 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:21Z","lastTransitionTime":"2026-01-23T16:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.102872 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.102930 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.102948 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.102971 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.102992 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:21Z","lastTransitionTime":"2026-01-23T16:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.206966 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.207021 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.207043 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.207070 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.207093 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:21Z","lastTransitionTime":"2026-01-23T16:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.310270 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.310333 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.310356 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.310381 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.310398 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:21Z","lastTransitionTime":"2026-01-23T16:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.365318 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.365939 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:54:21 crc kubenswrapper[4606]: E0123 16:54:21.366016 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.365957 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:54:21 crc kubenswrapper[4606]: E0123 16:54:21.366090 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:54:21 crc kubenswrapper[4606]: E0123 16:54:21.366200 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.393657 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 07:39:28.912834409 +0000 UTC Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.412508 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.412546 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.412558 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.412573 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.412600 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:21Z","lastTransitionTime":"2026-01-23T16:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.515306 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.515350 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.515361 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.515376 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.515387 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:21Z","lastTransitionTime":"2026-01-23T16:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.618575 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.618681 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.618704 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.618731 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.618748 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:21Z","lastTransitionTime":"2026-01-23T16:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.721486 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.721529 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.721540 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.721554 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.721564 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:21Z","lastTransitionTime":"2026-01-23T16:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.824472 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.824514 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.824526 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.824541 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.824553 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:21Z","lastTransitionTime":"2026-01-23T16:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.927785 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.927825 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.927834 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.927849 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:21 crc kubenswrapper[4606]: I0123 16:54:21.927860 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:21Z","lastTransitionTime":"2026-01-23T16:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.030147 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.030200 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.030218 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.030240 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.030257 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:22Z","lastTransitionTime":"2026-01-23T16:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.133892 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.133946 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.133962 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.133991 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.134009 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:22Z","lastTransitionTime":"2026-01-23T16:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.237203 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.237249 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.237263 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.237288 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.237301 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:22Z","lastTransitionTime":"2026-01-23T16:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.339571 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.339670 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.339684 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.339706 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.339720 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:22Z","lastTransitionTime":"2026-01-23T16:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.365308 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:54:22 crc kubenswrapper[4606]: E0123 16:54:22.365446 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.394639 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 10:36:00.938547052 +0000 UTC Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.443248 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.443289 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.443304 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.443324 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.443338 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:22Z","lastTransitionTime":"2026-01-23T16:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.546776 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.546832 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.546854 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.546882 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.546902 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:22Z","lastTransitionTime":"2026-01-23T16:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.649666 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.649708 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.649724 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.649745 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.649760 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:22Z","lastTransitionTime":"2026-01-23T16:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.752536 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.752640 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.752659 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.752683 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.752699 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:22Z","lastTransitionTime":"2026-01-23T16:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.857056 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.857136 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.857159 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.857190 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.857211 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:22Z","lastTransitionTime":"2026-01-23T16:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.959874 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.959953 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.959972 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.959997 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:22 crc kubenswrapper[4606]: I0123 16:54:22.960015 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:22Z","lastTransitionTime":"2026-01-23T16:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.062670 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.062744 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.062769 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.062799 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.062820 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:23Z","lastTransitionTime":"2026-01-23T16:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.165665 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.165716 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.165734 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.165756 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.165771 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:23Z","lastTransitionTime":"2026-01-23T16:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.268669 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.268738 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.268761 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.268788 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.268812 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:23Z","lastTransitionTime":"2026-01-23T16:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.365317 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:54:23 crc kubenswrapper[4606]: E0123 16:54:23.365441 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.365517 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.365563 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:54:23 crc kubenswrapper[4606]: E0123 16:54:23.365801 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:54:23 crc kubenswrapper[4606]: E0123 16:54:23.366131 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.372533 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.372583 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.372622 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.372643 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.372662 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:23Z","lastTransitionTime":"2026-01-23T16:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.395249 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 06:39:09.8401994 +0000 UTC Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.475458 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.475530 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.475554 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.475584 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.475716 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:23Z","lastTransitionTime":"2026-01-23T16:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.584546 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.584650 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.584673 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.584701 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.584720 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:23Z","lastTransitionTime":"2026-01-23T16:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.686807 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.686845 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.686852 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.686866 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.686874 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:23Z","lastTransitionTime":"2026-01-23T16:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.789294 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.789354 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.789366 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.789383 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.789394 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:23Z","lastTransitionTime":"2026-01-23T16:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.892021 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.892089 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.892111 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.892141 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.892180 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:23Z","lastTransitionTime":"2026-01-23T16:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.996330 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.996370 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.996382 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.996400 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:23 crc kubenswrapper[4606]: I0123 16:54:23.996412 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:23Z","lastTransitionTime":"2026-01-23T16:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.099286 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.099330 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.099342 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.099356 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.099364 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:24Z","lastTransitionTime":"2026-01-23T16:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.201171 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.201239 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.201257 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.201280 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.201297 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:24Z","lastTransitionTime":"2026-01-23T16:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.304698 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.304761 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.304783 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.304807 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.304823 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:24Z","lastTransitionTime":"2026-01-23T16:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.365126 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:54:24 crc kubenswrapper[4606]: E0123 16:54:24.365289 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe" Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.395398 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 19:19:55.66382301 +0000 UTC Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.408230 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.408299 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.408313 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.408330 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.408345 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:24Z","lastTransitionTime":"2026-01-23T16:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.510772 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.510840 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.510852 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.510870 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.510881 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:24Z","lastTransitionTime":"2026-01-23T16:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.613853 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.613904 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.613919 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.613937 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.613964 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:24Z","lastTransitionTime":"2026-01-23T16:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.717188 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.717278 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.717295 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.717322 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.717341 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:24Z","lastTransitionTime":"2026-01-23T16:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.821118 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.821864 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.822126 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.822315 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.822507 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:24Z","lastTransitionTime":"2026-01-23T16:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.925518 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.925581 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.925653 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.925686 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:24 crc kubenswrapper[4606]: I0123 16:54:24.925863 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:24Z","lastTransitionTime":"2026-01-23T16:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.028670 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.028716 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.028726 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.028739 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.028748 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:25Z","lastTransitionTime":"2026-01-23T16:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.131367 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.131472 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.131538 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.131564 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.131653 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:25Z","lastTransitionTime":"2026-01-23T16:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.234126 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.234190 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.234207 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.234231 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.234248 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:25Z","lastTransitionTime":"2026-01-23T16:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.337794 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.337857 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.337879 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.337912 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.337934 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:25Z","lastTransitionTime":"2026-01-23T16:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.365295 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.365360 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.365300 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:54:25 crc kubenswrapper[4606]: E0123 16:54:25.365452 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:54:25 crc kubenswrapper[4606]: E0123 16:54:25.365675 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:54:25 crc kubenswrapper[4606]: E0123 16:54:25.365791 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.396203 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 23:58:58.190163907 +0000 UTC Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.441146 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.441218 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.441230 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.441254 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.441271 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:25Z","lastTransitionTime":"2026-01-23T16:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.544827 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.544920 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.544939 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.544961 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.544978 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:25Z","lastTransitionTime":"2026-01-23T16:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.629706 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.629770 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.629788 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.629814 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.629832 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:25Z","lastTransitionTime":"2026-01-23T16:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:25 crc kubenswrapper[4606]: E0123 16:54:25.650657 4606 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a4ca771-bbec-40fb-95a6-5cc937c9c7f8\\\",\\\"systemUUID\\\":\\\"843df5bf-5299-4274-8c79-9400b122a934\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.656806 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.656857 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.656906 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.656932 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.656949 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:25Z","lastTransitionTime":"2026-01-23T16:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:25 crc kubenswrapper[4606]: E0123 16:54:25.676632 4606 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a4ca771-bbec-40fb-95a6-5cc937c9c7f8\\\",\\\"systemUUID\\\":\\\"843df5bf-5299-4274-8c79-9400b122a934\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.681951 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.682264 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.682430 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.682693 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.682885 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:25Z","lastTransitionTime":"2026-01-23T16:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:25 crc kubenswrapper[4606]: E0123 16:54:25.703531 4606 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a4ca771-bbec-40fb-95a6-5cc937c9c7f8\\\",\\\"systemUUID\\\":\\\"843df5bf-5299-4274-8c79-9400b122a934\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.709084 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.709151 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.709165 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.709212 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.709230 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:25Z","lastTransitionTime":"2026-01-23T16:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:25 crc kubenswrapper[4606]: E0123 16:54:25.728354 4606 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a4ca771-bbec-40fb-95a6-5cc937c9c7f8\\\",\\\"systemUUID\\\":\\\"843df5bf-5299-4274-8c79-9400b122a934\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.738472 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.738521 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.738535 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.738556 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.738624 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:25Z","lastTransitionTime":"2026-01-23T16:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:25 crc kubenswrapper[4606]: E0123 16:54:25.756458 4606 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T16:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a4ca771-bbec-40fb-95a6-5cc937c9c7f8\\\",\\\"systemUUID\\\":\\\"843df5bf-5299-4274-8c79-9400b122a934\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:25Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:25 crc kubenswrapper[4606]: E0123 16:54:25.756697 4606 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.758442 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.758497 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.758513 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.758533 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.758550 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:25Z","lastTransitionTime":"2026-01-23T16:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.861761 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.861980 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.862009 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.862042 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.862068 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:25Z","lastTransitionTime":"2026-01-23T16:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.964725 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.964774 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.964787 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.964804 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:25 crc kubenswrapper[4606]: I0123 16:54:25.964817 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:25Z","lastTransitionTime":"2026-01-23T16:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.067694 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.067725 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.067734 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.067750 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.067760 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:26Z","lastTransitionTime":"2026-01-23T16:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.170350 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.170422 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.170435 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.170455 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.170468 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:26Z","lastTransitionTime":"2026-01-23T16:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.273355 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.273390 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.273400 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.273415 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.273426 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:26Z","lastTransitionTime":"2026-01-23T16:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.365390 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:54:26 crc kubenswrapper[4606]: E0123 16:54:26.366870 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.367671 4606 scope.go:117] "RemoveContainer" containerID="7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8" Jan 23 16:54:26 crc kubenswrapper[4606]: E0123 16:54:26.367882 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-j9tqd_openshift-ovn-kubernetes(a0f37a36-f502-425c-a9e5-035bf475742b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.376320 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.376364 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.376381 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.376402 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.376420 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:26Z","lastTransitionTime":"2026-01-23T16:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.389028 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:26Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.397237 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 03:39:49.134436179 +0000 UTC Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.406839 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p4c8f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a063622-dd68-4217-89ac-f220683a1b73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cda295f3f7f89cdc963dbf41936dbb3c5eefea3d9542b29c8c06c7ddb222ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tzlv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p4c8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:26Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.428929 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a11ff38ffc8c1bbb297d69bc729394d5c80e23c347cce29f3604091ae6253e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:26Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.447916 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f6002ad-cfb9-4ace-a3af-69d16e0a930d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcd9f0d6640f01bb9dd01eb0ae616c15475af5f5e4a8bf8d83f32950d92b5b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://960decb6796863d62d6e8bbedddee45f9a1480ce1f7dc9cd17dd07a57837cae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d26f11973f8bbb904852bca424c7bcb04d9854f66290276e05ceed79aae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea981139b54887528cc0d397f970a88a6107e5176e9df8cb8a2f2137738f8057\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea981139b54887528cc0d397f970a88a6107e5176e9df8cb8a2f2137738f8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:26Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.465000 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb934e2-af1b-473c-ae3a-061098141a23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76dd35929bfd277c644748bddc174d64f5fda7a08c0eb6b3c879c62d7d58ef39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e55898c0677b92ca8dcd4e764ab3cf1c4b605a057574f47ddf2820773938458c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8xsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nntkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:26Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.478902 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.478964 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.478990 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.479020 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.479045 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:26Z","lastTransitionTime":"2026-01-23T16:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.499737 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19103493-8702-4122-9a81-136edb70d5d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e337358163bb5cedf5cfffdc2242c893467a7b1a84c375ce484384c25bcebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a20bc4ad715cef44b4f6d990a9a46abe569139af760cd8e3fe37a4372b7c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1fd297902bda7230b9dad3475cae4d39dca1101f4ef2a4141d84ddea1cb0a67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e90b17e922b85243d8fdf76ec4f62a55261da6f90403c2693677984f71ef76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4af49e939899edf31373245c5f88b0d777b89d9d3f72d712d9cbbb562d0db745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://799492280bfbab8fb0517ef31344110dd0bf6c6f9d094af7d6fa3b8593fd9eea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://799492280bfbab8fb0517ef31344110dd0bf6c6f9d094af7d6fa3b8593fd9eea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02319858c43d679416d466fd3985679703b5991cce323851add570c2b34d56a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02319858c43d679416d466fd3985679703b5991cce323851add570c2b34d56a6\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9692da94421a68c6b4d5ca442a917d639f233c8dd3ea5099aae555469f334c2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9692da94421a68c6b4d5ca442a917d639f233c8dd3ea5099aae555469f334c2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:26Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.524081 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6a94b12-97b4-4dc5-aeea-7ad7d33df65b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01472aab8c10837cc0e6f6952de409f201741170c85dd371b3148908a2b2dcbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc8fa41c444cb89634fe28b45189b531446becd50c3c4d68ccc01f077d24f5ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9288fd4a2282d110a76acdd9acfe10d6de236432d1acf6d21f4bf22a2287b9b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51216d0a8e9d80419e9d1aafc7493ce979bebd26306dd113bbfd8beba38a2402\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46182fe31320f0809516489c92086bdfc8848b05e6d61f64311d30866e24845b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6a23464daca05e013cdcd819393999b0411c7d11dafb1bc701a01e0ce2db458\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://134d2d59d5c2bd275b6df7f5cf62e47c1044bee3703ccd06a4dddff2ec804eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r67jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjtjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:26Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.544120 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wc86k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa611a0a-e12f-403c-8cd6-666972207315\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2cf4579ce544ab371e72864ef70d710c1323fe6ec7707b79d6aec19ee21eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xlsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wc86k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:26Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.565089 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5390920c65529e92e654f06af2c2490fd5e0cc61b8188d21856513562fe2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:26Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.577796 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5662d39c-4ee4-428e-8af8-9696d3a45d7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T16:53:06Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 16:52:59.759347 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 16:52:59.842173 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1655776235/tls.crt::/tmp/serving-cert-1655776235/tls.key\\\\\\\"\\\\nI0123 16:53:06.882155 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 16:53:06.886499 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 16:53:06.886541 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 16:53:06.886651 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 16:53:06.886686 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 16:53:06.897811 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0123 16:53:06.897838 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 16:53:06.897879 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897895 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 16:53:06.897908 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 16:53:06.897917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 16:53:06.897928 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 16:53:06.897936 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 16:53:06.900552 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:26Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.580492 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.580530 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.580542 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.580557 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.580566 4606 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:26Z","lastTransitionTime":"2026-01-23T16:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.588494 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06af028d-1458-4d49-8cd4-dccbc84a469c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdd354c5bc11778b8c407aa7c9f996ee4c431839d6823dcdad92b8855d2a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c64d435f1b147fd7d914909988065557c2bd3309d0565391fe788a67f2d0de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7894a11c4c9859559472fbae0ec0fb74afbc11e3580e50c3d5c1465f7bb3177b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:26Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.600616 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:26Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.610724 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:26Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.620989 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846abd3645d34fa7f1d6227fdd26fd48a7df9a6cbc57b85790b9e7e444041b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:26Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.629433 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8ea72e8-0104-491a-9ccc-49199db0d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3830172cf5194047cb65ed8bd5167e8864f6fa898be98bd5cd7922562e97284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvdz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m9sb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:26Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.640826 4606 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-ftp8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2692a882-403f-4842-b756-53ed7a38fa7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1c017382ddcc3e55e81de6fed924558f181ebf81f3e1cdf9323f5c7e89401f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:53:59Z\\\",\\\"message\\\":\\\"2026-01-23T16:53:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_550d174a-1f43-4687-9028-f98e8740880a\\\\n2026-01-23T16:53:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_550d174a-1f43-4687-9028-f98e8740880a to /host/opt/cni/bin/\\\\n2026-01-23T16:53:14Z [verbose] multus-daemon started\\\\n2026-01-23T16:53:14Z [verbose] Readiness Indicator file check\\\\n2026-01-23T16:53:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llgnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ftp8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:26Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.649231 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1abf87f8-d984-4fed-bc33-adcff49d2fb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:52:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e945ec452bb84c6d6836d023451cb4b4ac41dfc3dd5ae4bf872ad0ed52c91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:52:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abd0eb5bd02acec896a810cd8d81345f15c22fdfd133e5a62742d708f34cdfe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abd0eb5bd02acec896a810cd8d81345f15c22fdfd133e5a62742d708f34cdfe0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:52:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:52:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:52:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:26Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.659167 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qqwhl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae240c5-8926-4fc6-b0d9-fdf64bacecfe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcnkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tcnkr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qqwhl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:26Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.680111 4606 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f37a36-f502-425c-a9e5-035bf475742b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T16:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T16:54:10Z\\\",\\\"message\\\":\\\"ns:[]Condition{},},}\\\\nI0123 16:54:10.291273 6540 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0123 16:54:10.291652 6540 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-qqwhl\\\\nI0123 16:54:10.291655 6540 lb_config.go:1031] Cluster endpoints for openshift-config-operator/metrics for network=default are: map[]\\\\nI0123 16:54:10.291660 6540 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-qqwhl\\\\nI0123 16:54:10.291667 6540 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-qqwhl in node crc\\\\nF0123 16:54:10.291298 6540 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T16:54:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-j9tqd_openshift-ovn-kubernetes(a0f37a36-f502-425c-a9e5-035bf475742b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T16:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T16:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wnghd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T16:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-j9tqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T16:54:26Z is after 2025-08-24T17:21:41Z" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.682353 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.682394 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.682402 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.682417 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.682428 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:26Z","lastTransitionTime":"2026-01-23T16:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.785221 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.785255 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.785264 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.785277 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.785286 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:26Z","lastTransitionTime":"2026-01-23T16:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.887709 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.887776 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.887814 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.887847 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.887869 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:26Z","lastTransitionTime":"2026-01-23T16:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.991042 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.991092 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.991107 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.991128 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:26 crc kubenswrapper[4606]: I0123 16:54:26.991141 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:26Z","lastTransitionTime":"2026-01-23T16:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.094960 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.094996 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.095007 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.095022 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.095034 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:27Z","lastTransitionTime":"2026-01-23T16:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.197466 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.197533 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.197542 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.197556 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.197567 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:27Z","lastTransitionTime":"2026-01-23T16:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.301279 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.301340 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.301357 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.301385 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.301408 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:27Z","lastTransitionTime":"2026-01-23T16:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.365280 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:54:27 crc kubenswrapper[4606]: E0123 16:54:27.365476 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.365482 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.365677 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:54:27 crc kubenswrapper[4606]: E0123 16:54:27.366012 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:54:27 crc kubenswrapper[4606]: E0123 16:54:27.366504 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.398150 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 03:25:41.473039176 +0000 UTC Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.404580 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.404665 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.404688 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.404718 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.404740 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:27Z","lastTransitionTime":"2026-01-23T16:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.507624 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.507751 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.507771 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.507794 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.507811 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:27Z","lastTransitionTime":"2026-01-23T16:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.611043 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.611099 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.611119 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.611144 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.611161 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:27Z","lastTransitionTime":"2026-01-23T16:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.714245 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.714305 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.714322 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.714348 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.714366 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:27Z","lastTransitionTime":"2026-01-23T16:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.818143 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.818281 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.818304 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.818328 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.818389 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:27Z","lastTransitionTime":"2026-01-23T16:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.923319 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.923382 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.923399 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.923425 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:27 crc kubenswrapper[4606]: I0123 16:54:27.923441 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:27Z","lastTransitionTime":"2026-01-23T16:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.025660 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.025704 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.025717 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.025733 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.025744 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:28Z","lastTransitionTime":"2026-01-23T16:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.128866 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.129069 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.129087 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.129111 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.129128 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:28Z","lastTransitionTime":"2026-01-23T16:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.232011 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.232069 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.232087 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.232112 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.232130 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:28Z","lastTransitionTime":"2026-01-23T16:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.250021 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bae240c5-8926-4fc6-b0d9-fdf64bacecfe-metrics-certs\") pod \"network-metrics-daemon-qqwhl\" (UID: \"bae240c5-8926-4fc6-b0d9-fdf64bacecfe\") " pod="openshift-multus/network-metrics-daemon-qqwhl"
Jan 23 16:54:28 crc kubenswrapper[4606]: E0123 16:54:28.250210 4606 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 23 16:54:28 crc kubenswrapper[4606]: E0123 16:54:28.250321 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bae240c5-8926-4fc6-b0d9-fdf64bacecfe-metrics-certs podName:bae240c5-8926-4fc6-b0d9-fdf64bacecfe nodeName:}" failed. No retries permitted until 2026-01-23 16:55:32.250289824 +0000 UTC m=+166.250482270 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bae240c5-8926-4fc6-b0d9-fdf64bacecfe-metrics-certs") pod "network-metrics-daemon-qqwhl" (UID: "bae240c5-8926-4fc6-b0d9-fdf64bacecfe") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.334613 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.334690 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.334708 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.334737 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.334765 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:28Z","lastTransitionTime":"2026-01-23T16:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.365262 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl"
Jan 23 16:54:28 crc kubenswrapper[4606]: E0123 16:54:28.365384 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe"
Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.399329 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 15:23:45.924960979 +0000 UTC
Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.437841 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.437897 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.437914 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.437939 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.437999 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:28Z","lastTransitionTime":"2026-01-23T16:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
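The durationBeforeRetry of 1m4s in the nestedpendingoperations entry above reflects per-volume exponential backoff: each failed MountVolume attempt roughly doubles the wait before the next retry, up to a cap, so the kubelet keeps retrying the metrics-certs mount without hot-looping while the metrics-daemon-secret object remains unregistered. A minimal Go sketch of that doubling policy (the 500ms initial delay and 2m cap are illustrative assumptions, not the kubelet's exact constants):

    package main

    import (
    	"fmt"
    	"time"
    )

    // nextRetryDelay doubles the previous delay and clamps it at a cap,
    // mirroring the growth of durationBeforeRetry seen in the log above.
    func nextRetryDelay(prev, initial, maxDelay time.Duration) time.Duration {
    	if prev <= 0 {
    		return initial
    	}
    	if next := 2 * prev; next < maxDelay {
    		return next
    	}
    	return maxDelay
    }

    func main() {
    	var d time.Duration
    	for attempt := 1; attempt <= 9; attempt++ {
    		d = nextRetryDelay(d, 500*time.Millisecond, 2*time.Minute)
    		fmt.Printf("attempt %d: wait %v\n", attempt, d) // 500ms, 1s, 2s, ... 1m4s, 2m0s
    	}
    }

Under these assumptions the ladder reaches 1m4s on the eighth consecutive failure, which matches the pause recorded above: the mount is not abandoned, only deferred until 16:55:32.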
Has your network provider started?"} Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.542164 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.542216 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.542227 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.542244 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.542256 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:28Z","lastTransitionTime":"2026-01-23T16:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.645178 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.645216 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.645227 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.645242 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.645254 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:28Z","lastTransitionTime":"2026-01-23T16:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.746861 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.746920 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.746929 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.746944 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.746953 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:28Z","lastTransitionTime":"2026-01-23T16:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.849866 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.849899 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.849907 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.849919 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.849928 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:28Z","lastTransitionTime":"2026-01-23T16:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.952560 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.952614 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.952623 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.952637 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:28 crc kubenswrapper[4606]: I0123 16:54:28.952646 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:28Z","lastTransitionTime":"2026-01-23T16:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.055480 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.055515 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.055526 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.055541 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.055551 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:29Z","lastTransitionTime":"2026-01-23T16:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.158068 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.158144 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.158169 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.158207 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.158276 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:29Z","lastTransitionTime":"2026-01-23T16:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.261564 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.261679 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.261702 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.261728 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.261746 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:29Z","lastTransitionTime":"2026-01-23T16:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.364774 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.364810 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 16:54:29 crc kubenswrapper[4606]: E0123 16:54:29.364935 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.365103 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.365143 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.365164 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.365192 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.365215 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:29Z","lastTransitionTime":"2026-01-23T16:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.365190 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 16:54:29 crc kubenswrapper[4606]: E0123 16:54:29.365540 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 16:54:29 crc kubenswrapper[4606]: E0123 16:54:29.365912 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.399471 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 15:29:48.122679612 +0000 UTC
Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.472167 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.472241 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.472261 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.472286 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.472304 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:29Z","lastTransitionTime":"2026-01-23T16:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.575440 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.575918 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.576087 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.576241 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.576364 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:29Z","lastTransitionTime":"2026-01-23T16:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.680027 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.680141 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.680154 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.680172 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.680184 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:29Z","lastTransitionTime":"2026-01-23T16:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.782941 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.782986 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.783000 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.783017 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.783030 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:29Z","lastTransitionTime":"2026-01-23T16:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.886088 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.886482 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.886703 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.886859 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.887006 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:29Z","lastTransitionTime":"2026-01-23T16:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.990658 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.990719 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.990740 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.990769 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:29 crc kubenswrapper[4606]: I0123 16:54:29.990791 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:29Z","lastTransitionTime":"2026-01-23T16:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.093344 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.093381 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.093392 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.093408 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.093418 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:30Z","lastTransitionTime":"2026-01-23T16:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.196090 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.196166 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.196190 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.196219 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.196245 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:30Z","lastTransitionTime":"2026-01-23T16:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.298255 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.298648 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.298842 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.298993 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.299132 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:30Z","lastTransitionTime":"2026-01-23T16:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.365061 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl"
Jan 23 16:54:30 crc kubenswrapper[4606]: E0123 16:54:30.365654 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe"
Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.399726 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 03:03:13.115475031 +0000 UTC
Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.402781 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.402821 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.402830 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.402845 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.402857 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:30Z","lastTransitionTime":"2026-01-23T16:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.506444 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.506547 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.506567 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.506665 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.506686 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:30Z","lastTransitionTime":"2026-01-23T16:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.609854 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.609909 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.609927 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.609949 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.609966 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:30Z","lastTransitionTime":"2026-01-23T16:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.713106 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.713186 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.713209 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.713240 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.713261 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:30Z","lastTransitionTime":"2026-01-23T16:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.815813 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.815878 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.815901 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.815929 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.815952 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:30Z","lastTransitionTime":"2026-01-23T16:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.919535 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.919618 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.919645 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.919696 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:30 crc kubenswrapper[4606]: I0123 16:54:30.919718 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:30Z","lastTransitionTime":"2026-01-23T16:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.023213 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.025763 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.025837 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.025868 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.025891 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:31Z","lastTransitionTime":"2026-01-23T16:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.129620 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.129655 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.129666 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.129681 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.129692 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:31Z","lastTransitionTime":"2026-01-23T16:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.231934 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.231996 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.232014 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.232040 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.232059 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:31Z","lastTransitionTime":"2026-01-23T16:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.336226 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.336300 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.336402 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.336436 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.336461 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:31Z","lastTransitionTime":"2026-01-23T16:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.365173 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.365305 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.365207 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 16:54:31 crc kubenswrapper[4606]: E0123 16:54:31.365432 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 16:54:31 crc kubenswrapper[4606]: E0123 16:54:31.365499 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 16:54:31 crc kubenswrapper[4606]: E0123 16:54:31.365662 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.400767 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 06:37:44.992273665 +0000 UTC
Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.438436 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.438473 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.438489 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.438534 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.438548 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:31Z","lastTransitionTime":"2026-01-23T16:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
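All of the "No sandbox for pod can be found" / "Error syncing pod, skipping" pairs above trace back to a single condition: the container runtime's network plugin finds no CNI configuration under /etc/kubernetes/cni/net.d/, so NetworkReady stays false and no new pod sandbox can be wired up. A minimal Go sketch of the kind of directory probe behind that message (the candidate extensions are assumptions; the real check lives in the container runtime's CNI plumbing, not in the kubelet code cited in these lines):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // cniConfigPresent reports whether the CNI conf directory contains at
    // least one candidate network configuration file.
    func cniConfigPresent(dir string) (bool, error) {
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		return false, err
    	}
    	for _, e := range entries {
    		switch filepath.Ext(e.Name()) {
    		case ".conf", ".conflist", ".json": // assumed candidate extensions
    			return true, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	ok, err := cniConfigPresent("/etc/kubernetes/cni/net.d")
    	switch {
    	case err != nil:
    		fmt.Println("cannot read CNI conf dir:", err)
    	case !ok:
    		fmt.Println("network plugin not ready: no CNI configuration file found")
    	default:
    		fmt.Println("CNI configuration present")
    	}
    }

Once the network provider writes its config into that directory, the probe succeeds, NetworkReady flips to true, and the deferred sandboxes for these three pods can be created.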
Has your network provider started?"} Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.540205 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.540245 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.540256 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.540272 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.540281 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:31Z","lastTransitionTime":"2026-01-23T16:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.643407 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.643476 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.643495 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.643520 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.643539 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:31Z","lastTransitionTime":"2026-01-23T16:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.746347 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.746454 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.746471 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.746493 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.746508 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:31Z","lastTransitionTime":"2026-01-23T16:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.849383 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.849419 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.849428 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.849439 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.849448 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:31Z","lastTransitionTime":"2026-01-23T16:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.951942 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.951991 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.952002 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.952019 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:31 crc kubenswrapper[4606]: I0123 16:54:31.952031 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:31Z","lastTransitionTime":"2026-01-23T16:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.055090 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.055145 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.055161 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.055182 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.055201 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:32Z","lastTransitionTime":"2026-01-23T16:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.158984 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.159075 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.159091 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.159111 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.159126 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:32Z","lastTransitionTime":"2026-01-23T16:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.262680 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.262763 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.262786 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.262823 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.262847 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:32Z","lastTransitionTime":"2026-01-23T16:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.364707 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl"
Jan 23 16:54:32 crc kubenswrapper[4606]: E0123 16:54:32.364927 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe"
Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.366518 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.366630 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.366650 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.366674 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.366693 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:32Z","lastTransitionTime":"2026-01-23T16:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.401267 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 06:01:21.248157743 +0000 UTC
Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.470173 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.470287 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.470312 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.470344 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.470368 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:32Z","lastTransitionTime":"2026-01-23T16:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.574298 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.574330 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.574339 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.574359 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.574370 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:32Z","lastTransitionTime":"2026-01-23T16:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.676904 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.676958 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.676969 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.676985 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.676996 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:32Z","lastTransitionTime":"2026-01-23T16:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.780169 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.780244 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.780264 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.780289 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.780307 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:32Z","lastTransitionTime":"2026-01-23T16:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.883893 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.883951 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.883959 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.883976 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.883987 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:32Z","lastTransitionTime":"2026-01-23T16:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.987644 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.987756 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.987782 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.987814 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:32 crc kubenswrapper[4606]: I0123 16:54:32.987835 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:32Z","lastTransitionTime":"2026-01-23T16:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.091017 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.091089 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.091107 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.091127 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.091147 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:33Z","lastTransitionTime":"2026-01-23T16:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.193985 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.194031 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.194042 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.194059 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.194071 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:33Z","lastTransitionTime":"2026-01-23T16:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.298702 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.298769 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.298787 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.298809 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.298830 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:33Z","lastTransitionTime":"2026-01-23T16:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.365437 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.365566 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:54:33 crc kubenswrapper[4606]: E0123 16:54:33.365756 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.365820 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:54:33 crc kubenswrapper[4606]: E0123 16:54:33.366011 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:54:33 crc kubenswrapper[4606]: E0123 16:54:33.366095 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.401665 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 13:12:38.702674191 +0000 UTC Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.401688 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.401776 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.401807 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.401840 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.401865 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:33Z","lastTransitionTime":"2026-01-23T16:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.504855 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.504918 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.504934 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.504958 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.504975 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:33Z","lastTransitionTime":"2026-01-23T16:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.608370 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.608418 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.608433 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.608456 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.608473 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:33Z","lastTransitionTime":"2026-01-23T16:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.719961 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.720129 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.720214 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.720246 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.720325 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:33Z","lastTransitionTime":"2026-01-23T16:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.824118 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.824175 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.824192 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.824216 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.824233 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:33Z","lastTransitionTime":"2026-01-23T16:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.926512 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.926692 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.926758 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.926786 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:33 crc kubenswrapper[4606]: I0123 16:54:33.926807 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:33Z","lastTransitionTime":"2026-01-23T16:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.030700 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.030767 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.030784 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.030806 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.030822 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:34Z","lastTransitionTime":"2026-01-23T16:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.134190 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.135410 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.135476 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.135514 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.135659 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:34Z","lastTransitionTime":"2026-01-23T16:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.238716 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.238766 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.238778 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.238793 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.238805 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:34Z","lastTransitionTime":"2026-01-23T16:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.342046 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.342124 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.342149 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.342177 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.342201 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:34Z","lastTransitionTime":"2026-01-23T16:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.365412 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:54:34 crc kubenswrapper[4606]: E0123 16:54:34.365687 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.402818 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 04:06:10.349744185 +0000 UTC Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.444330 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.444388 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.444400 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.444416 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.444430 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:34Z","lastTransitionTime":"2026-01-23T16:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.547343 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.547388 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.547405 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.547422 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.547435 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:34Z","lastTransitionTime":"2026-01-23T16:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.650482 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.650538 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.650553 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.650574 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.650616 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:34Z","lastTransitionTime":"2026-01-23T16:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.754176 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.754247 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.754265 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.754306 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.754347 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:34Z","lastTransitionTime":"2026-01-23T16:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.857422 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.857495 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.857514 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.857539 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.857557 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:34Z","lastTransitionTime":"2026-01-23T16:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.960282 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.960338 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.960348 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.960765 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:34 crc kubenswrapper[4606]: I0123 16:54:34.960838 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:34Z","lastTransitionTime":"2026-01-23T16:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.064930 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.064994 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.065011 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.065162 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.065182 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:35Z","lastTransitionTime":"2026-01-23T16:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.170941 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.170992 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.171006 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.171023 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.171042 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:35Z","lastTransitionTime":"2026-01-23T16:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.274734 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.274798 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.274815 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.274839 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.274856 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:35Z","lastTransitionTime":"2026-01-23T16:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.365477 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.365544 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.365580 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:54:35 crc kubenswrapper[4606]: E0123 16:54:35.365671 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:54:35 crc kubenswrapper[4606]: E0123 16:54:35.365729 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:54:35 crc kubenswrapper[4606]: E0123 16:54:35.365815 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.377074 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.377116 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.377126 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.377140 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.377148 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:35Z","lastTransitionTime":"2026-01-23T16:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.403724 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 06:42:45.659593634 +0000 UTC Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.479691 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.479726 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.479733 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.479745 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.479754 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:35Z","lastTransitionTime":"2026-01-23T16:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.582678 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.582747 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.582763 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.582785 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.582801 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:35Z","lastTransitionTime":"2026-01-23T16:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.685789 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.685861 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.685878 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.685902 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.685920 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:35Z","lastTransitionTime":"2026-01-23T16:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.768023 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.768067 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.768075 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.768088 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.768099 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:35Z","lastTransitionTime":"2026-01-23T16:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
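
The "Node became not ready" heartbeats above carry the node's Ready condition as an inline JSON payload. As an illustrative aside (not part of the captured log), a minimal Go sketch that decodes one such payload with only the standard library; the NodeCondition struct here is a hand-rolled stand-in for the corresponding k8s.io/api/core/v1 type, and the message field is shortened for brevity:

package main

import (
	"encoding/json"
	"fmt"
)

// NodeCondition mirrors the fields the kubelet serializes in the
// condition={...} payloads above; it is a local stand-in, not the
// imported Kubernetes API type.
type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Payload copied from the 16:54:32.159126 entry above
	// (message field truncated here for readability).
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T16:54:32Z","lastTransitionTime":"2026-01-23T16:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready"}`

	var c NodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("node condition %s=%s (%s)\n", c.Type, c.Status, c.Reason)
}

Running it prints "node condition Ready=False (KubeletNotReady)", i.e. the same state setters.go:603 records on each heartbeat.
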
Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.825807 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-fzxhw"]
Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.826207 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fzxhw"
Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.827794 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.828269 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.828411 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.829124 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.874787 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-p4c8f" podStartSLOduration=87.874772413 podStartE2EDuration="1m27.874772413s" podCreationTimestamp="2026-01-23 16:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:54:35.874045863 +0000 UTC m=+109.874238269" watchObservedRunningTime="2026-01-23 16:54:35.874772413 +0000 UTC m=+109.874964819"
Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.933043 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c2db6c7-a804-4623-a5f4-84dc74b73653-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-fzxhw\" (UID: \"0c2db6c7-a804-4623-a5f4-84dc74b73653\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fzxhw"
Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.933153 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0c2db6c7-a804-4623-a5f4-84dc74b73653-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-fzxhw\" (UID: \"0c2db6c7-a804-4623-a5f4-84dc74b73653\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fzxhw"
Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.933282 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0c2db6c7-a804-4623-a5f4-84dc74b73653-service-ca\") pod \"cluster-version-operator-5c965bbfc6-fzxhw\" (UID: \"0c2db6c7-a804-4623-a5f4-84dc74b73653\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fzxhw"
Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.933376 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c2db6c7-a804-4623-a5f4-84dc74b73653-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-fzxhw\" (UID: \"0c2db6c7-a804-4623-a5f4-84dc74b73653\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fzxhw"
Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.933430 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0c2db6c7-a804-4623-a5f4-84dc74b73653-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-fzxhw\" (UID: \"0c2db6c7-a804-4623-a5f4-84dc74b73653\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fzxhw"
Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.942809 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=28.94278834 podStartE2EDuration="28.94278834s" podCreationTimestamp="2026-01-23 16:54:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:54:35.921801549 +0000 UTC m=+109.921993955" watchObservedRunningTime="2026-01-23 16:54:35.94278834 +0000 UTC m=+109.942980746"
Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.970822 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=57.970800559 podStartE2EDuration="57.970800559s" podCreationTimestamp="2026-01-23 16:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:54:35.94990928 +0000 UTC m=+109.950101686" watchObservedRunningTime="2026-01-23 16:54:35.970800559 +0000 UTC m=+109.970992985"
Jan 23 16:54:35 crc kubenswrapper[4606]: I0123 16:54:35.971296 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nntkz" podStartSLOduration=86.971289062 podStartE2EDuration="1m26.971289062s" podCreationTimestamp="2026-01-23 16:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:54:35.97007286 +0000 UTC m=+109.970265286" watchObservedRunningTime="2026-01-23 16:54:35.971289062 +0000 UTC m=+109.971481478"
Jan 23 16:54:36 crc kubenswrapper[4606]: I0123 16:54:36.022081 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-rjtjv" podStartSLOduration=88.022059489 podStartE2EDuration="1m28.022059489s" podCreationTimestamp="2026-01-23 16:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:54:36.009894083 +0000 UTC m=+110.010086489" watchObservedRunningTime="2026-01-23 16:54:36.022059489 +0000 UTC m=+110.022251905"
Jan 23 16:54:36 crc kubenswrapper[4606]: I0123 16:54:36.034663 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c2db6c7-a804-4623-a5f4-84dc74b73653-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-fzxhw\" (UID: \"0c2db6c7-a804-4623-a5f4-84dc74b73653\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fzxhw"
Jan 23 16:54:36 crc kubenswrapper[4606]: I0123 16:54:36.034718 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0c2db6c7-a804-4623-a5f4-84dc74b73653-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-fzxhw\" (UID: \"0c2db6c7-a804-4623-a5f4-84dc74b73653\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fzxhw"
Jan 23 16:54:36 crc kubenswrapper[4606]: I0123 16:54:36.034782 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0c2db6c7-a804-4623-a5f4-84dc74b73653-service-ca\") pod \"cluster-version-operator-5c965bbfc6-fzxhw\" (UID: \"0c2db6c7-a804-4623-a5f4-84dc74b73653\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fzxhw"
Jan 23 16:54:36 crc kubenswrapper[4606]: I0123 16:54:36.034819 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c2db6c7-a804-4623-a5f4-84dc74b73653-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-fzxhw\" (UID: \"0c2db6c7-a804-4623-a5f4-84dc74b73653\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fzxhw"
Jan 23 16:54:36 crc kubenswrapper[4606]: I0123 16:54:36.034848 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0c2db6c7-a804-4623-a5f4-84dc74b73653-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-fzxhw\" (UID: \"0c2db6c7-a804-4623-a5f4-84dc74b73653\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fzxhw"
Jan 23 16:54:36 crc kubenswrapper[4606]: I0123 16:54:36.034918 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0c2db6c7-a804-4623-a5f4-84dc74b73653-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-fzxhw\" (UID: \"0c2db6c7-a804-4623-a5f4-84dc74b73653\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fzxhw"
Jan 23 16:54:36 crc kubenswrapper[4606]: I0123 16:54:36.034917 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-wc86k" podStartSLOduration=88.034897462 podStartE2EDuration="1m28.034897462s" podCreationTimestamp="2026-01-23 16:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:54:36.022407858 +0000 UTC m=+110.022600274" watchObservedRunningTime="2026-01-23 16:54:36.034897462 +0000 UTC m=+110.035089868"
Jan 23 16:54:36 crc kubenswrapper[4606]: I0123 16:54:36.035261 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0c2db6c7-a804-4623-a5f4-84dc74b73653-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-fzxhw\" (UID: \"0c2db6c7-a804-4623-a5f4-84dc74b73653\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fzxhw"
Jan 23 16:54:36 crc kubenswrapper[4606]: I0123 16:54:36.036265 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0c2db6c7-a804-4623-a5f4-84dc74b73653-service-ca\") pod \"cluster-version-operator-5c965bbfc6-fzxhw\" (UID: \"0c2db6c7-a804-4623-a5f4-84dc74b73653\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fzxhw"
Jan 23 16:54:36 crc kubenswrapper[4606]: I0123 16:54:36.047086 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c2db6c7-a804-4623-a5f4-84dc74b73653-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-fzxhw\" (UID: \"0c2db6c7-a804-4623-a5f4-84dc74b73653\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fzxhw"
(UID: \"0c2db6c7-a804-4623-a5f4-84dc74b73653\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fzxhw" Jan 23 16:54:36 crc kubenswrapper[4606]: I0123 16:54:36.049634 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c2db6c7-a804-4623-a5f4-84dc74b73653-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-fzxhw\" (UID: \"0c2db6c7-a804-4623-a5f4-84dc74b73653\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fzxhw" Jan 23 16:54:36 crc kubenswrapper[4606]: I0123 16:54:36.057341 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podStartSLOduration=88.057325871 podStartE2EDuration="1m28.057325871s" podCreationTimestamp="2026-01-23 16:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:54:36.057074634 +0000 UTC m=+110.057267040" watchObservedRunningTime="2026-01-23 16:54:36.057325871 +0000 UTC m=+110.057518277" Jan 23 16:54:36 crc kubenswrapper[4606]: I0123 16:54:36.084078 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=28.084060895 podStartE2EDuration="28.084060895s" podCreationTimestamp="2026-01-23 16:54:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:54:36.08384923 +0000 UTC m=+110.084041646" watchObservedRunningTime="2026-01-23 16:54:36.084060895 +0000 UTC m=+110.084253301" Jan 23 16:54:36 crc kubenswrapper[4606]: I0123 16:54:36.084220 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-ftp8l" podStartSLOduration=88.084215659 podStartE2EDuration="1m28.084215659s" podCreationTimestamp="2026-01-23 16:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:54:36.073361259 +0000 UTC m=+110.073553685" watchObservedRunningTime="2026-01-23 16:54:36.084215659 +0000 UTC m=+110.084408065" Jan 23 16:54:36 crc kubenswrapper[4606]: I0123 16:54:36.114242 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=89.114228532 podStartE2EDuration="1m29.114228532s" podCreationTimestamp="2026-01-23 16:53:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:54:36.100096804 +0000 UTC m=+110.100289220" watchObservedRunningTime="2026-01-23 16:54:36.114228532 +0000 UTC m=+110.114420938" Jan 23 16:54:36 crc kubenswrapper[4606]: I0123 16:54:36.114499 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=83.114493349 podStartE2EDuration="1m23.114493349s" podCreationTimestamp="2026-01-23 16:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:54:36.114202831 +0000 UTC m=+110.114395247" watchObservedRunningTime="2026-01-23 16:54:36.114493349 +0000 UTC m=+110.114685755" Jan 23 16:54:36 crc kubenswrapper[4606]: I0123 16:54:36.143094 4606 util.go:30] "No 
Jan 23 16:54:36 crc kubenswrapper[4606]: I0123 16:54:36.143094 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fzxhw"
Jan 23 16:54:36 crc kubenswrapper[4606]: I0123 16:54:36.365412 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl"
Jan 23 16:54:36 crc kubenswrapper[4606]: E0123 16:54:36.367341 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe"
Jan 23 16:54:36 crc kubenswrapper[4606]: I0123 16:54:36.404333 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 17:43:19.784829704 +0000 UTC
Jan 23 16:54:36 crc kubenswrapper[4606]: I0123 16:54:36.404438 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Jan 23 16:54:36 crc kubenswrapper[4606]: I0123 16:54:36.414968 4606 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 23 16:54:37 crc kubenswrapper[4606]: I0123 16:54:37.148533 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fzxhw" event={"ID":"0c2db6c7-a804-4623-a5f4-84dc74b73653","Type":"ContainerStarted","Data":"c030594d5468230daed58a626044741dd3d306c605c71af17bdec86aee0e28d5"}
Jan 23 16:54:37 crc kubenswrapper[4606]: I0123 16:54:37.148616 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fzxhw" event={"ID":"0c2db6c7-a804-4623-a5f4-84dc74b73653","Type":"ContainerStarted","Data":"e4708916741bf9cb3f730d0c068b33dcce4af043bc6eeb5a5af55f813d651deb"}
Jan 23 16:54:37 crc kubenswrapper[4606]: I0123 16:54:37.167052 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fzxhw" podStartSLOduration=89.167032296 podStartE2EDuration="1m29.167032296s" podCreationTimestamp="2026-01-23 16:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:54:37.16645222 +0000 UTC m=+111.166644626" watchObservedRunningTime="2026-01-23 16:54:37.167032296 +0000 UTC m=+111.167224702"
Jan 23 16:54:37 crc kubenswrapper[4606]: I0123 16:54:37.365101 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 16:54:37 crc kubenswrapper[4606]: I0123 16:54:37.365136 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
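In the certificate_manager entries at 16:54:36.404 above, the jittered rotation deadline (2026-01-13 17:43:19) already precedes the log clock (2026-01-23), which is why "Rotating certificates" fires on the spot. client-go's certificate manager schedules rotation at a randomized point late in the certificate's validity window; a sketch of that policy, assuming the commonly cited 70-90% window and an invented issue time (only the expiration is quoted from the log):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline sketches the jittered-deadline idea: pick a rotation
// time somewhere in the 70%-90% span of the certificate's lifetime. The
// exact window is an assumption here; client-go's certificate manager is
// the authoritative source.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	fraction := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(fraction * float64(lifetime)))
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z") // from the log
	notBefore := notAfter.Add(-90 * 24 * time.Hour)                 // invented validity period
	deadline := rotationDeadline(notBefore, notAfter)
	now, _ := time.Parse(time.RFC3339, "2026-01-23T16:54:36Z")
	if now.After(deadline) {
		fmt.Println("deadline passed; rotating certificates") // the case seen above
	} else {
		fmt.Println("next rotation at", deadline)
	}
}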
Jan 23 16:54:37 crc kubenswrapper[4606]: I0123 16:54:37.365101 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 16:54:37 crc kubenswrapper[4606]: E0123 16:54:37.365228 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 16:54:37 crc kubenswrapper[4606]: E0123 16:54:37.365331 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 16:54:37 crc kubenswrapper[4606]: E0123 16:54:37.365673 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 16:54:38 crc kubenswrapper[4606]: I0123 16:54:38.365430 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl"
Jan 23 16:54:38 crc kubenswrapper[4606]: E0123 16:54:38.365616 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe"
Jan 23 16:54:39 crc kubenswrapper[4606]: I0123 16:54:39.364920 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 16:54:39 crc kubenswrapper[4606]: I0123 16:54:39.365001 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 16:54:39 crc kubenswrapper[4606]: E0123 16:54:39.365052 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 16:54:39 crc kubenswrapper[4606]: I0123 16:54:39.365148 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 16:54:39 crc kubenswrapper[4606]: E0123 16:54:39.365537 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 16:54:39 crc kubenswrapper[4606]: E0123 16:54:39.365754 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 16:54:39 crc kubenswrapper[4606]: I0123 16:54:39.365915 4606 scope.go:117] "RemoveContainer" containerID="7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8"
Jan 23 16:54:39 crc kubenswrapper[4606]: E0123 16:54:39.366092 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-j9tqd_openshift-ovn-kubernetes(a0f37a36-f502-425c-a9e5-035bf475742b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" podUID="a0f37a36-f502-425c-a9e5-035bf475742b"
Jan 23 16:54:40 crc kubenswrapper[4606]: I0123 16:54:40.364855 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl"
Jan 23 16:54:40 crc kubenswrapper[4606]: E0123 16:54:40.364987 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe"
Jan 23 16:54:41 crc kubenswrapper[4606]: I0123 16:54:41.365624 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 16:54:41 crc kubenswrapper[4606]: E0123 16:54:41.366363 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 16:54:41 crc kubenswrapper[4606]: I0123 16:54:41.365754 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 16:54:41 crc kubenswrapper[4606]: E0123 16:54:41.366642 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 16:54:41 crc kubenswrapper[4606]: I0123 16:54:41.365686 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 16:54:41 crc kubenswrapper[4606]: E0123 16:54:41.366909 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 16:54:42 crc kubenswrapper[4606]: I0123 16:54:42.365785 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl"
Jan 23 16:54:42 crc kubenswrapper[4606]: E0123 16:54:42.365996 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe"
Jan 23 16:54:43 crc kubenswrapper[4606]: I0123 16:54:43.365174 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 16:54:43 crc kubenswrapper[4606]: I0123 16:54:43.365311 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 16:54:43 crc kubenswrapper[4606]: E0123 16:54:43.365937 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 16:54:43 crc kubenswrapper[4606]: I0123 16:54:43.365329 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 16:54:43 crc kubenswrapper[4606]: E0123 16:54:43.365995 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 16:54:43 crc kubenswrapper[4606]: E0123 16:54:43.365877 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 16:54:44 crc kubenswrapper[4606]: I0123 16:54:44.365473 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl"
Jan 23 16:54:44 crc kubenswrapper[4606]: E0123 16:54:44.365740 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe"
Jan 23 16:54:45 crc kubenswrapper[4606]: I0123 16:54:45.364706 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 16:54:45 crc kubenswrapper[4606]: E0123 16:54:45.365022 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 16:54:45 crc kubenswrapper[4606]: I0123 16:54:45.364821 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 16:54:45 crc kubenswrapper[4606]: E0123 16:54:45.365221 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 16:54:45 crc kubenswrapper[4606]: I0123 16:54:45.364713 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:54:46 crc kubenswrapper[4606]: I0123 16:54:46.185488 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ftp8l_2692a882-403f-4842-b756-53ed7a38fa7f/kube-multus/1.log" Jan 23 16:54:46 crc kubenswrapper[4606]: I0123 16:54:46.186072 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ftp8l_2692a882-403f-4842-b756-53ed7a38fa7f/kube-multus/0.log" Jan 23 16:54:46 crc kubenswrapper[4606]: I0123 16:54:46.186115 4606 generic.go:334] "Generic (PLEG): container finished" podID="2692a882-403f-4842-b756-53ed7a38fa7f" containerID="e1c017382ddcc3e55e81de6fed924558f181ebf81f3e1cdf9323f5c7e89401f1" exitCode=1 Jan 23 16:54:46 crc kubenswrapper[4606]: I0123 16:54:46.186149 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ftp8l" event={"ID":"2692a882-403f-4842-b756-53ed7a38fa7f","Type":"ContainerDied","Data":"e1c017382ddcc3e55e81de6fed924558f181ebf81f3e1cdf9323f5c7e89401f1"} Jan 23 16:54:46 crc kubenswrapper[4606]: I0123 16:54:46.186185 4606 scope.go:117] "RemoveContainer" containerID="7f3fd0d1db2ea4bd4a13a49d6a8d4aa71847db110de6943545ae9a9f4c3f6a0b" Jan 23 16:54:46 crc kubenswrapper[4606]: I0123 16:54:46.186775 4606 scope.go:117] "RemoveContainer" containerID="e1c017382ddcc3e55e81de6fed924558f181ebf81f3e1cdf9323f5c7e89401f1" Jan 23 16:54:46 crc kubenswrapper[4606]: E0123 16:54:46.186968 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-ftp8l_openshift-multus(2692a882-403f-4842-b756-53ed7a38fa7f)\"" pod="openshift-multus/multus-ftp8l" podUID="2692a882-403f-4842-b756-53ed7a38fa7f" Jan 23 16:54:46 crc kubenswrapper[4606]: I0123 16:54:46.367367 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:54:46 crc kubenswrapper[4606]: E0123 16:54:46.373650 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe" Jan 23 16:54:46 crc kubenswrapper[4606]: E0123 16:54:46.388368 4606 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 23 16:54:46 crc kubenswrapper[4606]: E0123 16:54:46.532069 4606 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 23 16:54:47 crc kubenswrapper[4606]: I0123 16:54:47.191218 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ftp8l_2692a882-403f-4842-b756-53ed7a38fa7f/kube-multus/1.log" Jan 23 16:54:47 crc kubenswrapper[4606]: I0123 16:54:47.365565 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:54:47 crc kubenswrapper[4606]: I0123 16:54:47.365652 4606 util.go:30] "No sandbox for pod can be found. 
Jan 23 16:54:47 crc kubenswrapper[4606]: I0123 16:54:47.365652 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 16:54:47 crc kubenswrapper[4606]: I0123 16:54:47.365570 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 16:54:47 crc kubenswrapper[4606]: E0123 16:54:47.365801 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 16:54:47 crc kubenswrapper[4606]: E0123 16:54:47.365960 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 16:54:47 crc kubenswrapper[4606]: E0123 16:54:47.366056 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 16:54:48 crc kubenswrapper[4606]: I0123 16:54:48.365337 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl"
Jan 23 16:54:48 crc kubenswrapper[4606]: E0123 16:54:48.365711 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe"
Jan 23 16:54:49 crc kubenswrapper[4606]: I0123 16:54:49.365949 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 16:54:49 crc kubenswrapper[4606]: I0123 16:54:49.365915 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 16:54:49 crc kubenswrapper[4606]: E0123 16:54:49.366571 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 16:54:49 crc kubenswrapper[4606]: I0123 16:54:49.366015 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 16:54:49 crc kubenswrapper[4606]: E0123 16:54:49.366664 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 16:54:49 crc kubenswrapper[4606]: E0123 16:54:49.366801 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 16:54:50 crc kubenswrapper[4606]: I0123 16:54:50.365520 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl"
Jan 23 16:54:50 crc kubenswrapper[4606]: E0123 16:54:50.365705 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe"
Jan 23 16:54:51 crc kubenswrapper[4606]: I0123 16:54:51.364877 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 16:54:51 crc kubenswrapper[4606]: I0123 16:54:51.364901 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 16:54:51 crc kubenswrapper[4606]: I0123 16:54:51.364900 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 16:54:51 crc kubenswrapper[4606]: E0123 16:54:51.365493 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 16:54:51 crc kubenswrapper[4606]: E0123 16:54:51.365843 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 16:54:51 crc kubenswrapper[4606]: E0123 16:54:51.365967 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 16:54:51 crc kubenswrapper[4606]: I0123 16:54:51.366229 4606 scope.go:117] "RemoveContainer" containerID="7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8"
Jan 23 16:54:51 crc kubenswrapper[4606]: E0123 16:54:51.533645 4606 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 23 16:54:52 crc kubenswrapper[4606]: I0123 16:54:52.197411 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-qqwhl"]
Jan 23 16:54:52 crc kubenswrapper[4606]: I0123 16:54:52.197538 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl"
Jan 23 16:54:52 crc kubenswrapper[4606]: E0123 16:54:52.197655 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe"
Jan 23 16:54:52 crc kubenswrapper[4606]: I0123 16:54:52.213468 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j9tqd_a0f37a36-f502-425c-a9e5-035bf475742b/ovnkube-controller/3.log"
Jan 23 16:54:52 crc kubenswrapper[4606]: I0123 16:54:52.216359 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" event={"ID":"a0f37a36-f502-425c-a9e5-035bf475742b","Type":"ContainerStarted","Data":"17e37a119bb4d7f97ae127e24e5405535ae250f98b76f92518df5b8ebcaf67ce"}
Jan 23 16:54:52 crc kubenswrapper[4606]: I0123 16:54:52.216874 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd"
Jan 23 16:54:53 crc kubenswrapper[4606]: I0123 16:54:53.365422 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 16:54:53 crc kubenswrapper[4606]: I0123 16:54:53.365503 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 16:54:53 crc kubenswrapper[4606]: E0123 16:54:53.365914 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 16:54:53 crc kubenswrapper[4606]: I0123 16:54:53.365516 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 16:54:53 crc kubenswrapper[4606]: E0123 16:54:53.366002 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 16:54:53 crc kubenswrapper[4606]: E0123 16:54:53.366093 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 16:54:54 crc kubenswrapper[4606]: I0123 16:54:54.365575 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl"
Jan 23 16:54:54 crc kubenswrapper[4606]: E0123 16:54:54.365805 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe"
Jan 23 16:54:55 crc kubenswrapper[4606]: I0123 16:54:55.365460 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 16:54:55 crc kubenswrapper[4606]: I0123 16:54:55.365541 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 16:54:55 crc kubenswrapper[4606]: E0123 16:54:55.365674 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 16:54:55 crc kubenswrapper[4606]: I0123 16:54:55.365818 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 16:54:55 crc kubenswrapper[4606]: E0123 16:54:55.366146 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 16:54:55 crc kubenswrapper[4606]: E0123 16:54:55.366366 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 16:54:56 crc kubenswrapper[4606]: I0123 16:54:56.365202 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl"
Jan 23 16:54:56 crc kubenswrapper[4606]: E0123 16:54:56.366577 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe"
Jan 23 16:54:56 crc kubenswrapper[4606]: E0123 16:54:56.534684 4606 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 23 16:54:57 crc kubenswrapper[4606]: I0123 16:54:57.365191 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 16:54:57 crc kubenswrapper[4606]: I0123 16:54:57.365280 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 16:54:57 crc kubenswrapper[4606]: E0123 16:54:57.365337 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 16:54:57 crc kubenswrapper[4606]: I0123 16:54:57.365532 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 16:54:57 crc kubenswrapper[4606]: E0123 16:54:57.365674 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:54:57 crc kubenswrapper[4606]: I0123 16:54:57.365995 4606 scope.go:117] "RemoveContainer" containerID="e1c017382ddcc3e55e81de6fed924558f181ebf81f3e1cdf9323f5c7e89401f1" Jan 23 16:54:57 crc kubenswrapper[4606]: I0123 16:54:57.398762 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" podStartSLOduration=109.398738799 podStartE2EDuration="1m49.398738799s" podCreationTimestamp="2026-01-23 16:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:54:52.253626858 +0000 UTC m=+126.253819284" watchObservedRunningTime="2026-01-23 16:54:57.398738799 +0000 UTC m=+131.398931215" Jan 23 16:54:58 crc kubenswrapper[4606]: I0123 16:54:58.238115 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ftp8l_2692a882-403f-4842-b756-53ed7a38fa7f/kube-multus/1.log" Jan 23 16:54:58 crc kubenswrapper[4606]: I0123 16:54:58.238459 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ftp8l" event={"ID":"2692a882-403f-4842-b756-53ed7a38fa7f","Type":"ContainerStarted","Data":"fa71a124f07e22c3f4acfecce21b90563f36930a95ef0f7632c84065526c172e"} Jan 23 16:54:58 crc kubenswrapper[4606]: I0123 16:54:58.365551 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:54:58 crc kubenswrapper[4606]: E0123 16:54:58.365687 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe" Jan 23 16:54:59 crc kubenswrapper[4606]: I0123 16:54:59.365740 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:54:59 crc kubenswrapper[4606]: I0123 16:54:59.365822 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:54:59 crc kubenswrapper[4606]: I0123 16:54:59.365899 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:54:59 crc kubenswrapper[4606]: E0123 16:54:59.365909 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:54:59 crc kubenswrapper[4606]: E0123 16:54:59.366097 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:54:59 crc kubenswrapper[4606]: E0123 16:54:59.366171 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:55:00 crc kubenswrapper[4606]: I0123 16:55:00.365351 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:55:00 crc kubenswrapper[4606]: E0123 16:55:00.365816 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qqwhl" podUID="bae240c5-8926-4fc6-b0d9-fdf64bacecfe" Jan 23 16:55:01 crc kubenswrapper[4606]: I0123 16:55:01.364875 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:55:01 crc kubenswrapper[4606]: I0123 16:55:01.364965 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:55:01 crc kubenswrapper[4606]: E0123 16:55:01.365082 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 16:55:01 crc kubenswrapper[4606]: I0123 16:55:01.364920 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:55:01 crc kubenswrapper[4606]: E0123 16:55:01.365257 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 16:55:01 crc kubenswrapper[4606]: E0123 16:55:01.365362 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 16:55:02 crc kubenswrapper[4606]: I0123 16:55:02.365104 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:55:02 crc kubenswrapper[4606]: I0123 16:55:02.368022 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 23 16:55:02 crc kubenswrapper[4606]: I0123 16:55:02.368252 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 23 16:55:03 crc kubenswrapper[4606]: I0123 16:55:03.365396 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:55:03 crc kubenswrapper[4606]: I0123 16:55:03.365430 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:55:03 crc kubenswrapper[4606]: I0123 16:55:03.365395 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:55:03 crc kubenswrapper[4606]: I0123 16:55:03.367734 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 23 16:55:03 crc kubenswrapper[4606]: I0123 16:55:03.367737 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 23 16:55:03 crc kubenswrapper[4606]: I0123 16:55:03.368299 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 23 16:55:03 crc kubenswrapper[4606]: I0123 16:55:03.369147 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.199439 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.235723 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vfv9l"] Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.236163 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.236684 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-stn97"] Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.236980 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-stn97" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.239647 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fbv4b"] Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.240024 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fbv4b" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.272749 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.273202 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.273380 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.273538 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.273734 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.273885 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 23 16:55:06 crc kubenswrapper[4606]: W0123 16:55:06.274026 4606 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv": failed to list *v1.Secret: secrets "openshift-apiserver-operator-dockercfg-xtcjv" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object Jan 23 16:55:06 crc kubenswrapper[4606]: E0123 16:55:06.274061 4606 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-xtcjv\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-apiserver-operator-dockercfg-xtcjv\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 16:55:06 crc kubenswrapper[4606]: W0123 16:55:06.274112 4606 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object Jan 23 16:55:06 crc kubenswrapper[4606]: E0123 16:55:06.274130 4606 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 16:55:06 crc kubenswrapper[4606]: W0123 16:55:06.274172 4606 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert": failed to list *v1.Secret: secrets "openshift-apiserver-operator-serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object Jan 23 16:55:06 
crc kubenswrapper[4606]: E0123 16:55:06.274185 4606 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-apiserver-operator-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 16:55:06 crc kubenswrapper[4606]: W0123 16:55:06.274222 4606 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config": failed to list *v1.ConfigMap: configmaps "openshift-apiserver-operator-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object Jan 23 16:55:06 crc kubenswrapper[4606]: E0123 16:55:06.274235 4606 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-apiserver-operator-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 16:55:06 crc kubenswrapper[4606]: W0123 16:55:06.274293 4606 reflector.go:561] object-"openshift-apiserver-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object Jan 23 16:55:06 crc kubenswrapper[4606]: E0123 16:55:06.274312 4606 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.274373 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.274529 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.276284 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.276785 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.280114 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dc495cd6-f31a-4b33-9c14-12b37c25f47c-audit-policies\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" Jan 23 
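The forbidden / "no relationship found between node 'crc' and this object" warnings above are the node authorizer at work: a kubelet may read a secret or configmap only after the API server knows a pod bound to that node references it, so reflectors started in the window before the pod binding registers are rejected and then succeed on retry (the successful "Caches populated" lines interleaved here). A toy model of the relationship check, with all types and names invented for illustration:

package main

import "fmt"

// podRef is an invented stand-in for the node-pod-secret graph the node
// authorizer walks before letting a kubelet read an object.
type podRef struct {
	node    string
	secrets []string
}

func nodeCanListSecret(node, secret string, pods []podRef) bool {
	for _, p := range pods {
		if p.node != node {
			continue
		}
		for _, s := range p.secrets {
			if s == secret {
				return true // relationship: node -> pod -> secret
			}
		}
	}
	return false // "no relationship found between node '...' and this object"
}

func main() {
	// Before a pod referencing the secret is bound to node crc, the read is denied.
	fmt.Println(nodeCanListSecret("crc", "v4-0-config-system-session", nil)) // false
	// Once the binding is visible, the same read is allowed.
	bound := []podRef{{node: "crc", secrets: []string{"v4-0-config-system-session"}}}
	fmt.Println(nodeCanListSecret("crc", "v4-0-config-system-session", bound)) // true
}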
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.280170 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkm2f\" (UniqueName: \"kubernetes.io/projected/5f5bac36-b78c-4f29-b08e-419fe3010b57-kube-api-access-vkm2f\") pod \"downloads-7954f5f757-stn97\" (UID: \"5f5bac36-b78c-4f29-b08e-419fe3010b57\") " pod="openshift-console/downloads-7954f5f757-stn97"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.280202 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.280232 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.280283 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.280307 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.280336 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcr5s\" (UniqueName: \"kubernetes.io/projected/94efdf3d-0abf-4b5d-944f-efd289975ee1-kube-api-access-jcr5s\") pod \"openshift-apiserver-operator-796bbdcf4f-fbv4b\" (UID: \"94efdf3d-0abf-4b5d-944f-efd289975ee1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fbv4b"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.280364 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.280402 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.280428 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dc495cd6-f31a-4b33-9c14-12b37c25f47c-audit-dir\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.280452 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.280482 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.280510 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94efdf3d-0abf-4b5d-944f-efd289975ee1-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-fbv4b\" (UID: \"94efdf3d-0abf-4b5d-944f-efd289975ee1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fbv4b"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.280539 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.280570 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.280644 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94efdf3d-0abf-4b5d-944f-efd289975ee1-config\") pod \"openshift-apiserver-operator-796bbdcf4f-fbv4b\" (UID: \"94efdf3d-0abf-4b5d-944f-efd289975ee1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fbv4b"
\"kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.280698 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxjr8\" (UniqueName: \"kubernetes.io/projected/dc495cd6-f31a-4b33-9c14-12b37c25f47c-kube-api-access-qxjr8\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.281091 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-whw2p"] Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.282359 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc"] Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.282731 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-whw2p" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.282962 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.284744 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.302638 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-psdkv"] Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.303234 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-psdkv" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.303268 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mjj29"] Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.303976 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mjj29" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.304945 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p7fm6"] Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.305250 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p7fm6" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.318827 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.319231 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.319327 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.339170 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.339856 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.372483 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6fb9c"] Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.372980 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-6fb9c" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.382043 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9-audit\") pod \"apiserver-76f77b778f-psdkv\" (UID: \"f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9\") " pod="openshift-apiserver/apiserver-76f77b778f-psdkv" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.382091 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.382131 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4424c8ad-c7cc-418f-a322-ef8764b076ca-audit-policies\") pod \"apiserver-7bbb656c7d-qskqc\" (UID: \"4424c8ad-c7cc-418f-a322-ef8764b076ca\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.382155 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.382178 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.382198 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/07901fe1-c88c-4400-b353-b0ebf0406e6d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-p7fm6\" (UID: \"07901fe1-c88c-4400-b353-b0ebf0406e6d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p7fm6" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.382223 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcr5s\" (UniqueName: \"kubernetes.io/projected/94efdf3d-0abf-4b5d-944f-efd289975ee1-kube-api-access-jcr5s\") pod \"openshift-apiserver-operator-796bbdcf4f-fbv4b\" (UID: \"94efdf3d-0abf-4b5d-944f-efd289975ee1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fbv4b" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.382243 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.382263 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4424c8ad-c7cc-418f-a322-ef8764b076ca-etcd-client\") pod \"apiserver-7bbb656c7d-qskqc\" (UID: \"4424c8ad-c7cc-418f-a322-ef8764b076ca\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.382282 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4424c8ad-c7cc-418f-a322-ef8764b076ca-audit-dir\") pod \"apiserver-7bbb656c7d-qskqc\" (UID: \"4424c8ad-c7cc-418f-a322-ef8764b076ca\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.382304 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.382324 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e9e2c8c-5d03-48b0-ab9c-110eeef212b5-serving-cert\") pod \"openshift-config-operator-7777fb866f-whw2p\" (UID: \"9e9e2c8c-5d03-48b0-ab9c-110eeef212b5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-whw2p" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.382346 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4424c8ad-c7cc-418f-a322-ef8764b076ca-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-qskqc\" (UID: \"4424c8ad-c7cc-418f-a322-ef8764b076ca\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.382367 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4424c8ad-c7cc-418f-a322-ef8764b076ca-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-qskqc\" (UID: \"4424c8ad-c7cc-418f-a322-ef8764b076ca\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.382400 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcc75\" (UniqueName: \"kubernetes.io/projected/281c0e56-ca86-4662-b312-2322d51c2bba-kube-api-access-dcc75\") pod \"cluster-samples-operator-665b6dd947-mjj29\" (UID: \"281c0e56-ca86-4662-b312-2322d51c2bba\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mjj29" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.382422 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4424c8ad-c7cc-418f-a322-ef8764b076ca-serving-cert\") pod \"apiserver-7bbb656c7d-qskqc\" (UID: \"4424c8ad-c7cc-418f-a322-ef8764b076ca\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.382445 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tghnm\" (UniqueName: \"kubernetes.io/projected/9e9e2c8c-5d03-48b0-ab9c-110eeef212b5-kube-api-access-tghnm\") pod \"openshift-config-operator-7777fb866f-whw2p\" (UID: \"9e9e2c8c-5d03-48b0-ab9c-110eeef212b5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-whw2p" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.382471 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dc495cd6-f31a-4b33-9c14-12b37c25f47c-audit-dir\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.382497 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.382523 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldvkr\" (UniqueName: \"kubernetes.io/projected/07901fe1-c88c-4400-b353-b0ebf0406e6d-kube-api-access-ldvkr\") pod \"cluster-image-registry-operator-dc59b4c8b-p7fm6\" (UID: \"07901fe1-c88c-4400-b353-b0ebf0406e6d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p7fm6" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.382550 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9e9e2c8c-5d03-48b0-ab9c-110eeef212b5-available-featuregates\") pod \"openshift-config-operator-7777fb866f-whw2p\" (UID: \"9e9e2c8c-5d03-48b0-ab9c-110eeef212b5\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-whw2p" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.382576 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.382675 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/07901fe1-c88c-4400-b353-b0ebf0406e6d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-p7fm6\" (UID: \"07901fe1-c88c-4400-b353-b0ebf0406e6d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p7fm6" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.382699 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94efdf3d-0abf-4b5d-944f-efd289975ee1-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-fbv4b\" (UID: \"94efdf3d-0abf-4b5d-944f-efd289975ee1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fbv4b" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.382722 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.382746 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.382762 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9-etcd-serving-ca\") pod \"apiserver-76f77b778f-psdkv\" (UID: \"f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9\") " pod="openshift-apiserver/apiserver-76f77b778f-psdkv" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.382793 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9-serving-cert\") pod \"apiserver-76f77b778f-psdkv\" (UID: \"f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9\") " pod="openshift-apiserver/apiserver-76f77b778f-psdkv" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.382808 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/07901fe1-c88c-4400-b353-b0ebf0406e6d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-p7fm6\" (UID: \"07901fe1-c88c-4400-b353-b0ebf0406e6d\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p7fm6" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.382825 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94efdf3d-0abf-4b5d-944f-efd289975ee1-config\") pod \"openshift-apiserver-operator-796bbdcf4f-fbv4b\" (UID: \"94efdf3d-0abf-4b5d-944f-efd289975ee1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fbv4b" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.382842 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.382859 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxjr8\" (UniqueName: \"kubernetes.io/projected/dc495cd6-f31a-4b33-9c14-12b37c25f47c-kube-api-access-qxjr8\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.382875 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9-etcd-client\") pod \"apiserver-76f77b778f-psdkv\" (UID: \"f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9\") " pod="openshift-apiserver/apiserver-76f77b778f-psdkv" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.382893 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9-trusted-ca-bundle\") pod \"apiserver-76f77b778f-psdkv\" (UID: \"f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9\") " pod="openshift-apiserver/apiserver-76f77b778f-psdkv" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.382907 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4424c8ad-c7cc-418f-a322-ef8764b076ca-encryption-config\") pod \"apiserver-7bbb656c7d-qskqc\" (UID: \"4424c8ad-c7cc-418f-a322-ef8764b076ca\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.382944 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxds9\" (UniqueName: \"kubernetes.io/projected/4424c8ad-c7cc-418f-a322-ef8764b076ca-kube-api-access-fxds9\") pod \"apiserver-7bbb656c7d-qskqc\" (UID: \"4424c8ad-c7cc-418f-a322-ef8764b076ca\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.382963 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9-node-pullsecrets\") pod \"apiserver-76f77b778f-psdkv\" (UID: \"f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9\") " pod="openshift-apiserver/apiserver-76f77b778f-psdkv" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 
16:55:06.382985 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dc495cd6-f31a-4b33-9c14-12b37c25f47c-audit-dir\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.382994 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9-config\") pod \"apiserver-76f77b778f-psdkv\" (UID: \"f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9\") " pod="openshift-apiserver/apiserver-76f77b778f-psdkv" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.383014 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dc495cd6-f31a-4b33-9c14-12b37c25f47c-audit-policies\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.383031 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9-encryption-config\") pod \"apiserver-76f77b778f-psdkv\" (UID: \"f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9\") " pod="openshift-apiserver/apiserver-76f77b778f-psdkv" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.383046 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/281c0e56-ca86-4662-b312-2322d51c2bba-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-mjj29\" (UID: \"281c0e56-ca86-4662-b312-2322d51c2bba\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mjj29" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.383063 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9-image-import-ca\") pod \"apiserver-76f77b778f-psdkv\" (UID: \"f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9\") " pod="openshift-apiserver/apiserver-76f77b778f-psdkv" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.383078 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9-audit-dir\") pod \"apiserver-76f77b778f-psdkv\" (UID: \"f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9\") " pod="openshift-apiserver/apiserver-76f77b778f-psdkv" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.383096 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkm2f\" (UniqueName: \"kubernetes.io/projected/5f5bac36-b78c-4f29-b08e-419fe3010b57-kube-api-access-vkm2f\") pod \"downloads-7954f5f757-stn97\" (UID: \"5f5bac36-b78c-4f29-b08e-419fe3010b57\") " pod="openshift-console/downloads-7954f5f757-stn97" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.383112 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.383128 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djzh6\" (UniqueName: \"kubernetes.io/projected/f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9-kube-api-access-djzh6\") pod \"apiserver-76f77b778f-psdkv\" (UID: \"f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9\") " pod="openshift-apiserver/apiserver-76f77b778f-psdkv" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.383283 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.383377 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.383684 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dc495cd6-f31a-4b33-9c14-12b37c25f47c-audit-policies\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.389626 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.389759 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.395502 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ddvp8"] Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.398262 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ddvp8" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.398962 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.400648 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.402936 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.407369 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.408432 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.409361 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.409878 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.410020 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.410171 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.410332 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.410623 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.410825 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.410972 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.411117 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.411254 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.411388 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.411537 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.411868 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.412028 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.412169 4606 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.412314 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.412454 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.412615 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.413777 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.413970 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.414108 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.414248 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.414381 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.414504 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.414662 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.414799 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.414942 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.415088 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.415226 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.417293 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.419800 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: 
\"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.420785 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.421038 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.421069 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.421695 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.422109 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.423251 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.426170 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.426321 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.450912 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkm2f\" (UniqueName: \"kubernetes.io/projected/5f5bac36-b78c-4f29-b08e-419fe3010b57-kube-api-access-vkm2f\") pod \"downloads-7954f5f757-stn97\" (UID: \"5f5bac36-b78c-4f29-b08e-419fe3010b57\") " pod="openshift-console/downloads-7954f5f757-stn97" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.451671 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.453535 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.455810 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-kpm7l"] Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.456475 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-wlpwz"] Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.457474 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-wlpwz" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.460836 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-2b7xg"] Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.485897 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.491249 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.491647 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-kpm7l" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.491990 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-2b7xg" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.492343 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.493134 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxjr8\" (UniqueName: \"kubernetes.io/projected/dc495cd6-f31a-4b33-9c14-12b37c25f47c-kube-api-access-qxjr8\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.493493 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.493699 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rqqd5"] Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.493714 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.494305 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rqqd5" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.497363 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-nqpmj"] Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.499020 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.499045 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.499232 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.499337 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.499494 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.499510 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.499531 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.499613 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.499655 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.499708 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.499716 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.499863 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.499973 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.500044 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.500104 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.500109 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.500274 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.501238 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.501733 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9-trusted-ca-bundle\") pod \"apiserver-76f77b778f-psdkv\" (UID: \"f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9\") " pod="openshift-apiserver/apiserver-76f77b778f-psdkv" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.501818 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4424c8ad-c7cc-418f-a322-ef8764b076ca-encryption-config\") pod \"apiserver-7bbb656c7d-qskqc\" (UID: \"4424c8ad-c7cc-418f-a322-ef8764b076ca\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.502030 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxds9\" (UniqueName: \"kubernetes.io/projected/4424c8ad-c7cc-418f-a322-ef8764b076ca-kube-api-access-fxds9\") pod \"apiserver-7bbb656c7d-qskqc\" (UID: \"4424c8ad-c7cc-418f-a322-ef8764b076ca\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.502054 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9-node-pullsecrets\") pod \"apiserver-76f77b778f-psdkv\" (UID: \"f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9\") " pod="openshift-apiserver/apiserver-76f77b778f-psdkv" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.502269 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9-node-pullsecrets\") pod \"apiserver-76f77b778f-psdkv\" (UID: \"f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9\") " pod="openshift-apiserver/apiserver-76f77b778f-psdkv" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.502319 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9-config\") pod \"apiserver-76f77b778f-psdkv\" (UID: \"f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9\") " pod="openshift-apiserver/apiserver-76f77b778f-psdkv" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.503063 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9-config\") pod \"apiserver-76f77b778f-psdkv\" (UID: \"f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9\") " pod="openshift-apiserver/apiserver-76f77b778f-psdkv" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.503218 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f542164e-05c2-4746-988b-0b4676772078-service-ca-bundle\") pod \"authentication-operator-69f744f599-kpm7l\" (UID: \"f542164e-05c2-4746-988b-0b4676772078\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kpm7l" Jan 23 16:55:06 crc 
kubenswrapper[4606]: I0123 16:55:06.503251 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9-encryption-config\") pod \"apiserver-76f77b778f-psdkv\" (UID: \"f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9\") " pod="openshift-apiserver/apiserver-76f77b778f-psdkv"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.503270 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/281c0e56-ca86-4662-b312-2322d51c2bba-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-mjj29\" (UID: \"281c0e56-ca86-4662-b312-2322d51c2bba\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mjj29"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.505497 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-stn97"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.505541 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gmw5q"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.505737 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.505794 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9-image-import-ca\") pod \"apiserver-76f77b778f-psdkv\" (UID: \"f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9\") " pod="openshift-apiserver/apiserver-76f77b778f-psdkv"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.505812 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-tzrgs"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.505816 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9-audit-dir\") pod \"apiserver-76f77b778f-psdkv\" (UID: \"f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9\") " pod="openshift-apiserver/apiserver-76f77b778f-psdkv"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.505872 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djzh6\" (UniqueName: \"kubernetes.io/projected/f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9-kube-api-access-djzh6\") pod \"apiserver-76f77b778f-psdkv\" (UID: \"f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9\") " pod="openshift-apiserver/apiserver-76f77b778f-psdkv"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.505890 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9-audit\") pod \"apiserver-76f77b778f-psdkv\" (UID: \"f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9\") " pod="openshift-apiserver/apiserver-76f77b778f-psdkv"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.505912 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tbpl\" (UniqueName: \"kubernetes.io/projected/f542164e-05c2-4746-988b-0b4676772078-kube-api-access-9tbpl\") pod \"authentication-operator-69f744f599-kpm7l\" (UID: \"f542164e-05c2-4746-988b-0b4676772078\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kpm7l"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.505942 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4424c8ad-c7cc-418f-a322-ef8764b076ca-audit-policies\") pod \"apiserver-7bbb656c7d-qskqc\" (UID: \"4424c8ad-c7cc-418f-a322-ef8764b076ca\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.505958 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f542164e-05c2-4746-988b-0b4676772078-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-kpm7l\" (UID: \"f542164e-05c2-4746-988b-0b4676772078\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kpm7l"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.505977 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/07901fe1-c88c-4400-b353-b0ebf0406e6d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-p7fm6\" (UID: \"07901fe1-c88c-4400-b353-b0ebf0406e6d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p7fm6"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.506000 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4424c8ad-c7cc-418f-a322-ef8764b076ca-etcd-client\") pod \"apiserver-7bbb656c7d-qskqc\" (UID: \"4424c8ad-c7cc-418f-a322-ef8764b076ca\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.506015 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4424c8ad-c7cc-418f-a322-ef8764b076ca-audit-dir\") pod \"apiserver-7bbb656c7d-qskqc\" (UID: \"4424c8ad-c7cc-418f-a322-ef8764b076ca\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.506042 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e9e2c8c-5d03-48b0-ab9c-110eeef212b5-serving-cert\") pod \"openshift-config-operator-7777fb866f-whw2p\" (UID: \"9e9e2c8c-5d03-48b0-ab9c-110eeef212b5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-whw2p"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.506059 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4424c8ad-c7cc-418f-a322-ef8764b076ca-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-qskqc\" (UID: \"4424c8ad-c7cc-418f-a322-ef8764b076ca\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.506074 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4424c8ad-c7cc-418f-a322-ef8764b076ca-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-qskqc\" (UID: \"4424c8ad-c7cc-418f-a322-ef8764b076ca\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.506111 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcc75\" (UniqueName: \"kubernetes.io/projected/281c0e56-ca86-4662-b312-2322d51c2bba-kube-api-access-dcc75\") pod \"cluster-samples-operator-665b6dd947-mjj29\" (UID: \"281c0e56-ca86-4662-b312-2322d51c2bba\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mjj29"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.506129 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4424c8ad-c7cc-418f-a322-ef8764b076ca-serving-cert\") pod \"apiserver-7bbb656c7d-qskqc\" (UID: \"4424c8ad-c7cc-418f-a322-ef8764b076ca\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.506144 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tghnm\" (UniqueName: \"kubernetes.io/projected/9e9e2c8c-5d03-48b0-ab9c-110eeef212b5-kube-api-access-tghnm\") pod \"openshift-config-operator-7777fb866f-whw2p\" (UID: \"9e9e2c8c-5d03-48b0-ab9c-110eeef212b5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-whw2p"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.506160 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q8fqr"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.506282 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9-audit-dir\") pod \"apiserver-76f77b778f-psdkv\" (UID: \"f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9\") " pod="openshift-apiserver/apiserver-76f77b778f-psdkv"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.506402 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vjv2l"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.506884 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9-image-import-ca\") pod \"apiserver-76f77b778f-psdkv\" (UID: \"f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9\") " pod="openshift-apiserver/apiserver-76f77b778f-psdkv"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.507000 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9-audit\") pod \"apiserver-76f77b778f-psdkv\" (UID: \"f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9\") " pod="openshift-apiserver/apiserver-76f77b778f-psdkv"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.507408 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-vjv2l"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.507458 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4424c8ad-c7cc-418f-a322-ef8764b076ca-audit-policies\") pod \"apiserver-7bbb656c7d-qskqc\" (UID: \"4424c8ad-c7cc-418f-a322-ef8764b076ca\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.507942 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4424c8ad-c7cc-418f-a322-ef8764b076ca-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-qskqc\" (UID: \"4424c8ad-c7cc-418f-a322-ef8764b076ca\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.508237 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4424c8ad-c7cc-418f-a322-ef8764b076ca-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-qskqc\" (UID: \"4424c8ad-c7cc-418f-a322-ef8764b076ca\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.511367 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nqpmj"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.515952 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4424c8ad-c7cc-418f-a322-ef8764b076ca-encryption-config\") pod \"apiserver-7bbb656c7d-qskqc\" (UID: \"4424c8ad-c7cc-418f-a322-ef8764b076ca\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.516149 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9-encryption-config\") pod \"apiserver-76f77b778f-psdkv\" (UID: \"f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9\") " pod="openshift-apiserver/apiserver-76f77b778f-psdkv"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.505506 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9-trusted-ca-bundle\") pod \"apiserver-76f77b778f-psdkv\" (UID: \"f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9\") " pod="openshift-apiserver/apiserver-76f77b778f-psdkv"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.516632 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gmw5q"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.516839 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-tzrgs"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.506160 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldvkr\" (UniqueName: \"kubernetes.io/projected/07901fe1-c88c-4400-b353-b0ebf0406e6d-kube-api-access-ldvkr\") pod \"cluster-image-registry-operator-dc59b4c8b-p7fm6\" (UID: \"07901fe1-c88c-4400-b353-b0ebf0406e6d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p7fm6"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.517031 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q8fqr"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.518073 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9e9e2c8c-5d03-48b0-ab9c-110eeef212b5-available-featuregates\") pod \"openshift-config-operator-7777fb866f-whw2p\" (UID: \"9e9e2c8c-5d03-48b0-ab9c-110eeef212b5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-whw2p"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.518438 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9e9e2c8c-5d03-48b0-ab9c-110eeef212b5-available-featuregates\") pod \"openshift-config-operator-7777fb866f-whw2p\" (UID: \"9e9e2c8c-5d03-48b0-ab9c-110eeef212b5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-whw2p"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.518491 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/07901fe1-c88c-4400-b353-b0ebf0406e6d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-p7fm6\" (UID: \"07901fe1-c88c-4400-b353-b0ebf0406e6d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p7fm6"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.519490 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/07901fe1-c88c-4400-b353-b0ebf0406e6d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-p7fm6\" (UID: \"07901fe1-c88c-4400-b353-b0ebf0406e6d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p7fm6"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.519529 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f542164e-05c2-4746-988b-0b4676772078-config\") pod \"authentication-operator-69f744f599-kpm7l\" (UID: \"f542164e-05c2-4746-988b-0b4676772078\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kpm7l"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.519571 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9-etcd-serving-ca\") pod \"apiserver-76f77b778f-psdkv\" (UID: \"f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9\") " pod="openshift-apiserver/apiserver-76f77b778f-psdkv"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.520240 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.524098 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9-etcd-serving-ca\") pod \"apiserver-76f77b778f-psdkv\" (UID: \"f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9\") " pod="openshift-apiserver/apiserver-76f77b778f-psdkv"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.524148 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9-serving-cert\") pod \"apiserver-76f77b778f-psdkv\" (UID: \"f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9\") " pod="openshift-apiserver/apiserver-76f77b778f-psdkv"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.524170 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/07901fe1-c88c-4400-b353-b0ebf0406e6d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-p7fm6\" (UID: \"07901fe1-c88c-4400-b353-b0ebf0406e6d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p7fm6"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.525318 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4424c8ad-c7cc-418f-a322-ef8764b076ca-serving-cert\") pod \"apiserver-7bbb656c7d-qskqc\" (UID: \"4424c8ad-c7cc-418f-a322-ef8764b076ca\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.526736 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.526912 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.527023 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.527410 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.527490 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.527555 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.527693 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.533692 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/281c0e56-ca86-4662-b312-2322d51c2bba-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-mjj29\" (UID: \"281c0e56-ca86-4662-b312-2322d51c2bba\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mjj29"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.534103 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/07901fe1-c88c-4400-b353-b0ebf0406e6d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-p7fm6\" (UID: \"07901fe1-c88c-4400-b353-b0ebf0406e6d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p7fm6"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.534581 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4424c8ad-c7cc-418f-a322-ef8764b076ca-audit-dir\") pod \"apiserver-7bbb656c7d-qskqc\" (UID: \"4424c8ad-c7cc-418f-a322-ef8764b076ca\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.534693 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f542164e-05c2-4746-988b-0b4676772078-serving-cert\") pod \"authentication-operator-69f744f599-kpm7l\" (UID: \"f542164e-05c2-4746-988b-0b4676772078\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kpm7l"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.534833 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9-etcd-client\") pod \"apiserver-76f77b778f-psdkv\" (UID: \"f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9\") " pod="openshift-apiserver/apiserver-76f77b778f-psdkv"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.536836 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e9e2c8c-5d03-48b0-ab9c-110eeef212b5-serving-cert\") pod \"openshift-config-operator-7777fb866f-whw2p\" (UID: \"9e9e2c8c-5d03-48b0-ab9c-110eeef212b5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-whw2p"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.537225 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9-serving-cert\") pod \"apiserver-76f77b778f-psdkv\" (UID: \"f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9\") " pod="openshift-apiserver/apiserver-76f77b778f-psdkv"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.537649 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-44sgl"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.538050 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-44sgl"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.539772 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.539970 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.540248 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.540493 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.540796 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.540980 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.541543 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.541577 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.541802 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.541989 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.542002 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.542963 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9-etcd-client\") pod \"apiserver-76f77b778f-psdkv\" (UID: \"f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9\") " pod="openshift-apiserver/apiserver-76f77b778f-psdkv"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.542986 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vfv9l\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.544716 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qwjhc"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.545182 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qwjhc"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.545963 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tghnm\" (UniqueName: \"kubernetes.io/projected/9e9e2c8c-5d03-48b0-ab9c-110eeef212b5-kube-api-access-tghnm\") pod \"openshift-config-operator-7777fb866f-whw2p\" (UID: \"9e9e2c8c-5d03-48b0-ab9c-110eeef212b5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-whw2p"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.547756 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5w4kv"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.549206 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p7fm6"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.549626 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5w4kv"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.549638 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2bspd"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.551006 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2bspd"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.552658 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4424c8ad-c7cc-418f-a322-ef8764b076ca-etcd-client\") pod \"apiserver-7bbb656c7d-qskqc\" (UID: \"4424c8ad-c7cc-418f-a322-ef8764b076ca\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.552751 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-c9dlf"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.555793 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-c9dlf"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.557352 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pdwg7"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.558225 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pdwg7"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.560534 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldvkr\" (UniqueName: \"kubernetes.io/projected/07901fe1-c88c-4400-b353-b0ebf0406e6d-kube-api-access-ldvkr\") pod \"cluster-image-registry-operator-dc59b4c8b-p7fm6\" (UID: \"07901fe1-c88c-4400-b353-b0ebf0406e6d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p7fm6"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.563996 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6ms6"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.565017 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6ms6"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.567794 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxds9\" (UniqueName: \"kubernetes.io/projected/4424c8ad-c7cc-418f-a322-ef8764b076ca-kube-api-access-fxds9\") pod \"apiserver-7bbb656c7d-qskqc\" (UID: \"4424c8ad-c7cc-418f-a322-ef8764b076ca\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.567929 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.572613 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c42wt"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.578131 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c42wt"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.580344 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djzh6\" (UniqueName: \"kubernetes.io/projected/f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9-kube-api-access-djzh6\") pod \"apiserver-76f77b778f-psdkv\" (UID: \"f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9\") " pod="openshift-apiserver/apiserver-76f77b778f-psdkv"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.581147 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vfv9l"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.581428 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-stn97"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.584409 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.586505 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wc5zr"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.587737 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wc5zr"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.591033 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-lxbdm"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.592417 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.594358 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lxbdm"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.594634 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psvzs"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.595198 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psvzs"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.595778 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcc75\" (UniqueName: \"kubernetes.io/projected/281c0e56-ca86-4662-b312-2322d51c2bba-kube-api-access-dcc75\") pod \"cluster-samples-operator-665b6dd947-mjj29\" (UID: \"281c0e56-ca86-4662-b312-2322d51c2bba\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mjj29"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.620191 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.620360 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-krwqd"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.621282 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-krwqd"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.622058 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-whw2p"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.627172 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.629648 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mjj29"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.629988 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-whw2p"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.636573 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.636630 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486445-rsdm2"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.637378 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486445-rsdm2"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.637509 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/767b5dce-2e44-401a-b7af-3cb97ff48f69-trusted-ca-bundle\") pod \"console-f9d7485db-wlpwz\" (UID: \"767b5dce-2e44-401a-b7af-3cb97ff48f69\") " pod="openshift-console/console-f9d7485db-wlpwz"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.637545 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d53dfb30-7e32-472e-8602-16c28f0a2b03-config\") pod \"controller-manager-879f6c89f-ddvp8\" (UID: \"d53dfb30-7e32-472e-8602-16c28f0a2b03\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ddvp8"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.637571 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f542164e-05c2-4746-988b-0b4676772078-serving-cert\") pod \"authentication-operator-69f744f599-kpm7l\" (UID: \"f542164e-05c2-4746-988b-0b4676772078\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kpm7l"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.637622 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6f6f9174-b914-45f1-b60c-7c88dfed6c14-trusted-ca\") pod \"console-operator-58897d9998-6fb9c\" (UID: \"6f6f9174-b914-45f1-b60c-7c88dfed6c14\") " pod="openshift-console-operator/console-operator-58897d9998-6fb9c"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.637647 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/767b5dce-2e44-401a-b7af-3cb97ff48f69-oauth-serving-cert\") pod \"console-f9d7485db-wlpwz\" (UID: \"767b5dce-2e44-401a-b7af-3cb97ff48f69\") " pod="openshift-console/console-f9d7485db-wlpwz"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.637668 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d27gt\" (UniqueName: \"kubernetes.io/projected/f33f2b3a-d022-41f9-9a1f-b8bb130128c7-kube-api-access-d27gt\") pod \"machine-approver-56656f9798-nqpmj\" (UID: \"f33f2b3a-d022-41f9-9a1f-b8bb130128c7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nqpmj"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.637689 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/eecab59c-60d2-4a32-955c-dbf919399422-images\") pod \"machine-api-operator-5694c8668f-2b7xg\" (UID: \"eecab59c-60d2-4a32-955c-dbf919399422\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2b7xg"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.637737 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/767b5dce-2e44-401a-b7af-3cb97ff48f69-console-config\") pod \"console-f9d7485db-wlpwz\" (UID: \"767b5dce-2e44-401a-b7af-3cb97ff48f69\") " pod="openshift-console/console-f9d7485db-wlpwz"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.637763 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f542164e-05c2-4746-988b-0b4676772078-service-ca-bundle\") pod \"authentication-operator-69f744f599-kpm7l\" (UID: \"f542164e-05c2-4746-988b-0b4676772078\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kpm7l"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.637784 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f33f2b3a-d022-41f9-9a1f-b8bb130128c7-auth-proxy-config\") pod \"machine-approver-56656f9798-nqpmj\" (UID: \"f33f2b3a-d022-41f9-9a1f-b8bb130128c7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nqpmj"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.637811 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8426\" (UniqueName: \"kubernetes.io/projected/767b5dce-2e44-401a-b7af-3cb97ff48f69-kube-api-access-t8426\") pod \"console-f9d7485db-wlpwz\" (UID: \"767b5dce-2e44-401a-b7af-3cb97ff48f69\") " pod="openshift-console/console-f9d7485db-wlpwz"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.637838 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tbpl\" (UniqueName: \"kubernetes.io/projected/f542164e-05c2-4746-988b-0b4676772078-kube-api-access-9tbpl\") pod \"authentication-operator-69f744f599-kpm7l\" (UID: \"f542164e-05c2-4746-988b-0b4676772078\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kpm7l"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.637858 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/767b5dce-2e44-401a-b7af-3cb97ff48f69-console-serving-cert\") pod \"console-f9d7485db-wlpwz\" (UID: \"767b5dce-2e44-401a-b7af-3cb97ff48f69\") " pod="openshift-console/console-f9d7485db-wlpwz"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.637890 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f542164e-05c2-4746-988b-0b4676772078-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-kpm7l\" (UID: \"f542164e-05c2-4746-988b-0b4676772078\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kpm7l"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.637915 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/767b5dce-2e44-401a-b7af-3cb97ff48f69-console-oauth-config\") pod \"console-f9d7485db-wlpwz\" (UID: \"767b5dce-2e44-401a-b7af-3cb97ff48f69\") " pod="openshift-console/console-f9d7485db-wlpwz"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.637940 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rr57\" (UniqueName: \"kubernetes.io/projected/6f6f9174-b914-45f1-b60c-7c88dfed6c14-kube-api-access-6rr57\") pod \"console-operator-58897d9998-6fb9c\" (UID: \"6f6f9174-b914-45f1-b60c-7c88dfed6c14\") " pod="openshift-console-operator/console-operator-58897d9998-6fb9c"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.637963 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/eecab59c-60d2-4a32-955c-dbf919399422-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-2b7xg\" (UID: \"eecab59c-60d2-4a32-955c-dbf919399422\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2b7xg"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.638006 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d53dfb30-7e32-472e-8602-16c28f0a2b03-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ddvp8\" (UID: \"d53dfb30-7e32-472e-8602-16c28f0a2b03\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ddvp8"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.638027 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f33f2b3a-d022-41f9-9a1f-b8bb130128c7-config\") pod \"machine-approver-56656f9798-nqpmj\" (UID: \"f33f2b3a-d022-41f9-9a1f-b8bb130128c7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nqpmj"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.638049 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d53dfb30-7e32-472e-8602-16c28f0a2b03-client-ca\") pod \"controller-manager-879f6c89f-ddvp8\" (UID: \"d53dfb30-7e32-472e-8602-16c28f0a2b03\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ddvp8"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.638068 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f33f2b3a-d022-41f9-9a1f-b8bb130128c7-machine-approver-tls\") pod \"machine-approver-56656f9798-nqpmj\" (UID: \"f33f2b3a-d022-41f9-9a1f-b8bb130128c7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nqpmj"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.638085 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/767b5dce-2e44-401a-b7af-3cb97ff48f69-service-ca\") pod \"console-f9d7485db-wlpwz\" (UID: \"767b5dce-2e44-401a-b7af-3cb97ff48f69\") " pod="openshift-console/console-f9d7485db-wlpwz"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.638101 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eecab59c-60d2-4a32-955c-dbf919399422-config\") pod \"machine-api-operator-5694c8668f-2b7xg\" (UID: \"eecab59c-60d2-4a32-955c-dbf919399422\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2b7xg"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.638118 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f542164e-05c2-4746-988b-0b4676772078-config\") pod \"authentication-operator-69f744f599-kpm7l\" (UID: \"f542164e-05c2-4746-988b-0b4676772078\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kpm7l"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.638134 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f6f9174-b914-45f1-b60c-7c88dfed6c14-config\") pod \"console-operator-58897d9998-6fb9c\" (UID: \"6f6f9174-b914-45f1-b60c-7c88dfed6c14\") " pod="openshift-console-operator/console-operator-58897d9998-6fb9c"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.638149 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h4f2\" (UniqueName: \"kubernetes.io/projected/eecab59c-60d2-4a32-955c-dbf919399422-kube-api-access-5h4f2\") pod \"machine-api-operator-5694c8668f-2b7xg\" (UID: \"eecab59c-60d2-4a32-955c-dbf919399422\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2b7xg"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.638169 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f6f9174-b914-45f1-b60c-7c88dfed6c14-serving-cert\") pod \"console-operator-58897d9998-6fb9c\" (UID: \"6f6f9174-b914-45f1-b60c-7c88dfed6c14\") " pod="openshift-console-operator/console-operator-58897d9998-6fb9c"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.638182 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d53dfb30-7e32-472e-8602-16c28f0a2b03-serving-cert\") pod \"controller-manager-879f6c89f-ddvp8\" (UID: \"d53dfb30-7e32-472e-8602-16c28f0a2b03\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ddvp8"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.638199 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxkz5\" (UniqueName: \"kubernetes.io/projected/d53dfb30-7e32-472e-8602-16c28f0a2b03-kube-api-access-fxkz5\") pod \"controller-manager-879f6c89f-ddvp8\" (UID: \"d53dfb30-7e32-472e-8602-16c28f0a2b03\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ddvp8"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.638877 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f542164e-05c2-4746-988b-0b4676772078-config\") pod \"authentication-operator-69f744f599-kpm7l\" (UID: \"f542164e-05c2-4746-988b-0b4676772078\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kpm7l"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.639030 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f542164e-05c2-4746-988b-0b4676772078-service-ca-bundle\") pod \"authentication-operator-69f744f599-kpm7l\" (UID: \"f542164e-05c2-4746-988b-0b4676772078\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kpm7l"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.640149 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f542164e-05c2-4746-988b-0b4676772078-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-kpm7l\" (UID: \"f542164e-05c2-4746-988b-0b4676772078\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kpm7l"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.641872 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.642731 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-ttzvm"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.643462 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ttzvm"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.646922 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.648243 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f542164e-05c2-4746-988b-0b4676772078-serving-cert\") pod \"authentication-operator-69f744f599-kpm7l\" (UID: \"f542164e-05c2-4746-988b-0b4676772078\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kpm7l"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.649854 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-v2w4q"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.650479 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v2w4q"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.652938 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fzn5r"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.658985 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.660624 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6fb9c"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.660652 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-jwmbk"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.661070 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fzn5r"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.661330 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-vrcsb"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.661651 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jwmbk"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.661685 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-6lv6t"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.662115 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6lv6t"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.662278 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-vrcsb"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.662628 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ddvp8"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.664205 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fbv4b"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.666364 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-2b7xg"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.667566 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-8fjr8"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.667992 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-psdkv"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.668459 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8fjr8"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.668849 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-psdkv"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.670283 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-mbmmh"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.671160 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-mbmmh"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.671781 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-kpm7l"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.673014 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-n5jq4"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.678144 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-n5jq4"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.681644 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.682293 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rqqd5"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.683060 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mjj29"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.683048 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vjv2l"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.692069 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-tzrgs"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.694189 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486445-rsdm2"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.695401 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gmw5q"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.697213 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2bspd"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.699121 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.700806 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-44sgl"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.702415 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wc5zr"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.704927 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-wlpwz"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.706172 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c42wt"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.707955 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-jwmbk"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.709678 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-ttzvm"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.709717 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-hms7g"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.730720 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.731195 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-c9dlf"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.731225 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qwjhc"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.731236 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psvzs"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.731314 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-hms7g"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.736952 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q8fqr"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.742405 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.759314 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f33f2b3a-d022-41f9-9a1f-b8bb130128c7-machine-approver-tls\") pod \"machine-approver-56656f9798-nqpmj\" (UID: \"f33f2b3a-d022-41f9-9a1f-b8bb130128c7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nqpmj"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.759349 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/767b5dce-2e44-401a-b7af-3cb97ff48f69-service-ca\") pod \"console-f9d7485db-wlpwz\" (UID: \"767b5dce-2e44-401a-b7af-3cb97ff48f69\") " pod="openshift-console/console-f9d7485db-wlpwz"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.759404 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eecab59c-60d2-4a32-955c-dbf919399422-config\") pod \"machine-api-operator-5694c8668f-2b7xg\" (UID: \"eecab59c-60d2-4a32-955c-dbf919399422\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2b7xg"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.759429 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f6f9174-b914-45f1-b60c-7c88dfed6c14-config\") pod \"console-operator-58897d9998-6fb9c\" (UID: \"6f6f9174-b914-45f1-b60c-7c88dfed6c14\") " pod="openshift-console-operator/console-operator-58897d9998-6fb9c"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.759451 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h4f2\" (UniqueName: \"kubernetes.io/projected/eecab59c-60d2-4a32-955c-dbf919399422-kube-api-access-5h4f2\") pod \"machine-api-operator-5694c8668f-2b7xg\" (UID: \"eecab59c-60d2-4a32-955c-dbf919399422\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2b7xg"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.759476 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f6f9174-b914-45f1-b60c-7c88dfed6c14-serving-cert\") pod \"console-operator-58897d9998-6fb9c\" (UID: \"6f6f9174-b914-45f1-b60c-7c88dfed6c14\") " pod="openshift-console-operator/console-operator-58897d9998-6fb9c"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.759498 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d53dfb30-7e32-472e-8602-16c28f0a2b03-serving-cert\") pod \"controller-manager-879f6c89f-ddvp8\" (UID: \"d53dfb30-7e32-472e-8602-16c28f0a2b03\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ddvp8"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.759528 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxkz5\" (UniqueName: \"kubernetes.io/projected/d53dfb30-7e32-472e-8602-16c28f0a2b03-kube-api-access-fxkz5\") pod \"controller-manager-879f6c89f-ddvp8\" (UID: \"d53dfb30-7e32-472e-8602-16c28f0a2b03\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ddvp8"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.759580 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/767b5dce-2e44-401a-b7af-3cb97ff48f69-trusted-ca-bundle\") pod \"console-f9d7485db-wlpwz\" (UID: \"767b5dce-2e44-401a-b7af-3cb97ff48f69\") " pod="openshift-console/console-f9d7485db-wlpwz"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.759623 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d53dfb30-7e32-472e-8602-16c28f0a2b03-config\") pod \"controller-manager-879f6c89f-ddvp8\" (UID: \"d53dfb30-7e32-472e-8602-16c28f0a2b03\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ddvp8"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.759646 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6f6f9174-b914-45f1-b60c-7c88dfed6c14-trusted-ca\") pod \"console-operator-58897d9998-6fb9c\" (UID: \"6f6f9174-b914-45f1-b60c-7c88dfed6c14\") " pod="openshift-console-operator/console-operator-58897d9998-6fb9c"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.759677 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d27gt\" (UniqueName: \"kubernetes.io/projected/f33f2b3a-d022-41f9-9a1f-b8bb130128c7-kube-api-access-d27gt\") pod \"machine-approver-56656f9798-nqpmj\" (UID: \"f33f2b3a-d022-41f9-9a1f-b8bb130128c7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nqpmj"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.759700 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/767b5dce-2e44-401a-b7af-3cb97ff48f69-oauth-serving-cert\") pod \"console-f9d7485db-wlpwz\" (UID: \"767b5dce-2e44-401a-b7af-3cb97ff48f69\") " pod="openshift-console/console-f9d7485db-wlpwz"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.759721 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/eecab59c-60d2-4a32-955c-dbf919399422-images\") pod \"machine-api-operator-5694c8668f-2b7xg\" (UID: \"eecab59c-60d2-4a32-955c-dbf919399422\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2b7xg"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.759760 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/767b5dce-2e44-401a-b7af-3cb97ff48f69-console-config\") pod \"console-f9d7485db-wlpwz\" (UID: \"767b5dce-2e44-401a-b7af-3cb97ff48f69\") " pod="openshift-console/console-f9d7485db-wlpwz"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.759787 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f33f2b3a-d022-41f9-9a1f-b8bb130128c7-auth-proxy-config\") pod \"machine-approver-56656f9798-nqpmj\" (UID: \"f33f2b3a-d022-41f9-9a1f-b8bb130128c7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nqpmj"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.759818 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8426\" (UniqueName: \"kubernetes.io/projected/767b5dce-2e44-401a-b7af-3cb97ff48f69-kube-api-access-t8426\") pod \"console-f9d7485db-wlpwz\" (UID: \"767b5dce-2e44-401a-b7af-3cb97ff48f69\") " pod="openshift-console/console-f9d7485db-wlpwz"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.759853 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/767b5dce-2e44-401a-b7af-3cb97ff48f69-console-serving-cert\") pod \"console-f9d7485db-wlpwz\" (UID: \"767b5dce-2e44-401a-b7af-3cb97ff48f69\") " pod="openshift-console/console-f9d7485db-wlpwz"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.759885 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/767b5dce-2e44-401a-b7af-3cb97ff48f69-console-oauth-config\") pod \"console-f9d7485db-wlpwz\" (UID: \"767b5dce-2e44-401a-b7af-3cb97ff48f69\") " pod="openshift-console/console-f9d7485db-wlpwz"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.759905 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rr57\" (UniqueName: \"kubernetes.io/projected/6f6f9174-b914-45f1-b60c-7c88dfed6c14-kube-api-access-6rr57\") pod \"console-operator-58897d9998-6fb9c\" (UID: \"6f6f9174-b914-45f1-b60c-7c88dfed6c14\") " pod="openshift-console-operator/console-operator-58897d9998-6fb9c"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.759924 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/eecab59c-60d2-4a32-955c-dbf919399422-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-2b7xg\" (UID: \"eecab59c-60d2-4a32-955c-dbf919399422\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2b7xg"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.759964 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d53dfb30-7e32-472e-8602-16c28f0a2b03-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ddvp8\" (UID: \"d53dfb30-7e32-472e-8602-16c28f0a2b03\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ddvp8"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.759982 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f33f2b3a-d022-41f9-9a1f-b8bb130128c7-config\") pod \"machine-approver-56656f9798-nqpmj\" (UID: \"f33f2b3a-d022-41f9-9a1f-b8bb130128c7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nqpmj"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.760005 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d53dfb30-7e32-472e-8602-16c28f0a2b03-client-ca\") pod \"controller-manager-879f6c89f-ddvp8\" (UID: \"d53dfb30-7e32-472e-8602-16c28f0a2b03\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ddvp8"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.768395 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/767b5dce-2e44-401a-b7af-3cb97ff48f69-trusted-ca-bundle\") pod \"console-f9d7485db-wlpwz\" (UID: \"767b5dce-2e44-401a-b7af-3cb97ff48f69\") " pod="openshift-console/console-f9d7485db-wlpwz"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.768699 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5w4kv"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.769601 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eecab59c-60d2-4a32-955c-dbf919399422-config\") pod \"machine-api-operator-5694c8668f-2b7xg\" (UID: \"eecab59c-60d2-4a32-955c-dbf919399422\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2b7xg"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.770282 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/767b5dce-2e44-401a-b7af-3cb97ff48f69-service-ca\") pod \"console-f9d7485db-wlpwz\" (UID: \"767b5dce-2e44-401a-b7af-3cb97ff48f69\") " pod="openshift-console/console-f9d7485db-wlpwz"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.770741 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f6f9174-b914-45f1-b60c-7c88dfed6c14-config\") pod \"console-operator-58897d9998-6fb9c\" (UID: \"6f6f9174-b914-45f1-b60c-7c88dfed6c14\") " pod="openshift-console-operator/console-operator-58897d9998-6fb9c"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.772163 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.772639 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f33f2b3a-d022-41f9-9a1f-b8bb130128c7-machine-approver-tls\") pod \"machine-approver-56656f9798-nqpmj\" (UID: \"f33f2b3a-d022-41f9-9a1f-b8bb130128c7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nqpmj"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.774001 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6ms6"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.774060 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-lxbdm"]
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.776736 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d53dfb30-7e32-472e-8602-16c28f0a2b03-client-ca\") pod \"controller-manager-879f6c89f-ddvp8\" (UID: \"d53dfb30-7e32-472e-8602-16c28f0a2b03\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ddvp8"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.776988 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d53dfb30-7e32-472e-8602-16c28f0a2b03-config\") pod \"controller-manager-879f6c89f-ddvp8\" (UID: \"d53dfb30-7e32-472e-8602-16c28f0a2b03\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ddvp8"
Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.777208 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f33f2b3a-d022-41f9-9a1f-b8bb130128c7-auth-proxy-config\") pod 
\"machine-approver-56656f9798-nqpmj\" (UID: \"f33f2b3a-d022-41f9-9a1f-b8bb130128c7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nqpmj" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.778031 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6f6f9174-b914-45f1-b60c-7c88dfed6c14-trusted-ca\") pod \"console-operator-58897d9998-6fb9c\" (UID: \"6f6f9174-b914-45f1-b60c-7c88dfed6c14\") " pod="openshift-console-operator/console-operator-58897d9998-6fb9c" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.778108 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d53dfb30-7e32-472e-8602-16c28f0a2b03-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ddvp8\" (UID: \"d53dfb30-7e32-472e-8602-16c28f0a2b03\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ddvp8" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.778470 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f33f2b3a-d022-41f9-9a1f-b8bb130128c7-config\") pod \"machine-approver-56656f9798-nqpmj\" (UID: \"f33f2b3a-d022-41f9-9a1f-b8bb130128c7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nqpmj" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.778565 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/767b5dce-2e44-401a-b7af-3cb97ff48f69-oauth-serving-cert\") pod \"console-f9d7485db-wlpwz\" (UID: \"767b5dce-2e44-401a-b7af-3cb97ff48f69\") " pod="openshift-console/console-f9d7485db-wlpwz" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.778934 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.779214 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/767b5dce-2e44-401a-b7af-3cb97ff48f69-console-config\") pod \"console-f9d7485db-wlpwz\" (UID: \"767b5dce-2e44-401a-b7af-3cb97ff48f69\") " pod="openshift-console/console-f9d7485db-wlpwz" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.779699 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/eecab59c-60d2-4a32-955c-dbf919399422-images\") pod \"machine-api-operator-5694c8668f-2b7xg\" (UID: \"eecab59c-60d2-4a32-955c-dbf919399422\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2b7xg" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.780609 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-mbmmh"] Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.781775 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-krwqd"] Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.783065 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/767b5dce-2e44-401a-b7af-3cb97ff48f69-console-oauth-config\") pod \"console-f9d7485db-wlpwz\" (UID: \"767b5dce-2e44-401a-b7af-3cb97ff48f69\") " pod="openshift-console/console-f9d7485db-wlpwz" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.785754 4606 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-v2w4q"] Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.787365 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/eecab59c-60d2-4a32-955c-dbf919399422-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-2b7xg\" (UID: \"eecab59c-60d2-4a32-955c-dbf919399422\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2b7xg" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.787869 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-n5jq4"] Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.788970 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-6lv6t"] Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.789311 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d53dfb30-7e32-472e-8602-16c28f0a2b03-serving-cert\") pod \"controller-manager-879f6c89f-ddvp8\" (UID: \"d53dfb30-7e32-472e-8602-16c28f0a2b03\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ddvp8" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.791077 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fzn5r"] Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.792384 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-hms7g"] Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.793423 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/767b5dce-2e44-401a-b7af-3cb97ff48f69-console-serving-cert\") pod \"console-f9d7485db-wlpwz\" (UID: \"767b5dce-2e44-401a-b7af-3cb97ff48f69\") " pod="openshift-console/console-f9d7485db-wlpwz" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.793548 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pdwg7"] Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.794814 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f6f9174-b914-45f1-b60c-7c88dfed6c14-serving-cert\") pod \"console-operator-58897d9998-6fb9c\" (UID: \"6f6f9174-b914-45f1-b60c-7c88dfed6c14\") " pod="openshift-console-operator/console-operator-58897d9998-6fb9c" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.802313 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.821003 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.838240 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.860048 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.923970 4606 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.927412 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/07901fe1-c88c-4400-b353-b0ebf0406e6d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-p7fm6\" (UID: \"07901fe1-c88c-4400-b353-b0ebf0406e6d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p7fm6" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.938504 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.959339 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.960634 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-whw2p"] Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.980112 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 23 16:55:06 crc kubenswrapper[4606]: I0123 16:55:06.998807 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.018492 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.019175 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc"] Jan 23 16:55:07 crc kubenswrapper[4606]: W0123 16:55:07.019530 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4424c8ad_c7cc_418f_a322_ef8764b076ca.slice/crio-eb738f0c0ba44ea46bf512e358f38bd823a5200fe88dd6af5f0f692966c149f2 WatchSource:0}: Error finding container eb738f0c0ba44ea46bf512e358f38bd823a5200fe88dd6af5f0f692966c149f2: Status 404 returned error can't find the container with id eb738f0c0ba44ea46bf512e358f38bd823a5200fe88dd6af5f0f692966c149f2 Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.039250 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.054269 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-psdkv"] Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.058489 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.077893 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-stn97"] Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.078562 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.087732 4606 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vfv9l"] Jan 23 16:55:07 crc kubenswrapper[4606]: W0123 16:55:07.090804 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f5bac36_b78c_4f29_b08e_419fe3010b57.slice/crio-94b77afbb71a978e36021c707a9a2f1cc042080ffc43b3e82239c5e19af919d2 WatchSource:0}: Error finding container 94b77afbb71a978e36021c707a9a2f1cc042080ffc43b3e82239c5e19af919d2: Status 404 returned error can't find the container with id 94b77afbb71a978e36021c707a9a2f1cc042080ffc43b3e82239c5e19af919d2 Jan 23 16:55:07 crc kubenswrapper[4606]: W0123 16:55:07.092890 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc495cd6_f31a_4b33_9c14_12b37c25f47c.slice/crio-ae87e16d03dd885dffb0b39fe6e2846bc4a46a12c3fa7969379f27d3f6d8ed8b WatchSource:0}: Error finding container ae87e16d03dd885dffb0b39fe6e2846bc4a46a12c3fa7969379f27d3f6d8ed8b: Status 404 returned error can't find the container with id ae87e16d03dd885dffb0b39fe6e2846bc4a46a12c3fa7969379f27d3f6d8ed8b Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.093730 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p7fm6" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.098500 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.118333 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.138833 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.158710 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.178796 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.199034 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.204054 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mjj29"] Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.218677 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.238477 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.259760 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.273082 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" 
event={"ID":"dc495cd6-f31a-4b33-9c14-12b37c25f47c","Type":"ContainerStarted","Data":"ae87e16d03dd885dffb0b39fe6e2846bc4a46a12c3fa7969379f27d3f6d8ed8b"} Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.275704 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-whw2p" event={"ID":"9e9e2c8c-5d03-48b0-ab9c-110eeef212b5","Type":"ContainerStarted","Data":"fe47c05dcfc8c77ee512c84b6bc167c5304200619f30a2917af0dae840dad8ca"} Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.278728 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.280042 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-stn97" event={"ID":"5f5bac36-b78c-4f29-b08e-419fe3010b57","Type":"ContainerStarted","Data":"94b77afbb71a978e36021c707a9a2f1cc042080ffc43b3e82239c5e19af919d2"} Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.281776 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc" event={"ID":"4424c8ad-c7cc-418f-a322-ef8764b076ca","Type":"ContainerStarted","Data":"eb738f0c0ba44ea46bf512e358f38bd823a5200fe88dd6af5f0f692966c149f2"} Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.282788 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-psdkv" event={"ID":"f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9","Type":"ContainerStarted","Data":"83ba1cb618376f5be3d170924d6e4426b80e1e5d382e210d86631a4323aa2798"} Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.300440 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.319917 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.338652 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.359629 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.378715 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 23 16:55:07 crc kubenswrapper[4606]: E0123 16:55:07.383624 4606 secret.go:188] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 23 16:55:07 crc kubenswrapper[4606]: E0123 16:55:07.383684 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/94efdf3d-0abf-4b5d-944f-efd289975ee1-serving-cert podName:94efdf3d-0abf-4b5d-944f-efd289975ee1 nodeName:}" failed. No retries permitted until 2026-01-23 16:55:07.883665991 +0000 UTC m=+141.883858397 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/94efdf3d-0abf-4b5d-944f-efd289975ee1-serving-cert") pod "openshift-apiserver-operator-796bbdcf4f-fbv4b" (UID: "94efdf3d-0abf-4b5d-944f-efd289975ee1") : failed to sync secret cache: timed out waiting for the condition Jan 23 16:55:07 crc kubenswrapper[4606]: E0123 16:55:07.383886 4606 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 23 16:55:07 crc kubenswrapper[4606]: E0123 16:55:07.383941 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/94efdf3d-0abf-4b5d-944f-efd289975ee1-config podName:94efdf3d-0abf-4b5d-944f-efd289975ee1 nodeName:}" failed. No retries permitted until 2026-01-23 16:55:07.883907609 +0000 UTC m=+141.884100015 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/94efdf3d-0abf-4b5d-944f-efd289975ee1-config") pod "openshift-apiserver-operator-796bbdcf4f-fbv4b" (UID: "94efdf3d-0abf-4b5d-944f-efd289975ee1") : failed to sync configmap cache: timed out waiting for the condition Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.412133 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.417858 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.439928 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.453290 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p7fm6"] Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.459118 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 23 16:55:07 crc kubenswrapper[4606]: W0123 16:55:07.476215 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07901fe1_c88c_4400_b353_b0ebf0406e6d.slice/crio-d76c294e3fbd308bba88e6050938788e5cb1e422e21e1f3c5fe07f32ab8f161a WatchSource:0}: Error finding container d76c294e3fbd308bba88e6050938788e5cb1e422e21e1f3c5fe07f32ab8f161a: Status 404 returned error can't find the container with id d76c294e3fbd308bba88e6050938788e5cb1e422e21e1f3c5fe07f32ab8f161a Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.479385 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 23 16:55:07 crc kubenswrapper[4606]: E0123 16:55:07.493028 4606 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.499205 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.518736 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 23 
16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.539342 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.559085 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.581320 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.599116 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.618335 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.636812 4606 request.go:700] Waited for 1.015169629s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.640653 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.658472 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.698746 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.719140 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.765232 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.778780 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.794939 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tbpl\" (UniqueName: \"kubernetes.io/projected/f542164e-05c2-4746-988b-0b4676772078-kube-api-access-9tbpl\") pod \"authentication-operator-69f744f599-kpm7l\" (UID: \"f542164e-05c2-4746-988b-0b4676772078\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kpm7l" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.798943 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.818763 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.827562 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-kpm7l" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.839001 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.858739 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.879169 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.899318 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.926233 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.938483 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.960076 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.975276 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94efdf3d-0abf-4b5d-944f-efd289975ee1-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-fbv4b\" (UID: \"94efdf3d-0abf-4b5d-944f-efd289975ee1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fbv4b" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.975339 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94efdf3d-0abf-4b5d-944f-efd289975ee1-config\") pod \"openshift-apiserver-operator-796bbdcf4f-fbv4b\" (UID: \"94efdf3d-0abf-4b5d-944f-efd289975ee1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fbv4b" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.978403 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 23 16:55:07 crc kubenswrapper[4606]: I0123 16:55:07.999041 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.019042 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.042184 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.058943 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.077782 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.107322 4606 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.118191 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.138193 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.159057 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.179501 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.198843 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.215797 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-kpm7l"] Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.219060 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 23 16:55:08 crc kubenswrapper[4606]: W0123 16:55:08.225069 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf542164e_05c2_4746_988b_0b4676772078.slice/crio-1d768c067e2fed929fb09237db00e55fcd9d505b92e74b0b85f858104830e938 WatchSource:0}: Error finding container 1d768c067e2fed929fb09237db00e55fcd9d505b92e74b0b85f858104830e938: Status 404 returned error can't find the container with id 1d768c067e2fed929fb09237db00e55fcd9d505b92e74b0b85f858104830e938 Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.244217 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.258366 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.278350 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.287010 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-stn97" event={"ID":"5f5bac36-b78c-4f29-b08e-419fe3010b57","Type":"ContainerStarted","Data":"dac40f804ef82e6a89a82d7db4e76bb168f560be8fcb2829a18d4025745b7631"} Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.287230 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-stn97" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.288321 4606 generic.go:334] "Generic (PLEG): container finished" podID="9e9e2c8c-5d03-48b0-ab9c-110eeef212b5" containerID="771e67a3975de615581f23b1ca7c8aacb00d0198bd191603181b32bce08e7d61" exitCode=0 Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.288372 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-whw2p" event={"ID":"9e9e2c8c-5d03-48b0-ab9c-110eeef212b5","Type":"ContainerDied","Data":"771e67a3975de615581f23b1ca7c8aacb00d0198bd191603181b32bce08e7d61"} Jan 23 16:55:08 crc 
kubenswrapper[4606]: I0123 16:55:08.288437 4606 patch_prober.go:28] interesting pod/downloads-7954f5f757-stn97 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.288724 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-stn97" podUID="5f5bac36-b78c-4f29-b08e-419fe3010b57" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.290239 4606 generic.go:334] "Generic (PLEG): container finished" podID="4424c8ad-c7cc-418f-a322-ef8764b076ca" containerID="0c0e11c382e4bcb18dec5c0668c45c7cd4c3043c9817ad88c1a70a1a7c3e68a2" exitCode=0 Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.290303 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc" event={"ID":"4424c8ad-c7cc-418f-a322-ef8764b076ca","Type":"ContainerDied","Data":"0c0e11c382e4bcb18dec5c0668c45c7cd4c3043c9817ad88c1a70a1a7c3e68a2"} Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.293230 4606 generic.go:334] "Generic (PLEG): container finished" podID="f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9" containerID="fad84eda7a6acede93551c1f0581b7f37d17854cd54e1765a32a77ebd6049f81" exitCode=0 Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.293282 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-psdkv" event={"ID":"f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9","Type":"ContainerDied","Data":"fad84eda7a6acede93551c1f0581b7f37d17854cd54e1765a32a77ebd6049f81"} Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.298343 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.300674 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-kpm7l" event={"ID":"f542164e-05c2-4746-988b-0b4676772078","Type":"ContainerStarted","Data":"1d768c067e2fed929fb09237db00e55fcd9d505b92e74b0b85f858104830e938"} Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.306240 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mjj29" event={"ID":"281c0e56-ca86-4662-b312-2322d51c2bba","Type":"ContainerStarted","Data":"a9fe794976c2e9d5bcdd5ace45f489dd8bc08916d14299d4f171a5215366e653"} Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.306282 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mjj29" event={"ID":"281c0e56-ca86-4662-b312-2322d51c2bba","Type":"ContainerStarted","Data":"ff24e45050340282e8ff47fa15c9c1d8f92a0f1b53da16005d41c7fd8168ee7f"} Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.306293 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mjj29" event={"ID":"281c0e56-ca86-4662-b312-2322d51c2bba","Type":"ContainerStarted","Data":"e7927b7206f082e163f41d54a1ea8743f02bb2baf3384a5c5dc09312d89fc43f"} Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.307695 4606 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" event={"ID":"dc495cd6-f31a-4b33-9c14-12b37c25f47c","Type":"ContainerStarted","Data":"b563215aac8efe042e7223b8440896d50b098dd71a92f7eb661f8f93cfc24cc9"} Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.308381 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.318964 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p7fm6" event={"ID":"07901fe1-c88c-4400-b353-b0ebf0406e6d","Type":"ContainerStarted","Data":"5057ca5de7772c81488c476c8fed5bcd37ce64e0599862cfc90d2991abc8452d"} Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.320877 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p7fm6" event={"ID":"07901fe1-c88c-4400-b353-b0ebf0406e6d","Type":"ContainerStarted","Data":"d76c294e3fbd308bba88e6050938788e5cb1e422e21e1f3c5fe07f32ab8f161a"} Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.321737 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.338916 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.358427 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.378183 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.399077 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.418986 4606 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.439046 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.458653 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.478477 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 23 16:55:08 crc kubenswrapper[4606]: E0123 16:55:08.494259 4606 projected.go:288] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 23 16:55:08 crc kubenswrapper[4606]: E0123 16:55:08.494368 4606 projected.go:194] Error preparing data for projected volume kube-api-access-jcr5s for pod openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fbv4b: failed to sync configmap cache: timed out waiting for the condition Jan 23 16:55:08 crc kubenswrapper[4606]: E0123 16:55:08.494473 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/94efdf3d-0abf-4b5d-944f-efd289975ee1-kube-api-access-jcr5s podName:94efdf3d-0abf-4b5d-944f-efd289975ee1 nodeName:}" failed. 
No retries permitted until 2026-01-23 16:55:08.994454331 +0000 UTC m=+142.994646737 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jcr5s" (UniqueName: "kubernetes.io/projected/94efdf3d-0abf-4b5d-944f-efd289975ee1-kube-api-access-jcr5s") pod "openshift-apiserver-operator-796bbdcf4f-fbv4b" (UID: "94efdf3d-0abf-4b5d-944f-efd289975ee1") : failed to sync configmap cache: timed out waiting for the condition Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.498726 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.544388 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d27gt\" (UniqueName: \"kubernetes.io/projected/f33f2b3a-d022-41f9-9a1f-b8bb130128c7-kube-api-access-d27gt\") pod \"machine-approver-56656f9798-nqpmj\" (UID: \"f33f2b3a-d022-41f9-9a1f-b8bb130128c7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nqpmj" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.563179 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h4f2\" (UniqueName: \"kubernetes.io/projected/eecab59c-60d2-4a32-955c-dbf919399422-kube-api-access-5h4f2\") pod \"machine-api-operator-5694c8668f-2b7xg\" (UID: \"eecab59c-60d2-4a32-955c-dbf919399422\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2b7xg" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.570427 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxkz5\" (UniqueName: \"kubernetes.io/projected/d53dfb30-7e32-472e-8602-16c28f0a2b03-kube-api-access-fxkz5\") pod \"controller-manager-879f6c89f-ddvp8\" (UID: \"d53dfb30-7e32-472e-8602-16c28f0a2b03\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ddvp8" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.593504 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8426\" (UniqueName: \"kubernetes.io/projected/767b5dce-2e44-401a-b7af-3cb97ff48f69-kube-api-access-t8426\") pod \"console-f9d7485db-wlpwz\" (UID: \"767b5dce-2e44-401a-b7af-3cb97ff48f69\") " pod="openshift-console/console-f9d7485db-wlpwz" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.611723 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rr57\" (UniqueName: \"kubernetes.io/projected/6f6f9174-b914-45f1-b60c-7c88dfed6c14-kube-api-access-6rr57\") pod \"console-operator-58897d9998-6fb9c\" (UID: \"6f6f9174-b914-45f1-b60c-7c88dfed6c14\") " pod="openshift-console-operator/console-operator-58897d9998-6fb9c" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.633839 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-6fb9c" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.637068 4606 request.go:700] Waited for 1.356452778s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-config&limit=500&resourceVersion=0 Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.639015 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.658352 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.671171 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ddvp8" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.678930 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.689471 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94efdf3d-0abf-4b5d-944f-efd289975ee1-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-fbv4b\" (UID: \"94efdf3d-0abf-4b5d-944f-efd289975ee1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fbv4b" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.697532 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-wlpwz" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.698700 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.739073 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.919044 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94efdf3d-0abf-4b5d-944f-efd289975ee1-config\") pod \"openshift-apiserver-operator-796bbdcf4f-fbv4b\" (UID: \"94efdf3d-0abf-4b5d-944f-efd289975ee1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fbv4b" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.921859 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-2b7xg" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.922702 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5ec26ef9-e655-476f-ac27-f27942209b6f-registry-tls\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.922730 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nqpmj"
Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.922865 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd"
Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.922999 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5ec26ef9-e655-476f-ac27-f27942209b6f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd"
Jan 23 16:55:08 crc kubenswrapper[4606]: E0123 16:55:08.923439 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:09.423420041 +0000 UTC m=+143.423612467 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:55:08 crc kubenswrapper[4606]: I0123 16:55:08.929039 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.023918 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 16:55:09 crc kubenswrapper[4606]: E0123 16:55:09.024167 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:09.524143415 +0000 UTC m=+143.524335821 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.024441 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f0f78495-0b98-497a-9a7d-c93be8decda5-profile-collector-cert\") pod \"catalog-operator-68c6474976-j6ms6\" (UID: \"f0f78495-0b98-497a-9a7d-c93be8decda5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6ms6"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.024474 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75ba3301-a42f-4a20-9434-37e2fca447c9-config\") pod \"kube-apiserver-operator-766d6c64bb-qwjhc\" (UID: \"75ba3301-a42f-4a20-9434-37e2fca447c9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qwjhc"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.024499 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2347c841-e571-46f3-b06e-8814dea8d423-metrics-tls\") pod \"dns-operator-744455d44c-tzrgs\" (UID: \"2347c841-e571-46f3-b06e-8814dea8d423\") " pod="openshift-dns-operator/dns-operator-744455d44c-tzrgs"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.024520 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fa355f0-0cef-464e-af52-d92cd53c090a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5w4kv\" (UID: \"6fa355f0-0cef-464e-af52-d92cd53c090a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5w4kv"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.024545 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e0b8142-384b-4f00-b129-37115fd01372-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-44sgl\" (UID: \"8e0b8142-384b-4f00-b129-37115fd01372\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-44sgl"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.024611 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf6cc1ab-a0f6-4d93-9c0d-47605134d170-serving-cert\") pod \"service-ca-operator-777779d784-lxbdm\" (UID: \"bf6cc1ab-a0f6-4d93-9c0d-47605134d170\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lxbdm"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.024667 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fa355f0-0cef-464e-af52-d92cd53c090a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5w4kv\" (UID: \"6fa355f0-0cef-464e-af52-d92cd53c090a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5w4kv"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.024761 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5ec26ef9-e655-476f-ac27-f27942209b6f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.024782 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/37f49603-1b09-4a49-8bc5-240b4d10bb46-signing-cabundle\") pod \"service-ca-9c57cc56f-krwqd\" (UID: \"37f49603-1b09-4a49-8bc5-240b4d10bb46\") " pod="openshift-service-ca/service-ca-9c57cc56f-krwqd"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.024810 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bbtl\" (UniqueName: \"kubernetes.io/projected/b470f1c1-e30e-401e-9bba-16d821bd7b4f-kube-api-access-9bbtl\") pod \"packageserver-d55dfcdfc-psvzs\" (UID: \"b470f1c1-e30e-401e-9bba-16d821bd7b4f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psvzs"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.024869 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrxbc\" (UniqueName: \"kubernetes.io/projected/2fa086b5-8528-4fa7-a649-1722d7e8cdf5-kube-api-access-rrxbc\") pod \"route-controller-manager-6576b87f9c-rqqd5\" (UID: \"2fa086b5-8528-4fa7-a649-1722d7e8cdf5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rqqd5"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.024893 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54nk8\" (UniqueName: \"kubernetes.io/projected/f51caafd-44cb-4d91-874e-d8fe0221d01d-kube-api-access-54nk8\") pod \"olm-operator-6b444d44fb-wc5zr\" (UID: \"f51caafd-44cb-4d91-874e-d8fe0221d01d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wc5zr"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.024913 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/37f49603-1b09-4a49-8bc5-240b4d10bb46-signing-key\") pod \"service-ca-9c57cc56f-krwqd\" (UID: \"37f49603-1b09-4a49-8bc5-240b4d10bb46\") " pod="openshift-service-ca/service-ca-9c57cc56f-krwqd"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.024974 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f51caafd-44cb-4d91-874e-d8fe0221d01d-srv-cert\") pod \"olm-operator-6b444d44fb-wc5zr\" (UID: \"f51caafd-44cb-4d91-874e-d8fe0221d01d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wc5zr"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.024995 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5ec26ef9-e655-476f-ac27-f27942209b6f-bound-sa-token\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.025051 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmvk6\" (UniqueName: \"kubernetes.io/projected/28446a05-f08d-465c-ac42-782a677b3511-kube-api-access-qmvk6\") pod \"openshift-controller-manager-operator-756b6f6bc6-gmw5q\" (UID: \"28446a05-f08d-465c-ac42-782a677b3511\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gmw5q"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.025070 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/75d31efc-5fc5-4e4e-a893-93161b48002a-etcd-ca\") pod \"etcd-operator-b45778765-vjv2l\" (UID: \"75d31efc-5fc5-4e4e-a893-93161b48002a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vjv2l"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.025088 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5ec26ef9-e655-476f-ac27-f27942209b6f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.025105 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p64fp\" (UniqueName: \"kubernetes.io/projected/f0f78495-0b98-497a-9a7d-c93be8decda5-kube-api-access-p64fp\") pod \"catalog-operator-68c6474976-j6ms6\" (UID: \"f0f78495-0b98-497a-9a7d-c93be8decda5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6ms6"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.025120 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2fa086b5-8528-4fa7-a649-1722d7e8cdf5-serving-cert\") pod \"route-controller-manager-6576b87f9c-rqqd5\" (UID: \"2fa086b5-8528-4fa7-a649-1722d7e8cdf5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rqqd5"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.025161 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b470f1c1-e30e-401e-9bba-16d821bd7b4f-apiservice-cert\") pod \"packageserver-d55dfcdfc-psvzs\" (UID: \"b470f1c1-e30e-401e-9bba-16d821bd7b4f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psvzs"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.025178 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcr5s\" (UniqueName: \"kubernetes.io/projected/94efdf3d-0abf-4b5d-944f-efd289975ee1-kube-api-access-jcr5s\") pod \"openshift-apiserver-operator-796bbdcf4f-fbv4b\" (UID: \"94efdf3d-0abf-4b5d-944f-efd289975ee1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fbv4b"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.025220 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/75d31efc-5fc5-4e4e-a893-93161b48002a-etcd-service-ca\") pod \"etcd-operator-b45778765-vjv2l\" (UID: \"75d31efc-5fc5-4e4e-a893-93161b48002a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vjv2l"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.025243 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b5f2f224-7031-4b81-ba0c-33e9de7311f0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pdwg7\" (UID: \"b5f2f224-7031-4b81-ba0c-33e9de7311f0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pdwg7"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.025277 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75ba3301-a42f-4a20-9434-37e2fca447c9-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qwjhc\" (UID: \"75ba3301-a42f-4a20-9434-37e2fca447c9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qwjhc"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.025295 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g664f\" (UniqueName: \"kubernetes.io/projected/ac74f2a9-a8bf-4787-b705-1cab5f1c56aa-kube-api-access-g664f\") pod \"multus-admission-controller-857f4d67dd-c9dlf\" (UID: \"ac74f2a9-a8bf-4787-b705-1cab5f1c56aa\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c9dlf"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.025311 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbgkb\" (UniqueName: \"kubernetes.io/projected/6fa355f0-0cef-464e-af52-d92cd53c090a-kube-api-access-zbgkb\") pod \"kube-storage-version-migrator-operator-b67b599dd-5w4kv\" (UID: \"6fa355f0-0cef-464e-af52-d92cd53c090a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5w4kv"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.025326 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75d31efc-5fc5-4e4e-a893-93161b48002a-serving-cert\") pod \"etcd-operator-b45778765-vjv2l\" (UID: \"75d31efc-5fc5-4e4e-a893-93161b48002a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vjv2l"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.025350 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fa086b5-8528-4fa7-a649-1722d7e8cdf5-config\") pod \"route-controller-manager-6576b87f9c-rqqd5\" (UID: \"2fa086b5-8528-4fa7-a649-1722d7e8cdf5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rqqd5"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.025393 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/205e3ea9-e17d-4581-bfc0-eef3cb89dc12-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-c42wt\" (UID: \"205e3ea9-e17d-4581-bfc0-eef3cb89dc12\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c42wt"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.025418 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2fa086b5-8528-4fa7-a649-1722d7e8cdf5-client-ca\") pod \"route-controller-manager-6576b87f9c-rqqd5\" (UID: \"2fa086b5-8528-4fa7-a649-1722d7e8cdf5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rqqd5"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.025432 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vndf2\" (UniqueName: \"kubernetes.io/projected/205e3ea9-e17d-4581-bfc0-eef3cb89dc12-kube-api-access-vndf2\") pod \"package-server-manager-789f6589d5-c42wt\" (UID: \"205e3ea9-e17d-4581-bfc0-eef3cb89dc12\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c42wt"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.025464 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5ec26ef9-e655-476f-ac27-f27942209b6f-registry-certificates\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.025479 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5ec26ef9-e655-476f-ac27-f27942209b6f-trusted-ca\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.027766 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt6b4\" (UniqueName: \"kubernetes.io/projected/2347c841-e571-46f3-b06e-8814dea8d423-kube-api-access-zt6b4\") pod \"dns-operator-744455d44c-tzrgs\" (UID: \"2347c841-e571-46f3-b06e-8814dea8d423\") " pod="openshift-dns-operator/dns-operator-744455d44c-tzrgs"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.027812 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75d31efc-5fc5-4e4e-a893-93161b48002a-config\") pod \"etcd-operator-b45778765-vjv2l\" (UID: \"75d31efc-5fc5-4e4e-a893-93161b48002a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vjv2l"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.027843 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b470f1c1-e30e-401e-9bba-16d821bd7b4f-tmpfs\") pod \"packageserver-d55dfcdfc-psvzs\" (UID: \"b470f1c1-e30e-401e-9bba-16d821bd7b4f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psvzs"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.027886 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f51caafd-44cb-4d91-874e-d8fe0221d01d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-wc5zr\" (UID: \"f51caafd-44cb-4d91-874e-d8fe0221d01d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wc5zr"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.027902 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b470f1c1-e30e-401e-9bba-16d821bd7b4f-webhook-cert\") pod \"packageserver-d55dfcdfc-psvzs\" (UID: \"b470f1c1-e30e-401e-9bba-16d821bd7b4f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psvzs"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.027942 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e0b8142-384b-4f00-b129-37115fd01372-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-44sgl\" (UID: \"8e0b8142-384b-4f00-b129-37115fd01372\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-44sgl"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.028151 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.028172 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e0b8142-384b-4f00-b129-37115fd01372-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-44sgl\" (UID: \"8e0b8142-384b-4f00-b129-37115fd01372\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-44sgl"
Jan 23 16:55:09 crc kubenswrapper[4606]: E0123 16:55:09.029219 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:09.529209637 +0000 UTC m=+143.529402043 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.029565 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/78e9dd5c-8577-4ff2-affa-519a64352ac9-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-q8fqr\" (UID: \"78e9dd5c-8577-4ff2-affa-519a64352ac9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q8fqr"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.029612 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5d2p\" (UniqueName: \"kubernetes.io/projected/37f49603-1b09-4a49-8bc5-240b4d10bb46-kube-api-access-m5d2p\") pod \"service-ca-9c57cc56f-krwqd\" (UID: \"37f49603-1b09-4a49-8bc5-240b4d10bb46\") " pod="openshift-service-ca/service-ca-9c57cc56f-krwqd"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.029630 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78e9dd5c-8577-4ff2-affa-519a64352ac9-config\") pod \"kube-controller-manager-operator-78b949d7b-q8fqr\" (UID: \"78e9dd5c-8577-4ff2-affa-519a64352ac9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q8fqr"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.029646 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28446a05-f08d-465c-ac42-782a677b3511-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-gmw5q\" (UID: \"28446a05-f08d-465c-ac42-782a677b3511\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gmw5q"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.029664 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/75d31efc-5fc5-4e4e-a893-93161b48002a-etcd-client\") pod \"etcd-operator-b45778765-vjv2l\" (UID: \"75d31efc-5fc5-4e4e-a893-93161b48002a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vjv2l"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.029708 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78e9dd5c-8577-4ff2-affa-519a64352ac9-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-q8fqr\" (UID: \"78e9dd5c-8577-4ff2-affa-519a64352ac9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q8fqr"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.029724 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28446a05-f08d-465c-ac42-782a677b3511-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-gmw5q\" (UID: \"28446a05-f08d-465c-ac42-782a677b3511\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gmw5q"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.029751 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf6cc1ab-a0f6-4d93-9c0d-47605134d170-config\") pod \"service-ca-operator-777779d784-lxbdm\" (UID: \"bf6cc1ab-a0f6-4d93-9c0d-47605134d170\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lxbdm"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.029826 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dszg2\" (UniqueName: \"kubernetes.io/projected/75d31efc-5fc5-4e4e-a893-93161b48002a-kube-api-access-dszg2\") pod \"etcd-operator-b45778765-vjv2l\" (UID: \"75d31efc-5fc5-4e4e-a893-93161b48002a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vjv2l"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.029843 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75ba3301-a42f-4a20-9434-37e2fca447c9-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qwjhc\" (UID: \"75ba3301-a42f-4a20-9434-37e2fca447c9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qwjhc"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.029968 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgdkw\" (UniqueName: \"kubernetes.io/projected/bf6cc1ab-a0f6-4d93-9c0d-47605134d170-kube-api-access-fgdkw\") pod \"service-ca-operator-777779d784-lxbdm\" (UID: \"bf6cc1ab-a0f6-4d93-9c0d-47605134d170\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lxbdm"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.030122 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgqcs\" (UniqueName: \"kubernetes.io/projected/b5f2f224-7031-4b81-ba0c-33e9de7311f0-kube-api-access-mgqcs\") pod \"control-plane-machine-set-operator-78cbb6b69f-pdwg7\" (UID: \"b5f2f224-7031-4b81-ba0c-33e9de7311f0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pdwg7"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.030166 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5ec26ef9-e655-476f-ac27-f27942209b6f-registry-tls\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.030193 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzphp\" (UniqueName: \"kubernetes.io/projected/5ec26ef9-e655-476f-ac27-f27942209b6f-kube-api-access-rzphp\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.030211 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f0f78495-0b98-497a-9a7d-c93be8decda5-srv-cert\") pod \"catalog-operator-68c6474976-j6ms6\" (UID: \"f0f78495-0b98-497a-9a7d-c93be8decda5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6ms6"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.030255 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ac74f2a9-a8bf-4787-b705-1cab5f1c56aa-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-c9dlf\" (UID: \"ac74f2a9-a8bf-4787-b705-1cab5f1c56aa\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c9dlf"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.031270 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5ec26ef9-e655-476f-ac27-f27942209b6f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.048095 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5ec26ef9-e655-476f-ac27-f27942209b6f-registry-tls\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.063979 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcr5s\" (UniqueName: \"kubernetes.io/projected/94efdf3d-0abf-4b5d-944f-efd289975ee1-kube-api-access-jcr5s\") pod \"openshift-apiserver-operator-796bbdcf4f-fbv4b\" (UID: \"94efdf3d-0abf-4b5d-944f-efd289975ee1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fbv4b"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.130848 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 16:55:09 crc kubenswrapper[4606]: E0123 16:55:09.130991 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:09.630969686 +0000 UTC m=+143.631162092 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131012 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75ba3301-a42f-4a20-9434-37e2fca447c9-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qwjhc\" (UID: \"75ba3301-a42f-4a20-9434-37e2fca447c9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qwjhc"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131034 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g664f\" (UniqueName: \"kubernetes.io/projected/ac74f2a9-a8bf-4787-b705-1cab5f1c56aa-kube-api-access-g664f\") pod \"multus-admission-controller-857f4d67dd-c9dlf\" (UID: \"ac74f2a9-a8bf-4787-b705-1cab5f1c56aa\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c9dlf"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131053 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d78dc70d-59e8-4235-b345-f523b414fb31-csi-data-dir\") pod \"csi-hostpathplugin-n5jq4\" (UID: \"d78dc70d-59e8-4235-b345-f523b414fb31\") " pod="hostpath-provisioner/csi-hostpathplugin-n5jq4"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131073 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75d31efc-5fc5-4e4e-a893-93161b48002a-serving-cert\") pod \"etcd-operator-b45778765-vjv2l\" (UID: \"75d31efc-5fc5-4e4e-a893-93161b48002a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vjv2l"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131089 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbgkb\" (UniqueName: \"kubernetes.io/projected/6fa355f0-0cef-464e-af52-d92cd53c090a-kube-api-access-zbgkb\") pod \"kube-storage-version-migrator-operator-b67b599dd-5w4kv\" (UID: \"6fa355f0-0cef-464e-af52-d92cd53c090a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5w4kv"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131110 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fb6c50be-9bef-40c5-8c37-ec6435d574ee-images\") pod \"machine-config-operator-74547568cd-v2w4q\" (UID: \"fb6c50be-9bef-40c5-8c37-ec6435d574ee\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v2w4q"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131124 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d78dc70d-59e8-4235-b345-f523b414fb31-socket-dir\") pod \"csi-hostpathplugin-n5jq4\" (UID: \"d78dc70d-59e8-4235-b345-f523b414fb31\") " pod="hostpath-provisioner/csi-hostpathplugin-n5jq4"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131139 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/23c38e5b-d867-490a-b4d8-b5b4fe00b839-bound-sa-token\") pod \"ingress-operator-5b745b69d9-6lv6t\" (UID: \"23c38e5b-d867-490a-b4d8-b5b4fe00b839\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6lv6t"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131154 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgftk\" (UniqueName: \"kubernetes.io/projected/1a2f8837-0976-4952-b9af-6e16dad9c91f-kube-api-access-qgftk\") pod \"machine-config-server-8fjr8\" (UID: \"1a2f8837-0976-4952-b9af-6e16dad9c91f\") " pod="openshift-machine-config-operator/machine-config-server-8fjr8"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131172 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fa086b5-8528-4fa7-a649-1722d7e8cdf5-config\") pod \"route-controller-manager-6576b87f9c-rqqd5\" (UID: \"2fa086b5-8528-4fa7-a649-1722d7e8cdf5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rqqd5"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131188 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4jfm\" (UniqueName: \"kubernetes.io/projected/23c38e5b-d867-490a-b4d8-b5b4fe00b839-kube-api-access-c4jfm\") pod \"ingress-operator-5b745b69d9-6lv6t\" (UID: \"23c38e5b-d867-490a-b4d8-b5b4fe00b839\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6lv6t"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131205 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/205e3ea9-e17d-4581-bfc0-eef3cb89dc12-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-c42wt\" (UID: \"205e3ea9-e17d-4581-bfc0-eef3cb89dc12\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c42wt"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131219 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d78dc70d-59e8-4235-b345-f523b414fb31-registration-dir\") pod \"csi-hostpathplugin-n5jq4\" (UID: \"d78dc70d-59e8-4235-b345-f523b414fb31\") " pod="hostpath-provisioner/csi-hostpathplugin-n5jq4"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131235 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1a2f8837-0976-4952-b9af-6e16dad9c91f-node-bootstrap-token\") pod \"machine-config-server-8fjr8\" (UID: \"1a2f8837-0976-4952-b9af-6e16dad9c91f\") " pod="openshift-machine-config-operator/machine-config-server-8fjr8"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131250 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2fa086b5-8528-4fa7-a649-1722d7e8cdf5-client-ca\") pod \"route-controller-manager-6576b87f9c-rqqd5\" (UID: \"2fa086b5-8528-4fa7-a649-1722d7e8cdf5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rqqd5"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131269 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vndf2\" (UniqueName: \"kubernetes.io/projected/205e3ea9-e17d-4581-bfc0-eef3cb89dc12-kube-api-access-vndf2\") pod \"package-server-manager-789f6589d5-c42wt\" (UID: \"205e3ea9-e17d-4581-bfc0-eef3cb89dc12\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c42wt"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131284 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55r8s\" (UniqueName: \"kubernetes.io/projected/be1d04b8-395a-4c61-9d3f-d6caf82a8a20-kube-api-access-55r8s\") pod \"dns-default-hms7g\" (UID: \"be1d04b8-395a-4c61-9d3f-d6caf82a8a20\") " pod="openshift-dns/dns-default-hms7g"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131306 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5ec26ef9-e655-476f-ac27-f27942209b6f-trusted-ca\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131320 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fb6c50be-9bef-40c5-8c37-ec6435d574ee-auth-proxy-config\") pod \"machine-config-operator-74547568cd-v2w4q\" (UID: \"fb6c50be-9bef-40c5-8c37-ec6435d574ee\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v2w4q"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131342 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5ec26ef9-e655-476f-ac27-f27942209b6f-registry-certificates\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131358 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75d31efc-5fc5-4e4e-a893-93161b48002a-config\") pod \"etcd-operator-b45778765-vjv2l\" (UID: \"75d31efc-5fc5-4e4e-a893-93161b48002a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vjv2l"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131374 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt6b4\" (UniqueName: \"kubernetes.io/projected/2347c841-e571-46f3-b06e-8814dea8d423-kube-api-access-zt6b4\") pod \"dns-operator-744455d44c-tzrgs\" (UID: \"2347c841-e571-46f3-b06e-8814dea8d423\") " pod="openshift-dns-operator/dns-operator-744455d44c-tzrgs"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131389 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b470f1c1-e30e-401e-9bba-16d821bd7b4f-tmpfs\") pod \"packageserver-d55dfcdfc-psvzs\" (UID: \"b470f1c1-e30e-401e-9bba-16d821bd7b4f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psvzs"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131407 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f51caafd-44cb-4d91-874e-d8fe0221d01d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-wc5zr\" (UID: \"f51caafd-44cb-4d91-874e-d8fe0221d01d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wc5zr"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131424 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b470f1c1-e30e-401e-9bba-16d821bd7b4f-webhook-cert\") pod \"packageserver-d55dfcdfc-psvzs\" (UID: \"b470f1c1-e30e-401e-9bba-16d821bd7b4f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psvzs"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131440 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e0b8142-384b-4f00-b129-37115fd01372-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-44sgl\" (UID: \"8e0b8142-384b-4f00-b129-37115fd01372\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-44sgl"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131457 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131475 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e0b8142-384b-4f00-b129-37115fd01372-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-44sgl\" (UID: \"8e0b8142-384b-4f00-b129-37115fd01372\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-44sgl"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131507 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptjcz\" (UniqueName: \"kubernetes.io/projected/10da464d-1ee1-401e-8f78-dafaa4d97ea4-kube-api-access-ptjcz\") pod \"ingress-canary-mbmmh\" (UID: \"10da464d-1ee1-401e-8f78-dafaa4d97ea4\") " pod="openshift-ingress-canary/ingress-canary-mbmmh"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131524 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/78e9dd5c-8577-4ff2-affa-519a64352ac9-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-q8fqr\" (UID: \"78e9dd5c-8577-4ff2-affa-519a64352ac9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q8fqr"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131540 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5d2p\" (UniqueName: \"kubernetes.io/projected/37f49603-1b09-4a49-8bc5-240b4d10bb46-kube-api-access-m5d2p\") pod \"service-ca-9c57cc56f-krwqd\" (UID: \"37f49603-1b09-4a49-8bc5-240b4d10bb46\") " pod="openshift-service-ca/service-ca-9c57cc56f-krwqd"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131555 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78e9dd5c-8577-4ff2-affa-519a64352ac9-config\") pod \"kube-controller-manager-operator-78b949d7b-q8fqr\" (UID: \"78e9dd5c-8577-4ff2-affa-519a64352ac9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q8fqr"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131571 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/98245824-432e-48b3-8bd1-a8f7cc2e42d2-default-certificate\") pod \"router-default-5444994796-vrcsb\" (UID: \"98245824-432e-48b3-8bd1-a8f7cc2e42d2\") " pod="openshift-ingress/router-default-5444994796-vrcsb"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131608 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/559500a2-4399-42d0-832b-719429dd6fd9-secret-volume\") pod \"collect-profiles-29486445-rsdm2\" (UID: \"559500a2-4399-42d0-832b-719429dd6fd9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486445-rsdm2"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131627 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28446a05-f08d-465c-ac42-782a677b3511-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-gmw5q\" (UID: \"28446a05-f08d-465c-ac42-782a677b3511\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gmw5q"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131642 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/75d31efc-5fc5-4e4e-a893-93161b48002a-etcd-client\") pod \"etcd-operator-b45778765-vjv2l\" (UID: \"75d31efc-5fc5-4e4e-a893-93161b48002a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vjv2l"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131658 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78e9dd5c-8577-4ff2-affa-519a64352ac9-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-q8fqr\" (UID: \"78e9dd5c-8577-4ff2-affa-519a64352ac9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q8fqr"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131672 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28446a05-f08d-465c-ac42-782a677b3511-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-gmw5q\" (UID: \"28446a05-f08d-465c-ac42-782a677b3511\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gmw5q"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131687 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fzn5r\" (UID: \"8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac\") " pod="openshift-marketplace/marketplace-operator-79b997595-fzn5r"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131704 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf6cc1ab-a0f6-4d93-9c0d-47605134d170-config\") pod \"service-ca-operator-777779d784-lxbdm\" (UID: \"bf6cc1ab-a0f6-4d93-9c0d-47605134d170\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lxbdm"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131721 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jl4c\" (UniqueName: \"kubernetes.io/projected/8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac-kube-api-access-6jl4c\") pod \"marketplace-operator-79b997595-fzn5r\" (UID: \"8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac\") " pod="openshift-marketplace/marketplace-operator-79b997595-fzn5r"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131736 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be1d04b8-395a-4c61-9d3f-d6caf82a8a20-config-volume\") pod \"dns-default-hms7g\" (UID: \"be1d04b8-395a-4c61-9d3f-d6caf82a8a20\") " pod="openshift-dns/dns-default-hms7g"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131752 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dszg2\" (UniqueName: \"kubernetes.io/projected/75d31efc-5fc5-4e4e-a893-93161b48002a-kube-api-access-dszg2\") pod \"etcd-operator-b45778765-vjv2l\" (UID: \"75d31efc-5fc5-4e4e-a893-93161b48002a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vjv2l"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131768 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75ba3301-a42f-4a20-9434-37e2fca447c9-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qwjhc\" (UID: \"75ba3301-a42f-4a20-9434-37e2fca447c9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qwjhc"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131784 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgdkw\" (UniqueName: \"kubernetes.io/projected/bf6cc1ab-a0f6-4d93-9c0d-47605134d170-kube-api-access-fgdkw\") pod \"service-ca-operator-777779d784-lxbdm\" (UID: \"bf6cc1ab-a0f6-4d93-9c0d-47605134d170\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lxbdm"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131808 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eca7fead-bd4b-439e-8c7a-d630325aa36c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-ttzvm\" (UID: \"eca7fead-bd4b-439e-8c7a-d630325aa36c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ttzvm"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131834 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eca7fead-bd4b-439e-8c7a-d630325aa36c-proxy-tls\") pod \"machine-config-controller-84d6567774-ttzvm\" (UID: \"eca7fead-bd4b-439e-8c7a-d630325aa36c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ttzvm"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131851 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z96wv\" (UniqueName: \"kubernetes.io/projected/d78dc70d-59e8-4235-b345-f523b414fb31-kube-api-access-z96wv\") pod \"csi-hostpathplugin-n5jq4\" (UID: \"d78dc70d-59e8-4235-b345-f523b414fb31\") " pod="hostpath-provisioner/csi-hostpathplugin-n5jq4"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131867 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgqcs\" (UniqueName: \"kubernetes.io/projected/b5f2f224-7031-4b81-ba0c-33e9de7311f0-kube-api-access-mgqcs\") pod \"control-plane-machine-set-operator-78cbb6b69f-pdwg7\" (UID: \"b5f2f224-7031-4b81-ba0c-33e9de7311f0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pdwg7"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131883 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9c27\" (UniqueName: \"kubernetes.io/projected/eca7fead-bd4b-439e-8c7a-d630325aa36c-kube-api-access-p9c27\") pod \"machine-config-controller-84d6567774-ttzvm\" (UID: \"eca7fead-bd4b-439e-8c7a-d630325aa36c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ttzvm"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131908 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzphp\" (UniqueName: \"kubernetes.io/projected/5ec26ef9-e655-476f-ac27-f27942209b6f-kube-api-access-rzphp\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131923 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f0f78495-0b98-497a-9a7d-c93be8decda5-srv-cert\") pod \"catalog-operator-68c6474976-j6ms6\" (UID: \"f0f78495-0b98-497a-9a7d-c93be8decda5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6ms6"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131957 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ac74f2a9-a8bf-4787-b705-1cab5f1c56aa-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-c9dlf\" (UID: \"ac74f2a9-a8bf-4787-b705-1cab5f1c56aa\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c9dlf"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131971 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1a2f8837-0976-4952-b9af-6e16dad9c91f-certs\") pod \"machine-config-server-8fjr8\" (UID: \"1a2f8837-0976-4952-b9af-6e16dad9c91f\") " pod="openshift-machine-config-operator/machine-config-server-8fjr8"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.131988 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/23c38e5b-d867-490a-b4d8-b5b4fe00b839-metrics-tls\") pod \"ingress-operator-5b745b69d9-6lv6t\" (UID: \"23c38e5b-d867-490a-b4d8-b5b4fe00b839\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6lv6t"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.132005 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75ba3301-a42f-4a20-9434-37e2fca447c9-config\") pod \"kube-apiserver-operator-766d6c64bb-qwjhc\" (UID: \"75ba3301-a42f-4a20-9434-37e2fca447c9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qwjhc"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.132019 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f0f78495-0b98-497a-9a7d-c93be8decda5-profile-collector-cert\") pod \"catalog-operator-68c6474976-j6ms6\" (UID: \"f0f78495-0b98-497a-9a7d-c93be8decda5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6ms6"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.132035 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxhq6\" (UniqueName: \"kubernetes.io/projected/fb6c50be-9bef-40c5-8c37-ec6435d574ee-kube-api-access-zxhq6\") pod \"machine-config-operator-74547568cd-v2w4q\" (UID: \"fb6c50be-9bef-40c5-8c37-ec6435d574ee\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v2w4q"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.132061 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e0b8142-384b-4f00-b129-37115fd01372-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-44sgl\" (UID: \"8e0b8142-384b-4f00-b129-37115fd01372\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-44sgl"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.132076 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/23c38e5b-d867-490a-b4d8-b5b4fe00b839-trusted-ca\") pod \"ingress-operator-5b745b69d9-6lv6t\" (UID: \"23c38e5b-d867-490a-b4d8-b5b4fe00b839\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6lv6t"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.132092 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2347c841-e571-46f3-b06e-8814dea8d423-metrics-tls\") pod \"dns-operator-744455d44c-tzrgs\" (UID: \"2347c841-e571-46f3-b06e-8814dea8d423\") " pod="openshift-dns-operator/dns-operator-744455d44c-tzrgs"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.132106 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fa355f0-0cef-464e-af52-d92cd53c090a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5w4kv\" (UID: \"6fa355f0-0cef-464e-af52-d92cd53c090a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5w4kv"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.132121 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf6cc1ab-a0f6-4d93-9c0d-47605134d170-serving-cert\") pod \"service-ca-operator-777779d784-lxbdm\" (UID: \"bf6cc1ab-a0f6-4d93-9c0d-47605134d170\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lxbdm"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.132135 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/10da464d-1ee1-401e-8f78-dafaa4d97ea4-cert\") pod \"ingress-canary-mbmmh\" (UID: \"10da464d-1ee1-401e-8f78-dafaa4d97ea4\") " pod="openshift-ingress-canary/ingress-canary-mbmmh"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.132151 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fa355f0-0cef-464e-af52-d92cd53c090a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5w4kv\" (UID: \"6fa355f0-0cef-464e-af52-d92cd53c090a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5w4kv"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.132165 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/be1d04b8-395a-4c61-9d3f-d6caf82a8a20-metrics-tls\") pod \"dns-default-hms7g\" (UID: \"be1d04b8-395a-4c61-9d3f-d6caf82a8a20\") " pod="openshift-dns/dns-default-hms7g"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.132181 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz9cp\" (UniqueName: \"kubernetes.io/projected/f4bb3e14-2008-4060-ac74-cb8d90c6d63f-kube-api-access-fz9cp\") pod \"migrator-59844c95c7-jwmbk\" (UID: \"f4bb3e14-2008-4060-ac74-cb8d90c6d63f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jwmbk"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.132206 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5ec26ef9-e655-476f-ac27-f27942209b6f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.132222 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fzn5r\" (UID: \"8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac\") " pod="openshift-marketplace/marketplace-operator-79b997595-fzn5r"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.132238 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d78dc70d-59e8-4235-b345-f523b414fb31-plugins-dir\") pod \"csi-hostpathplugin-n5jq4\" (UID: \"d78dc70d-59e8-4235-b345-f523b414fb31\") " pod="hostpath-provisioner/csi-hostpathplugin-n5jq4"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.132261 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/37f49603-1b09-4a49-8bc5-240b4d10bb46-signing-cabundle\") pod \"service-ca-9c57cc56f-krwqd\" (UID: \"37f49603-1b09-4a49-8bc5-240b4d10bb46\") " pod="openshift-service-ca/service-ca-9c57cc56f-krwqd"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.132276 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98245824-432e-48b3-8bd1-a8f7cc2e42d2-metrics-certs\") pod \"router-default-5444994796-vrcsb\" (UID: \"98245824-432e-48b3-8bd1-a8f7cc2e42d2\") " pod="openshift-ingress/router-default-5444994796-vrcsb"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.132295 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bbtl\" (UniqueName: \"kubernetes.io/projected/b470f1c1-e30e-401e-9bba-16d821bd7b4f-kube-api-access-9bbtl\") pod \"packageserver-d55dfcdfc-psvzs\" (UID: \"b470f1c1-e30e-401e-9bba-16d821bd7b4f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psvzs"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.132310 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/98245824-432e-48b3-8bd1-a8f7cc2e42d2-stats-auth\") pod \"router-default-5444994796-vrcsb\" (UID: \"98245824-432e-48b3-8bd1-a8f7cc2e42d2\") " pod="openshift-ingress/router-default-5444994796-vrcsb"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.132334 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jwvq\" (UniqueName: \"kubernetes.io/projected/559500a2-4399-42d0-832b-719429dd6fd9-kube-api-access-7jwvq\") pod \"collect-profiles-29486445-rsdm2\" (UID: \"559500a2-4399-42d0-832b-719429dd6fd9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486445-rsdm2"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.132351 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrxbc\" (UniqueName: \"kubernetes.io/projected/2fa086b5-8528-4fa7-a649-1722d7e8cdf5-kube-api-access-rrxbc\") pod \"route-controller-manager-6576b87f9c-rqqd5\" (UID: \"2fa086b5-8528-4fa7-a649-1722d7e8cdf5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rqqd5"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.132365 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bws7x\" (UniqueName: \"kubernetes.io/projected/98245824-432e-48b3-8bd1-a8f7cc2e42d2-kube-api-access-bws7x\") pod \"router-default-5444994796-vrcsb\" (UID: \"98245824-432e-48b3-8bd1-a8f7cc2e42d2\") " pod="openshift-ingress/router-default-5444994796-vrcsb"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.132383 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54nk8\" (UniqueName: \"kubernetes.io/projected/f51caafd-44cb-4d91-874e-d8fe0221d01d-kube-api-access-54nk8\") pod \"olm-operator-6b444d44fb-wc5zr\" (UID: \"f51caafd-44cb-4d91-874e-d8fe0221d01d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wc5zr"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.132397 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/37f49603-1b09-4a49-8bc5-240b4d10bb46-signing-key\") pod \"service-ca-9c57cc56f-krwqd\" (UID: \"37f49603-1b09-4a49-8bc5-240b4d10bb46\") " pod="openshift-service-ca/service-ca-9c57cc56f-krwqd"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.132411 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98245824-432e-48b3-8bd1-a8f7cc2e42d2-service-ca-bundle\") pod \"router-default-5444994796-vrcsb\" (UID: \"98245824-432e-48b3-8bd1-a8f7cc2e42d2\") " pod="openshift-ingress/router-default-5444994796-vrcsb"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.132427 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f51caafd-44cb-4d91-874e-d8fe0221d01d-srv-cert\") pod \"olm-operator-6b444d44fb-wc5zr\" (UID: \"f51caafd-44cb-4d91-874e-d8fe0221d01d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wc5zr"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.132442 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5ec26ef9-e655-476f-ac27-f27942209b6f-bound-sa-token\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.132459 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fb6c50be-9bef-40c5-8c37-ec6435d574ee-proxy-tls\") pod \"machine-config-operator-74547568cd-v2w4q\" (UID: \"fb6c50be-9bef-40c5-8c37-ec6435d574ee\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v2w4q"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.132489 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d78dc70d-59e8-4235-b345-f523b414fb31-mountpoint-dir\") pod \"csi-hostpathplugin-n5jq4\" (UID: \"d78dc70d-59e8-4235-b345-f523b414fb31\") " pod="hostpath-provisioner/csi-hostpathplugin-n5jq4"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.132505 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p64fp\" (UniqueName: \"kubernetes.io/projected/f0f78495-0b98-497a-9a7d-c93be8decda5-kube-api-access-p64fp\") pod \"catalog-operator-68c6474976-j6ms6\" (UID: \"f0f78495-0b98-497a-9a7d-c93be8decda5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6ms6"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.132524 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2fa086b5-8528-4fa7-a649-1722d7e8cdf5-serving-cert\") pod \"route-controller-manager-6576b87f9c-rqqd5\" (UID: \"2fa086b5-8528-4fa7-a649-1722d7e8cdf5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rqqd5"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.132549 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmvk6\" (UniqueName: \"kubernetes.io/projected/28446a05-f08d-465c-ac42-782a677b3511-kube-api-access-qmvk6\") pod \"openshift-controller-manager-operator-756b6f6bc6-gmw5q\" (UID: \"28446a05-f08d-465c-ac42-782a677b3511\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gmw5q"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.132571 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/75d31efc-5fc5-4e4e-a893-93161b48002a-etcd-ca\") pod \"etcd-operator-b45778765-vjv2l\" (UID: \"75d31efc-5fc5-4e4e-a893-93161b48002a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vjv2l"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.132609 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b470f1c1-e30e-401e-9bba-16d821bd7b4f-apiservice-cert\") pod \"packageserver-d55dfcdfc-psvzs\" (UID: \"b470f1c1-e30e-401e-9bba-16d821bd7b4f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psvzs"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123
16:55:09.132625 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/559500a2-4399-42d0-832b-719429dd6fd9-config-volume\") pod \"collect-profiles-29486445-rsdm2\" (UID: \"559500a2-4399-42d0-832b-719429dd6fd9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486445-rsdm2" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.132661 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/75d31efc-5fc5-4e4e-a893-93161b48002a-etcd-service-ca\") pod \"etcd-operator-b45778765-vjv2l\" (UID: \"75d31efc-5fc5-4e4e-a893-93161b48002a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vjv2l" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.132706 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b5f2f224-7031-4b81-ba0c-33e9de7311f0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pdwg7\" (UID: \"b5f2f224-7031-4b81-ba0c-33e9de7311f0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pdwg7" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.134141 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75ba3301-a42f-4a20-9434-37e2fca447c9-config\") pod \"kube-apiserver-operator-766d6c64bb-qwjhc\" (UID: \"75ba3301-a42f-4a20-9434-37e2fca447c9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qwjhc" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.134927 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2fa086b5-8528-4fa7-a649-1722d7e8cdf5-client-ca\") pod \"route-controller-manager-6576b87f9c-rqqd5\" (UID: \"2fa086b5-8528-4fa7-a649-1722d7e8cdf5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rqqd5" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.135293 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fa086b5-8528-4fa7-a649-1722d7e8cdf5-config\") pod \"route-controller-manager-6576b87f9c-rqqd5\" (UID: \"2fa086b5-8528-4fa7-a649-1722d7e8cdf5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rqqd5" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.135939 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75ba3301-a42f-4a20-9434-37e2fca447c9-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-qwjhc\" (UID: \"75ba3301-a42f-4a20-9434-37e2fca447c9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qwjhc" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.137372 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/205e3ea9-e17d-4581-bfc0-eef3cb89dc12-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-c42wt\" (UID: \"205e3ea9-e17d-4581-bfc0-eef3cb89dc12\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c42wt" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.140024 4606 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5ec26ef9-e655-476f-ac27-f27942209b6f-registry-certificates\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.143299 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5ec26ef9-e655-476f-ac27-f27942209b6f-trusted-ca\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.143619 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/75d31efc-5fc5-4e4e-a893-93161b48002a-etcd-client\") pod \"etcd-operator-b45778765-vjv2l\" (UID: \"75d31efc-5fc5-4e4e-a893-93161b48002a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vjv2l" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.144680 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e0b8142-384b-4f00-b129-37115fd01372-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-44sgl\" (UID: \"8e0b8142-384b-4f00-b129-37115fd01372\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-44sgl" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.145126 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b470f1c1-e30e-401e-9bba-16d821bd7b4f-tmpfs\") pod \"packageserver-d55dfcdfc-psvzs\" (UID: \"b470f1c1-e30e-401e-9bba-16d821bd7b4f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psvzs" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.146372 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fa355f0-0cef-464e-af52-d92cd53c090a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5w4kv\" (UID: \"6fa355f0-0cef-464e-af52-d92cd53c090a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5w4kv" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.147457 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/75d31efc-5fc5-4e4e-a893-93161b48002a-etcd-ca\") pod \"etcd-operator-b45778765-vjv2l\" (UID: \"75d31efc-5fc5-4e4e-a893-93161b48002a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vjv2l" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.147763 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf6cc1ab-a0f6-4d93-9c0d-47605134d170-config\") pod \"service-ca-operator-777779d784-lxbdm\" (UID: \"bf6cc1ab-a0f6-4d93-9c0d-47605134d170\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lxbdm" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.148327 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75d31efc-5fc5-4e4e-a893-93161b48002a-config\") pod \"etcd-operator-b45778765-vjv2l\" (UID: \"75d31efc-5fc5-4e4e-a893-93161b48002a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vjv2l" Jan 23 16:55:09 crc 
kubenswrapper[4606]: E0123 16:55:09.152027 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:09.651991269 +0000 UTC m=+143.652183685 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.154191 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/75d31efc-5fc5-4e4e-a893-93161b48002a-etcd-service-ca\") pod \"etcd-operator-b45778765-vjv2l\" (UID: \"75d31efc-5fc5-4e4e-a893-93161b48002a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vjv2l"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.155203 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b470f1c1-e30e-401e-9bba-16d821bd7b4f-webhook-cert\") pod \"packageserver-d55dfcdfc-psvzs\" (UID: \"b470f1c1-e30e-401e-9bba-16d821bd7b4f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psvzs"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.156548 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e0b8142-384b-4f00-b129-37115fd01372-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-44sgl\" (UID: \"8e0b8142-384b-4f00-b129-37115fd01372\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-44sgl"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.157551 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/37f49603-1b09-4a49-8bc5-240b4d10bb46-signing-cabundle\") pod \"service-ca-9c57cc56f-krwqd\" (UID: \"37f49603-1b09-4a49-8bc5-240b4d10bb46\") " pod="openshift-service-ca/service-ca-9c57cc56f-krwqd"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.157563 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78e9dd5c-8577-4ff2-affa-519a64352ac9-config\") pod \"kube-controller-manager-operator-78b949d7b-q8fqr\" (UID: \"78e9dd5c-8577-4ff2-affa-519a64352ac9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q8fqr"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.157918 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28446a05-f08d-465c-ac42-782a677b3511-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-gmw5q\" (UID: \"28446a05-f08d-465c-ac42-782a677b3511\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gmw5q"
Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.157980 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bf6cc1ab-a0f6-4d93-9c0d-47605134d170-serving-cert\") pod \"service-ca-operator-777779d784-lxbdm\" (UID: \"bf6cc1ab-a0f6-4d93-9c0d-47605134d170\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lxbdm" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.164444 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fa355f0-0cef-464e-af52-d92cd53c090a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5w4kv\" (UID: \"6fa355f0-0cef-464e-af52-d92cd53c090a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5w4kv" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.164957 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2347c841-e571-46f3-b06e-8814dea8d423-metrics-tls\") pod \"dns-operator-744455d44c-tzrgs\" (UID: \"2347c841-e571-46f3-b06e-8814dea8d423\") " pod="openshift-dns-operator/dns-operator-744455d44c-tzrgs" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.165402 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f0f78495-0b98-497a-9a7d-c93be8decda5-srv-cert\") pod \"catalog-operator-68c6474976-j6ms6\" (UID: \"f0f78495-0b98-497a-9a7d-c93be8decda5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6ms6" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.166283 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b470f1c1-e30e-401e-9bba-16d821bd7b4f-apiservice-cert\") pod \"packageserver-d55dfcdfc-psvzs\" (UID: \"b470f1c1-e30e-401e-9bba-16d821bd7b4f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psvzs" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.167438 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75d31efc-5fc5-4e4e-a893-93161b48002a-serving-cert\") pod \"etcd-operator-b45778765-vjv2l\" (UID: \"75d31efc-5fc5-4e4e-a893-93161b48002a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vjv2l" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.168250 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b5f2f224-7031-4b81-ba0c-33e9de7311f0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pdwg7\" (UID: \"b5f2f224-7031-4b81-ba0c-33e9de7311f0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pdwg7" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.168656 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f0f78495-0b98-497a-9a7d-c93be8decda5-profile-collector-cert\") pod \"catalog-operator-68c6474976-j6ms6\" (UID: \"f0f78495-0b98-497a-9a7d-c93be8decda5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6ms6" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.168938 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78e9dd5c-8577-4ff2-affa-519a64352ac9-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-q8fqr\" (UID: 
\"78e9dd5c-8577-4ff2-affa-519a64352ac9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q8fqr" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.171546 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28446a05-f08d-465c-ac42-782a677b3511-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-gmw5q\" (UID: \"28446a05-f08d-465c-ac42-782a677b3511\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gmw5q" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.171842 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f51caafd-44cb-4d91-874e-d8fe0221d01d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-wc5zr\" (UID: \"f51caafd-44cb-4d91-874e-d8fe0221d01d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wc5zr" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.173649 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/37f49603-1b09-4a49-8bc5-240b4d10bb46-signing-key\") pod \"service-ca-9c57cc56f-krwqd\" (UID: \"37f49603-1b09-4a49-8bc5-240b4d10bb46\") " pod="openshift-service-ca/service-ca-9c57cc56f-krwqd" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.176379 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2fa086b5-8528-4fa7-a649-1722d7e8cdf5-serving-cert\") pod \"route-controller-manager-6576b87f9c-rqqd5\" (UID: \"2fa086b5-8528-4fa7-a649-1722d7e8cdf5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rqqd5" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.177054 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f51caafd-44cb-4d91-874e-d8fe0221d01d-srv-cert\") pod \"olm-operator-6b444d44fb-wc5zr\" (UID: \"f51caafd-44cb-4d91-874e-d8fe0221d01d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wc5zr" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.177213 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bbtl\" (UniqueName: \"kubernetes.io/projected/b470f1c1-e30e-401e-9bba-16d821bd7b4f-kube-api-access-9bbtl\") pod \"packageserver-d55dfcdfc-psvzs\" (UID: \"b470f1c1-e30e-401e-9bba-16d821bd7b4f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psvzs" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.183228 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5ec26ef9-e655-476f-ac27-f27942209b6f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.184693 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ac74f2a9-a8bf-4787-b705-1cab5f1c56aa-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-c9dlf\" (UID: \"ac74f2a9-a8bf-4787-b705-1cab5f1c56aa\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c9dlf" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.201452 4606 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g664f\" (UniqueName: \"kubernetes.io/projected/ac74f2a9-a8bf-4787-b705-1cab5f1c56aa-kube-api-access-g664f\") pod \"multus-admission-controller-857f4d67dd-c9dlf\" (UID: \"ac74f2a9-a8bf-4787-b705-1cab5f1c56aa\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c9dlf" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.212137 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6fb9c"] Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.219303 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbgkb\" (UniqueName: \"kubernetes.io/projected/6fa355f0-0cef-464e-af52-d92cd53c090a-kube-api-access-zbgkb\") pod \"kube-storage-version-migrator-operator-b67b599dd-5w4kv\" (UID: \"6fa355f0-0cef-464e-af52-d92cd53c090a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5w4kv" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.225896 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-c9dlf" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.227562 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-wlpwz"] Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.234358 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.234477 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/98245824-432e-48b3-8bd1-a8f7cc2e42d2-default-certificate\") pod \"router-default-5444994796-vrcsb\" (UID: \"98245824-432e-48b3-8bd1-a8f7cc2e42d2\") " pod="openshift-ingress/router-default-5444994796-vrcsb" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.234497 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/559500a2-4399-42d0-832b-719429dd6fd9-secret-volume\") pod \"collect-profiles-29486445-rsdm2\" (UID: \"559500a2-4399-42d0-832b-719429dd6fd9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486445-rsdm2" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.234514 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fzn5r\" (UID: \"8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac\") " pod="openshift-marketplace/marketplace-operator-79b997595-fzn5r" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.234531 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jl4c\" (UniqueName: \"kubernetes.io/projected/8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac-kube-api-access-6jl4c\") pod \"marketplace-operator-79b997595-fzn5r\" (UID: \"8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac\") " pod="openshift-marketplace/marketplace-operator-79b997595-fzn5r" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 
16:55:09.234547 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be1d04b8-395a-4c61-9d3f-d6caf82a8a20-config-volume\") pod \"dns-default-hms7g\" (UID: \"be1d04b8-395a-4c61-9d3f-d6caf82a8a20\") " pod="openshift-dns/dns-default-hms7g" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.234580 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eca7fead-bd4b-439e-8c7a-d630325aa36c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-ttzvm\" (UID: \"eca7fead-bd4b-439e-8c7a-d630325aa36c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ttzvm" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.234613 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eca7fead-bd4b-439e-8c7a-d630325aa36c-proxy-tls\") pod \"machine-config-controller-84d6567774-ttzvm\" (UID: \"eca7fead-bd4b-439e-8c7a-d630325aa36c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ttzvm" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.234628 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z96wv\" (UniqueName: \"kubernetes.io/projected/d78dc70d-59e8-4235-b345-f523b414fb31-kube-api-access-z96wv\") pod \"csi-hostpathplugin-n5jq4\" (UID: \"d78dc70d-59e8-4235-b345-f523b414fb31\") " pod="hostpath-provisioner/csi-hostpathplugin-n5jq4" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.234681 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9c27\" (UniqueName: \"kubernetes.io/projected/eca7fead-bd4b-439e-8c7a-d630325aa36c-kube-api-access-p9c27\") pod \"machine-config-controller-84d6567774-ttzvm\" (UID: \"eca7fead-bd4b-439e-8c7a-d630325aa36c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ttzvm" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.234704 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1a2f8837-0976-4952-b9af-6e16dad9c91f-certs\") pod \"machine-config-server-8fjr8\" (UID: \"1a2f8837-0976-4952-b9af-6e16dad9c91f\") " pod="openshift-machine-config-operator/machine-config-server-8fjr8" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.234723 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxhq6\" (UniqueName: \"kubernetes.io/projected/fb6c50be-9bef-40c5-8c37-ec6435d574ee-kube-api-access-zxhq6\") pod \"machine-config-operator-74547568cd-v2w4q\" (UID: \"fb6c50be-9bef-40c5-8c37-ec6435d574ee\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v2w4q" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.234738 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/23c38e5b-d867-490a-b4d8-b5b4fe00b839-metrics-tls\") pod \"ingress-operator-5b745b69d9-6lv6t\" (UID: \"23c38e5b-d867-490a-b4d8-b5b4fe00b839\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6lv6t" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.234753 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/23c38e5b-d867-490a-b4d8-b5b4fe00b839-trusted-ca\") pod \"ingress-operator-5b745b69d9-6lv6t\" (UID: \"23c38e5b-d867-490a-b4d8-b5b4fe00b839\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6lv6t" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.234769 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/10da464d-1ee1-401e-8f78-dafaa4d97ea4-cert\") pod \"ingress-canary-mbmmh\" (UID: \"10da464d-1ee1-401e-8f78-dafaa4d97ea4\") " pod="openshift-ingress-canary/ingress-canary-mbmmh" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.234782 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/be1d04b8-395a-4c61-9d3f-d6caf82a8a20-metrics-tls\") pod \"dns-default-hms7g\" (UID: \"be1d04b8-395a-4c61-9d3f-d6caf82a8a20\") " pod="openshift-dns/dns-default-hms7g" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.234800 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz9cp\" (UniqueName: \"kubernetes.io/projected/f4bb3e14-2008-4060-ac74-cb8d90c6d63f-kube-api-access-fz9cp\") pod \"migrator-59844c95c7-jwmbk\" (UID: \"f4bb3e14-2008-4060-ac74-cb8d90c6d63f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jwmbk" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.234817 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fzn5r\" (UID: \"8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac\") " pod="openshift-marketplace/marketplace-operator-79b997595-fzn5r" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.234936 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d78dc70d-59e8-4235-b345-f523b414fb31-plugins-dir\") pod \"csi-hostpathplugin-n5jq4\" (UID: \"d78dc70d-59e8-4235-b345-f523b414fb31\") " pod="hostpath-provisioner/csi-hostpathplugin-n5jq4" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.234955 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/98245824-432e-48b3-8bd1-a8f7cc2e42d2-stats-auth\") pod \"router-default-5444994796-vrcsb\" (UID: \"98245824-432e-48b3-8bd1-a8f7cc2e42d2\") " pod="openshift-ingress/router-default-5444994796-vrcsb" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.235093 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98245824-432e-48b3-8bd1-a8f7cc2e42d2-metrics-certs\") pod \"router-default-5444994796-vrcsb\" (UID: \"98245824-432e-48b3-8bd1-a8f7cc2e42d2\") " pod="openshift-ingress/router-default-5444994796-vrcsb" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.235113 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jwvq\" (UniqueName: \"kubernetes.io/projected/559500a2-4399-42d0-832b-719429dd6fd9-kube-api-access-7jwvq\") pod \"collect-profiles-29486445-rsdm2\" (UID: \"559500a2-4399-42d0-832b-719429dd6fd9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486445-rsdm2" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.235135 4606 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bws7x\" (UniqueName: \"kubernetes.io/projected/98245824-432e-48b3-8bd1-a8f7cc2e42d2-kube-api-access-bws7x\") pod \"router-default-5444994796-vrcsb\" (UID: \"98245824-432e-48b3-8bd1-a8f7cc2e42d2\") " pod="openshift-ingress/router-default-5444994796-vrcsb" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.235171 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98245824-432e-48b3-8bd1-a8f7cc2e42d2-service-ca-bundle\") pod \"router-default-5444994796-vrcsb\" (UID: \"98245824-432e-48b3-8bd1-a8f7cc2e42d2\") " pod="openshift-ingress/router-default-5444994796-vrcsb" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.235287 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fb6c50be-9bef-40c5-8c37-ec6435d574ee-proxy-tls\") pod \"machine-config-operator-74547568cd-v2w4q\" (UID: \"fb6c50be-9bef-40c5-8c37-ec6435d574ee\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v2w4q" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.235316 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d78dc70d-59e8-4235-b345-f523b414fb31-mountpoint-dir\") pod \"csi-hostpathplugin-n5jq4\" (UID: \"d78dc70d-59e8-4235-b345-f523b414fb31\") " pod="hostpath-provisioner/csi-hostpathplugin-n5jq4" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.235559 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/559500a2-4399-42d0-832b-719429dd6fd9-config-volume\") pod \"collect-profiles-29486445-rsdm2\" (UID: \"559500a2-4399-42d0-832b-719429dd6fd9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486445-rsdm2" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.236186 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fb6c50be-9bef-40c5-8c37-ec6435d574ee-images\") pod \"machine-config-operator-74547568cd-v2w4q\" (UID: \"fb6c50be-9bef-40c5-8c37-ec6435d574ee\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v2w4q" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.236216 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d78dc70d-59e8-4235-b345-f523b414fb31-csi-data-dir\") pod \"csi-hostpathplugin-n5jq4\" (UID: \"d78dc70d-59e8-4235-b345-f523b414fb31\") " pod="hostpath-provisioner/csi-hostpathplugin-n5jq4" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.236231 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d78dc70d-59e8-4235-b345-f523b414fb31-socket-dir\") pod \"csi-hostpathplugin-n5jq4\" (UID: \"d78dc70d-59e8-4235-b345-f523b414fb31\") " pod="hostpath-provisioner/csi-hostpathplugin-n5jq4" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.236374 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/23c38e5b-d867-490a-b4d8-b5b4fe00b839-bound-sa-token\") pod \"ingress-operator-5b745b69d9-6lv6t\" (UID: \"23c38e5b-d867-490a-b4d8-b5b4fe00b839\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6lv6t" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.236399 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgftk\" (UniqueName: \"kubernetes.io/projected/1a2f8837-0976-4952-b9af-6e16dad9c91f-kube-api-access-qgftk\") pod \"machine-config-server-8fjr8\" (UID: \"1a2f8837-0976-4952-b9af-6e16dad9c91f\") " pod="openshift-machine-config-operator/machine-config-server-8fjr8" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.236417 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4jfm\" (UniqueName: \"kubernetes.io/projected/23c38e5b-d867-490a-b4d8-b5b4fe00b839-kube-api-access-c4jfm\") pod \"ingress-operator-5b745b69d9-6lv6t\" (UID: \"23c38e5b-d867-490a-b4d8-b5b4fe00b839\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6lv6t" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.236458 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d78dc70d-59e8-4235-b345-f523b414fb31-registration-dir\") pod \"csi-hostpathplugin-n5jq4\" (UID: \"d78dc70d-59e8-4235-b345-f523b414fb31\") " pod="hostpath-provisioner/csi-hostpathplugin-n5jq4" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.236476 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1a2f8837-0976-4952-b9af-6e16dad9c91f-node-bootstrap-token\") pod \"machine-config-server-8fjr8\" (UID: \"1a2f8837-0976-4952-b9af-6e16dad9c91f\") " pod="openshift-machine-config-operator/machine-config-server-8fjr8" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.236502 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55r8s\" (UniqueName: \"kubernetes.io/projected/be1d04b8-395a-4c61-9d3f-d6caf82a8a20-kube-api-access-55r8s\") pod \"dns-default-hms7g\" (UID: \"be1d04b8-395a-4c61-9d3f-d6caf82a8a20\") " pod="openshift-dns/dns-default-hms7g" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.236559 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fb6c50be-9bef-40c5-8c37-ec6435d574ee-auth-proxy-config\") pod \"machine-config-operator-74547568cd-v2w4q\" (UID: \"fb6c50be-9bef-40c5-8c37-ec6435d574ee\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v2w4q" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.236662 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptjcz\" (UniqueName: \"kubernetes.io/projected/10da464d-1ee1-401e-8f78-dafaa4d97ea4-kube-api-access-ptjcz\") pod \"ingress-canary-mbmmh\" (UID: \"10da464d-1ee1-401e-8f78-dafaa4d97ea4\") " pod="openshift-ingress-canary/ingress-canary-mbmmh" Jan 23 16:55:09 crc kubenswrapper[4606]: E0123 16:55:09.237184 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:09.737150934 +0000 UTC m=+143.737343340 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.239690 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d78dc70d-59e8-4235-b345-f523b414fb31-registration-dir\") pod \"csi-hostpathplugin-n5jq4\" (UID: \"d78dc70d-59e8-4235-b345-f523b414fb31\") " pod="hostpath-provisioner/csi-hostpathplugin-n5jq4" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.239946 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d78dc70d-59e8-4235-b345-f523b414fb31-socket-dir\") pod \"csi-hostpathplugin-n5jq4\" (UID: \"d78dc70d-59e8-4235-b345-f523b414fb31\") " pod="hostpath-provisioner/csi-hostpathplugin-n5jq4" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.240781 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fb6c50be-9bef-40c5-8c37-ec6435d574ee-auth-proxy-config\") pod \"machine-config-operator-74547568cd-v2w4q\" (UID: \"fb6c50be-9bef-40c5-8c37-ec6435d574ee\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v2w4q" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.242433 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/23c38e5b-d867-490a-b4d8-b5b4fe00b839-trusted-ca\") pod \"ingress-operator-5b745b69d9-6lv6t\" (UID: \"23c38e5b-d867-490a-b4d8-b5b4fe00b839\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6lv6t" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.245534 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d78dc70d-59e8-4235-b345-f523b414fb31-csi-data-dir\") pod \"csi-hostpathplugin-n5jq4\" (UID: \"d78dc70d-59e8-4235-b345-f523b414fb31\") " pod="hostpath-provisioner/csi-hostpathplugin-n5jq4" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.246783 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98245824-432e-48b3-8bd1-a8f7cc2e42d2-service-ca-bundle\") pod \"router-default-5444994796-vrcsb\" (UID: \"98245824-432e-48b3-8bd1-a8f7cc2e42d2\") " pod="openshift-ingress/router-default-5444994796-vrcsb" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.246946 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fzn5r\" (UID: \"8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac\") " pod="openshift-marketplace/marketplace-operator-79b997595-fzn5r" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.251134 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d78dc70d-59e8-4235-b345-f523b414fb31-mountpoint-dir\") pod \"csi-hostpathplugin-n5jq4\" (UID: 
\"d78dc70d-59e8-4235-b345-f523b414fb31\") " pod="hostpath-provisioner/csi-hostpathplugin-n5jq4" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.251810 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/559500a2-4399-42d0-832b-719429dd6fd9-config-volume\") pod \"collect-profiles-29486445-rsdm2\" (UID: \"559500a2-4399-42d0-832b-719429dd6fd9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486445-rsdm2" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.252215 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fb6c50be-9bef-40c5-8c37-ec6435d574ee-images\") pod \"machine-config-operator-74547568cd-v2w4q\" (UID: \"fb6c50be-9bef-40c5-8c37-ec6435d574ee\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v2w4q" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.252530 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fb6c50be-9bef-40c5-8c37-ec6435d574ee-proxy-tls\") pod \"machine-config-operator-74547568cd-v2w4q\" (UID: \"fb6c50be-9bef-40c5-8c37-ec6435d574ee\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v2w4q" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.252600 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d78dc70d-59e8-4235-b345-f523b414fb31-plugins-dir\") pod \"csi-hostpathplugin-n5jq4\" (UID: \"d78dc70d-59e8-4235-b345-f523b414fb31\") " pod="hostpath-provisioner/csi-hostpathplugin-n5jq4" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.253143 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eca7fead-bd4b-439e-8c7a-d630325aa36c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-ttzvm\" (UID: \"eca7fead-bd4b-439e-8c7a-d630325aa36c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ttzvm" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.254460 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be1d04b8-395a-4c61-9d3f-d6caf82a8a20-config-volume\") pod \"dns-default-hms7g\" (UID: \"be1d04b8-395a-4c61-9d3f-d6caf82a8a20\") " pod="openshift-dns/dns-default-hms7g" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.258339 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/559500a2-4399-42d0-832b-719429dd6fd9-secret-volume\") pod \"collect-profiles-29486445-rsdm2\" (UID: \"559500a2-4399-42d0-832b-719429dd6fd9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486445-rsdm2" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.261842 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/23c38e5b-d867-490a-b4d8-b5b4fe00b839-metrics-tls\") pod \"ingress-operator-5b745b69d9-6lv6t\" (UID: \"23c38e5b-d867-490a-b4d8-b5b4fe00b839\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6lv6t" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.262857 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/1a2f8837-0976-4952-b9af-6e16dad9c91f-certs\") pod \"machine-config-server-8fjr8\" (UID: \"1a2f8837-0976-4952-b9af-6e16dad9c91f\") " pod="openshift-machine-config-operator/machine-config-server-8fjr8" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.263910 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1a2f8837-0976-4952-b9af-6e16dad9c91f-node-bootstrap-token\") pod \"machine-config-server-8fjr8\" (UID: \"1a2f8837-0976-4952-b9af-6e16dad9c91f\") " pod="openshift-machine-config-operator/machine-config-server-8fjr8" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.268319 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98245824-432e-48b3-8bd1-a8f7cc2e42d2-metrics-certs\") pod \"router-default-5444994796-vrcsb\" (UID: \"98245824-432e-48b3-8bd1-a8f7cc2e42d2\") " pod="openshift-ingress/router-default-5444994796-vrcsb" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.268478 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrxbc\" (UniqueName: \"kubernetes.io/projected/2fa086b5-8528-4fa7-a649-1722d7e8cdf5-kube-api-access-rrxbc\") pod \"route-controller-manager-6576b87f9c-rqqd5\" (UID: \"2fa086b5-8528-4fa7-a649-1722d7e8cdf5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rqqd5" Jan 23 16:55:09 crc kubenswrapper[4606]: W0123 16:55:09.269755 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f6f9174_b914_45f1_b60c_7c88dfed6c14.slice/crio-2516f52a22d2d64d3d31037aa2f921a20e0ab481949ae38be5a1f791fbcf96b1 WatchSource:0}: Error finding container 2516f52a22d2d64d3d31037aa2f921a20e0ab481949ae38be5a1f791fbcf96b1: Status 404 returned error can't find the container with id 2516f52a22d2d64d3d31037aa2f921a20e0ab481949ae38be5a1f791fbcf96b1 Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.270343 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/10da464d-1ee1-401e-8f78-dafaa4d97ea4-cert\") pod \"ingress-canary-mbmmh\" (UID: \"10da464d-1ee1-401e-8f78-dafaa4d97ea4\") " pod="openshift-ingress-canary/ingress-canary-mbmmh" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.271217 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fzn5r\" (UID: \"8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac\") " pod="openshift-marketplace/marketplace-operator-79b997595-fzn5r" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.271844 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/98245824-432e-48b3-8bd1-a8f7cc2e42d2-stats-auth\") pod \"router-default-5444994796-vrcsb\" (UID: \"98245824-432e-48b3-8bd1-a8f7cc2e42d2\") " pod="openshift-ingress/router-default-5444994796-vrcsb" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.272203 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/98245824-432e-48b3-8bd1-a8f7cc2e42d2-default-certificate\") pod \"router-default-5444994796-vrcsb\" (UID: \"98245824-432e-48b3-8bd1-a8f7cc2e42d2\") " 
pod="openshift-ingress/router-default-5444994796-vrcsb" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.273261 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eca7fead-bd4b-439e-8c7a-d630325aa36c-proxy-tls\") pod \"machine-config-controller-84d6567774-ttzvm\" (UID: \"eca7fead-bd4b-439e-8c7a-d630325aa36c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ttzvm" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.274985 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vndf2\" (UniqueName: \"kubernetes.io/projected/205e3ea9-e17d-4581-bfc0-eef3cb89dc12-kube-api-access-vndf2\") pod \"package-server-manager-789f6589d5-c42wt\" (UID: \"205e3ea9-e17d-4581-bfc0-eef3cb89dc12\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c42wt" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.277696 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psvzs" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.279163 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54nk8\" (UniqueName: \"kubernetes.io/projected/f51caafd-44cb-4d91-874e-d8fe0221d01d-kube-api-access-54nk8\") pod \"olm-operator-6b444d44fb-wc5zr\" (UID: \"f51caafd-44cb-4d91-874e-d8fe0221d01d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wc5zr" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.285783 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rqqd5" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.288617 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fbv4b" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.289948 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/be1d04b8-395a-4c61-9d3f-d6caf82a8a20-metrics-tls\") pod \"dns-default-hms7g\" (UID: \"be1d04b8-395a-4c61-9d3f-d6caf82a8a20\") " pod="openshift-dns/dns-default-hms7g" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.334690 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p64fp\" (UniqueName: \"kubernetes.io/projected/f0f78495-0b98-497a-9a7d-c93be8decda5-kube-api-access-p64fp\") pod \"catalog-operator-68c6474976-j6ms6\" (UID: \"f0f78495-0b98-497a-9a7d-c93be8decda5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6ms6" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.336783 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt6b4\" (UniqueName: \"kubernetes.io/projected/2347c841-e571-46f3-b06e-8814dea8d423-kube-api-access-zt6b4\") pod \"dns-operator-744455d44c-tzrgs\" (UID: \"2347c841-e571-46f3-b06e-8814dea8d423\") " pod="openshift-dns-operator/dns-operator-744455d44c-tzrgs" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.342228 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:09 crc kubenswrapper[4606]: E0123 16:55:09.342774 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:09.842729924 +0000 UTC m=+143.842922330 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.358718 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ddvp8"] Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.360842 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dszg2\" (UniqueName: \"kubernetes.io/projected/75d31efc-5fc5-4e4e-a893-93161b48002a-kube-api-access-dszg2\") pod \"etcd-operator-b45778765-vjv2l\" (UID: \"75d31efc-5fc5-4e4e-a893-93161b48002a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vjv2l" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.371135 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-whw2p" event={"ID":"9e9e2c8c-5d03-48b0-ab9c-110eeef212b5","Type":"ContainerStarted","Data":"471a783cc90bf0f673af5ff3c9b589245df8833edca4df22e7a5a539cd09e0e6"} Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.371430 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-whw2p" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.375863 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75ba3301-a42f-4a20-9434-37e2fca447c9-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-qwjhc\" (UID: \"75ba3301-a42f-4a20-9434-37e2fca447c9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qwjhc" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.392634 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgdkw\" (UniqueName: \"kubernetes.io/projected/bf6cc1ab-a0f6-4d93-9c0d-47605134d170-kube-api-access-fgdkw\") pod \"service-ca-operator-777779d784-lxbdm\" (UID: \"bf6cc1ab-a0f6-4d93-9c0d-47605134d170\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lxbdm" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.392836 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-tzrgs" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.416223 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc" event={"ID":"4424c8ad-c7cc-418f-a322-ef8764b076ca","Type":"ContainerStarted","Data":"49b253f5bfa59b543202835511fa47a32ca2a0a6b2b8a450f21a3fa6dc734c1d"} Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.425800 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmvk6\" (UniqueName: \"kubernetes.io/projected/28446a05-f08d-465c-ac42-782a677b3511-kube-api-access-qmvk6\") pod \"openshift-controller-manager-operator-756b6f6bc6-gmw5q\" (UID: \"28446a05-f08d-465c-ac42-782a677b3511\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gmw5q" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.432804 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6fb9c" event={"ID":"6f6f9174-b914-45f1-b60c-7c88dfed6c14","Type":"ContainerStarted","Data":"2516f52a22d2d64d3d31037aa2f921a20e0ab481949ae38be5a1f791fbcf96b1"} Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.443562 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:09 crc kubenswrapper[4606]: E0123 16:55:09.443893 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:09.943729267 +0000 UTC m=+143.943921673 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.444101 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:09 crc kubenswrapper[4606]: E0123 16:55:09.444418 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:09.94441165 +0000 UTC m=+143.944604056 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.445162 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/78e9dd5c-8577-4ff2-affa-519a64352ac9-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-q8fqr\" (UID: \"78e9dd5c-8577-4ff2-affa-519a64352ac9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q8fqr" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.460895 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qwjhc" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.491343 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgqcs\" (UniqueName: \"kubernetes.io/projected/b5f2f224-7031-4b81-ba0c-33e9de7311f0-kube-api-access-mgqcs\") pod \"control-plane-machine-set-operator-78cbb6b69f-pdwg7\" (UID: \"b5f2f224-7031-4b81-ba0c-33e9de7311f0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pdwg7" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.495819 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5d2p\" (UniqueName: \"kubernetes.io/projected/37f49603-1b09-4a49-8bc5-240b4d10bb46-kube-api-access-m5d2p\") pod \"service-ca-9c57cc56f-krwqd\" (UID: \"37f49603-1b09-4a49-8bc5-240b4d10bb46\") " pod="openshift-service-ca/service-ca-9c57cc56f-krwqd" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.501662 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-psdkv" event={"ID":"f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9","Type":"ContainerStarted","Data":"34a4bfa465f22df77cd832df595405b628158d3b2d6b88efaa4ac30d151dd2fb"} Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.503866 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5w4kv" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.513809 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nqpmj" event={"ID":"f33f2b3a-d022-41f9-9a1f-b8bb130128c7","Type":"ContainerStarted","Data":"26313c8083aa632cc4e6b65954aafab364c89bc00634ec801cbd3cd54b1901de"} Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.514888 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wlpwz" event={"ID":"767b5dce-2e44-401a-b7af-3cb97ff48f69","Type":"ContainerStarted","Data":"6c0236bbc5128442f5a4413c16afd38848c30df5a67b13949662ddd2a63e529e"} Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.521214 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzphp\" (UniqueName: \"kubernetes.io/projected/5ec26ef9-e655-476f-ac27-f27942209b6f-kube-api-access-rzphp\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.538857 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e0b8142-384b-4f00-b129-37115fd01372-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-44sgl\" (UID: \"8e0b8142-384b-4f00-b129-37115fd01372\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-44sgl" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.542600 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pdwg7" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.546865 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:09 crc kubenswrapper[4606]: E0123 16:55:09.547673 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:10.047658329 +0000 UTC m=+144.047850735 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.553190 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6ms6" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.562443 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-kpm7l" event={"ID":"f542164e-05c2-4746-988b-0b4676772078","Type":"ContainerStarted","Data":"2415c450d942872f3918f80fea9a1a63de4308a39c93bd4c9a76ea8e687cf21f"} Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.563704 4606 patch_prober.go:28] interesting pod/downloads-7954f5f757-stn97 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.563736 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-stn97" podUID="5f5bac36-b78c-4f29-b08e-419fe3010b57" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.565680 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-2b7xg"] Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.566554 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wc5zr" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.569045 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c42wt" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.579654 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/23c38e5b-d867-490a-b4d8-b5b4fe00b839-bound-sa-token\") pod \"ingress-operator-5b745b69d9-6lv6t\" (UID: \"23c38e5b-d867-490a-b4d8-b5b4fe00b839\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6lv6t" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.599279 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5ec26ef9-e655-476f-ac27-f27942209b6f-bound-sa-token\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.599376 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-krwqd" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.599498 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lxbdm" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.608601 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgftk\" (UniqueName: \"kubernetes.io/projected/1a2f8837-0976-4952-b9af-6e16dad9c91f-kube-api-access-qgftk\") pod \"machine-config-server-8fjr8\" (UID: \"1a2f8837-0976-4952-b9af-6e16dad9c91f\") " pod="openshift-machine-config-operator/machine-config-server-8fjr8" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.609812 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4jfm\" (UniqueName: \"kubernetes.io/projected/23c38e5b-d867-490a-b4d8-b5b4fe00b839-kube-api-access-c4jfm\") pod \"ingress-operator-5b745b69d9-6lv6t\" (UID: \"23c38e5b-d867-490a-b4d8-b5b4fe00b839\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6lv6t" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.634627 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55r8s\" (UniqueName: \"kubernetes.io/projected/be1d04b8-395a-4c61-9d3f-d6caf82a8a20-kube-api-access-55r8s\") pod \"dns-default-hms7g\" (UID: \"be1d04b8-395a-4c61-9d3f-d6caf82a8a20\") " pod="openshift-dns/dns-default-hms7g" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.649318 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:09 crc kubenswrapper[4606]: E0123 16:55:09.650346 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:10.15033344 +0000 UTC m=+144.150525846 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.658876 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-vjv2l" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.665021 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxhq6\" (UniqueName: \"kubernetes.io/projected/fb6c50be-9bef-40c5-8c37-ec6435d574ee-kube-api-access-zxhq6\") pod \"machine-config-operator-74547568cd-v2w4q\" (UID: \"fb6c50be-9bef-40c5-8c37-ec6435d574ee\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v2w4q" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.666622 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jwvq\" (UniqueName: \"kubernetes.io/projected/559500a2-4399-42d0-832b-719429dd6fd9-kube-api-access-7jwvq\") pod \"collect-profiles-29486445-rsdm2\" (UID: \"559500a2-4399-42d0-832b-719429dd6fd9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486445-rsdm2" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.679499 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gmw5q" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.680772 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6lv6t" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.684695 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bws7x\" (UniqueName: \"kubernetes.io/projected/98245824-432e-48b3-8bd1-a8f7cc2e42d2-kube-api-access-bws7x\") pod \"router-default-5444994796-vrcsb\" (UID: \"98245824-432e-48b3-8bd1-a8f7cc2e42d2\") " pod="openshift-ingress/router-default-5444994796-vrcsb" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.686090 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-vrcsb" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.702011 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q8fqr" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.702476 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8fjr8" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.711282 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz9cp\" (UniqueName: \"kubernetes.io/projected/f4bb3e14-2008-4060-ac74-cb8d90c6d63f-kube-api-access-fz9cp\") pod \"migrator-59844c95c7-jwmbk\" (UID: \"f4bb3e14-2008-4060-ac74-cb8d90c6d63f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jwmbk" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.717997 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-44sgl" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.721799 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptjcz\" (UniqueName: \"kubernetes.io/projected/10da464d-1ee1-401e-8f78-dafaa4d97ea4-kube-api-access-ptjcz\") pod \"ingress-canary-mbmmh\" (UID: \"10da464d-1ee1-401e-8f78-dafaa4d97ea4\") " pod="openshift-ingress-canary/ingress-canary-mbmmh" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.747241 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z96wv\" (UniqueName: \"kubernetes.io/projected/d78dc70d-59e8-4235-b345-f523b414fb31-kube-api-access-z96wv\") pod \"csi-hostpathplugin-n5jq4\" (UID: \"d78dc70d-59e8-4235-b345-f523b414fb31\") " pod="hostpath-provisioner/csi-hostpathplugin-n5jq4" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.754582 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:09 crc kubenswrapper[4606]: E0123 16:55:09.754905 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:10.254889884 +0000 UTC m=+144.255082290 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.756506 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-hms7g" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.787196 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jl4c\" (UniqueName: \"kubernetes.io/projected/8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac-kube-api-access-6jl4c\") pod \"marketplace-operator-79b997595-fzn5r\" (UID: \"8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac\") " pod="openshift-marketplace/marketplace-operator-79b997595-fzn5r" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.801956 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9c27\" (UniqueName: \"kubernetes.io/projected/eca7fead-bd4b-439e-8c7a-d630325aa36c-kube-api-access-p9c27\") pod \"machine-config-controller-84d6567774-ttzvm\" (UID: \"eca7fead-bd4b-439e-8c7a-d630325aa36c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ttzvm" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.815800 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fbv4b"] Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.860948 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:09 crc kubenswrapper[4606]: E0123 16:55:09.861229 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:10.361213367 +0000 UTC m=+144.361405773 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.892515 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rqqd5"] Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.913780 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486445-rsdm2" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.930663 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ttzvm" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.943780 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v2w4q" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.958486 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fzn5r" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.962380 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:09 crc kubenswrapper[4606]: E0123 16:55:09.964827 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:10.464812629 +0000 UTC m=+144.465005035 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.982922 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jwmbk" Jan 23 16:55:09 crc kubenswrapper[4606]: I0123 16:55:09.994653 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-c9dlf"] Jan 23 16:55:10 crc kubenswrapper[4606]: I0123 16:55:10.006830 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-tzrgs"] Jan 23 16:55:10 crc kubenswrapper[4606]: I0123 16:55:10.007952 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-mbmmh" Jan 23 16:55:10 crc kubenswrapper[4606]: I0123 16:55:10.030865 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-n5jq4" Jan 23 16:55:10 crc kubenswrapper[4606]: I0123 16:55:10.068289 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:10 crc kubenswrapper[4606]: E0123 16:55:10.068870 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:10.568858935 +0000 UTC m=+144.569051341 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:10 crc kubenswrapper[4606]: I0123 16:55:10.100705 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-stn97" podStartSLOduration=122.100689735 podStartE2EDuration="2m2.100689735s" podCreationTimestamp="2026-01-23 16:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:10.062419378 +0000 UTC m=+144.062611794" watchObservedRunningTime="2026-01-23 16:55:10.100689735 +0000 UTC m=+144.100882141" Jan 23 16:55:10 crc kubenswrapper[4606]: I0123 16:55:10.100878 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psvzs"] Jan 23 16:55:10 crc kubenswrapper[4606]: W0123 16:55:10.143975 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac74f2a9_a8bf_4787_b705_1cab5f1c56aa.slice/crio-50271b4f7a28eb021b43c0e2595b9e1d4fd34965ba7cc418518de617080054fd WatchSource:0}: Error finding container 50271b4f7a28eb021b43c0e2595b9e1d4fd34965ba7cc418518de617080054fd: Status 404 returned error can't find the container with id 50271b4f7a28eb021b43c0e2595b9e1d4fd34965ba7cc418518de617080054fd Jan 23 16:55:10 crc kubenswrapper[4606]: W0123 16:55:10.162840 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb470f1c1_e30e_401e_9bba_16d821bd7b4f.slice/crio-5f7647c12c7ce7f3c8726a538e841fc0b725418f011c8c266de3765d413f0f50 WatchSource:0}: Error finding container 5f7647c12c7ce7f3c8726a538e841fc0b725418f011c8c266de3765d413f0f50: Status 404 returned error can't find the container with id 5f7647c12c7ce7f3c8726a538e841fc0b725418f011c8c266de3765d413f0f50 Jan 23 16:55:10 crc kubenswrapper[4606]: I0123 16:55:10.169072 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:10 crc kubenswrapper[4606]: E0123 16:55:10.169382 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:10.669357552 +0000 UTC m=+144.669549958 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:10 crc kubenswrapper[4606]: I0123 16:55:10.252210 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-kpm7l" podStartSLOduration=123.25219533 podStartE2EDuration="2m3.25219533s" podCreationTimestamp="2026-01-23 16:53:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:10.250752581 +0000 UTC m=+144.250944987" watchObservedRunningTime="2026-01-23 16:55:10.25219533 +0000 UTC m=+144.252387736" Jan 23 16:55:10 crc kubenswrapper[4606]: I0123 16:55:10.267623 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5w4kv"] Jan 23 16:55:10 crc kubenswrapper[4606]: I0123 16:55:10.272602 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:10 crc kubenswrapper[4606]: E0123 16:55:10.272968 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:10.772953193 +0000 UTC m=+144.773145599 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:10 crc kubenswrapper[4606]: I0123 16:55:10.378882 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:10 crc kubenswrapper[4606]: E0123 16:55:10.379409 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:10.879388461 +0000 UTC m=+144.879580867 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:10 crc kubenswrapper[4606]: I0123 16:55:10.421405 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qwjhc"] Jan 23 16:55:10 crc kubenswrapper[4606]: W0123 16:55:10.426212 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98245824_432e_48b3_8bd1_a8f7cc2e42d2.slice/crio-141552ca16401bbf796907e5190a7cce22544b615b7df5accff6605043ecbb98 WatchSource:0}: Error finding container 141552ca16401bbf796907e5190a7cce22544b615b7df5accff6605043ecbb98: Status 404 returned error can't find the container with id 141552ca16401bbf796907e5190a7cce22544b615b7df5accff6605043ecbb98 Jan 23 16:55:10 crc kubenswrapper[4606]: I0123 16:55:10.484372 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:10 crc kubenswrapper[4606]: E0123 16:55:10.484835 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:10.984808665 +0000 UTC m=+144.985001071 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:10 crc kubenswrapper[4606]: I0123 16:55:10.588218 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:10 crc kubenswrapper[4606]: E0123 16:55:10.588672 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:11.088656615 +0000 UTC m=+145.088849021 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:10 crc kubenswrapper[4606]: I0123 16:55:10.692572 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:10 crc kubenswrapper[4606]: E0123 16:55:10.692913 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:11.192900838 +0000 UTC m=+145.193093244 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:10 crc kubenswrapper[4606]: I0123 16:55:10.697253 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5w4kv" event={"ID":"6fa355f0-0cef-464e-af52-d92cd53c090a","Type":"ContainerStarted","Data":"d66ad332fda38dd9de76bb7417b27af30a40128baa6902472a18eb44b4db5599"} Jan 23 16:55:10 crc kubenswrapper[4606]: I0123 16:55:10.714797 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-2b7xg" event={"ID":"eecab59c-60d2-4a32-955c-dbf919399422","Type":"ContainerStarted","Data":"84c5325ac927ed9eb409a2db7a42d092f5fd31487eb65e422b1f733247e7aec9"} Jan 23 16:55:10 crc kubenswrapper[4606]: I0123 16:55:10.724873 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psvzs" event={"ID":"b470f1c1-e30e-401e-9bba-16d821bd7b4f","Type":"ContainerStarted","Data":"5f7647c12c7ce7f3c8726a538e841fc0b725418f011c8c266de3765d413f0f50"} Jan 23 16:55:10 crc kubenswrapper[4606]: I0123 16:55:10.732173 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8fjr8" event={"ID":"1a2f8837-0976-4952-b9af-6e16dad9c91f","Type":"ContainerStarted","Data":"7f679ff5a6860b7953695ec32d91081ebafee8927b64879bc0e779bc922b3b41"} Jan 23 16:55:10 crc kubenswrapper[4606]: I0123 16:55:10.789452 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-tzrgs" event={"ID":"2347c841-e571-46f3-b06e-8814dea8d423","Type":"ContainerStarted","Data":"ee96265c6e5ed8255ed350dfe576b76c4fc100816d97c77ffcb4b25053a2cc97"} Jan 23 16:55:10 crc kubenswrapper[4606]: I0123 16:55:10.794349 4606 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:10 crc kubenswrapper[4606]: E0123 16:55:10.794756 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:11.2947427 +0000 UTC m=+145.294935106 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:10 crc kubenswrapper[4606]: I0123 16:55:10.807978 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-whw2p" podStartSLOduration=122.807960538 podStartE2EDuration="2m2.807960538s" podCreationTimestamp="2026-01-23 16:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:10.749118103 +0000 UTC m=+144.749310509" watchObservedRunningTime="2026-01-23 16:55:10.807960538 +0000 UTC m=+144.808152944" Jan 23 16:55:10 crc kubenswrapper[4606]: I0123 16:55:10.879834 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nqpmj" event={"ID":"f33f2b3a-d022-41f9-9a1f-b8bb130128c7","Type":"ContainerStarted","Data":"13ed20f7b4a0c5ed743b668f6f13b3b33440fe9b268c395c3320035cf47e19c5"} Jan 23 16:55:10 crc kubenswrapper[4606]: I0123 16:55:10.895932 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6fb9c" event={"ID":"6f6f9174-b914-45f1-b60c-7c88dfed6c14","Type":"ContainerStarted","Data":"3e9ba2387821a96858db1aae6d8d37e99adcab95dcf5ddb2a47ed6acc78cabd3"} Jan 23 16:55:10 crc kubenswrapper[4606]: I0123 16:55:10.896049 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:10 crc kubenswrapper[4606]: E0123 16:55:10.896427 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:11.396416196 +0000 UTC m=+145.396608602 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:10 crc kubenswrapper[4606]: I0123 16:55:10.896713 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-6fb9c" Jan 23 16:55:10 crc kubenswrapper[4606]: I0123 16:55:10.897580 4606 patch_prober.go:28] interesting pod/console-operator-58897d9998-6fb9c container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 23 16:55:10 crc kubenswrapper[4606]: I0123 16:55:10.897643 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-6fb9c" podUID="6f6f9174-b914-45f1-b60c-7c88dfed6c14" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 23 16:55:10 crc kubenswrapper[4606]: I0123 16:55:10.959928 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c42wt"] Jan 23 16:55:10 crc kubenswrapper[4606]: I0123 16:55:10.983438 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wc5zr"] Jan 23 16:55:11 crc kubenswrapper[4606]: I0123 16:55:11.003405 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-p7fm6" podStartSLOduration=123.003388612 podStartE2EDuration="2m3.003388612s" podCreationTimestamp="2026-01-23 16:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:11.001111535 +0000 UTC m=+145.001303951" watchObservedRunningTime="2026-01-23 16:55:11.003388612 +0000 UTC m=+145.003581018" Jan 23 16:55:11 crc kubenswrapper[4606]: I0123 16:55:11.007499 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:11 crc kubenswrapper[4606]: E0123 16:55:11.011449 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:11.511411134 +0000 UTC m=+145.511603540 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:11 crc kubenswrapper[4606]: I0123 16:55:11.018261 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-lxbdm"] Jan 23 16:55:11 crc kubenswrapper[4606]: I0123 16:55:11.081181 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wlpwz" event={"ID":"767b5dce-2e44-401a-b7af-3cb97ff48f69","Type":"ContainerStarted","Data":"84f4772f1832e0f838069fee98134d3f93ed262ef14fcf051711ed3264b99eca"} Jan 23 16:55:11 crc kubenswrapper[4606]: I0123 16:55:11.091248 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mjj29" podStartSLOduration=123.09122596 podStartE2EDuration="2m3.09122596s" podCreationTimestamp="2026-01-23 16:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:11.066935406 +0000 UTC m=+145.067127812" watchObservedRunningTime="2026-01-23 16:55:11.09122596 +0000 UTC m=+145.091418366" Jan 23 16:55:11 crc kubenswrapper[4606]: I0123 16:55:11.110960 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:11 crc kubenswrapper[4606]: E0123 16:55:11.111688 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:11.611675342 +0000 UTC m=+145.611867748 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:11 crc kubenswrapper[4606]: I0123 16:55:11.132627 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rqqd5" event={"ID":"2fa086b5-8528-4fa7-a649-1722d7e8cdf5","Type":"ContainerStarted","Data":"e8f2c8b8074361328ee980b72220b5ebf3b5cc8f8cb9e13894a8714edbb94209"} Jan 23 16:55:11 crc kubenswrapper[4606]: I0123 16:55:11.137722 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fbv4b" event={"ID":"94efdf3d-0abf-4b5d-944f-efd289975ee1","Type":"ContainerStarted","Data":"0dd5e97f2198d071bed102533e4a8c60ffef363eee96c96e6407ef51d541e652"} Jan 23 16:55:11 crc kubenswrapper[4606]: I0123 16:55:11.190853 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6ms6"] Jan 23 16:55:11 crc kubenswrapper[4606]: W0123 16:55:11.198917 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod205e3ea9_e17d_4581_bfc0_eef3cb89dc12.slice/crio-4c79f7e946f94bc1ac68933c9258dbe0936d707caa77fcea5dfbe4fd06dcb9ee WatchSource:0}: Error finding container 4c79f7e946f94bc1ac68933c9258dbe0936d707caa77fcea5dfbe4fd06dcb9ee: Status 404 returned error can't find the container with id 4c79f7e946f94bc1ac68933c9258dbe0936d707caa77fcea5dfbe4fd06dcb9ee Jan 23 16:55:11 crc kubenswrapper[4606]: I0123 16:55:11.213165 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:11 crc kubenswrapper[4606]: I0123 16:55:11.213420 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc" podStartSLOduration=122.213402821 podStartE2EDuration="2m2.213402821s" podCreationTimestamp="2026-01-23 16:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:11.211388492 +0000 UTC m=+145.211580898" watchObservedRunningTime="2026-01-23 16:55:11.213402821 +0000 UTC m=+145.213595227" Jan 23 16:55:11 crc kubenswrapper[4606]: E0123 16:55:11.214011 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:11.71398141 +0000 UTC m=+145.714173846 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:55:11 crc kubenswrapper[4606]: I0123 16:55:11.224056 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-psdkv" event={"ID":"f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9","Type":"ContainerStarted","Data":"70aed63944c2eaec05c79bc9537ac201e09b0759468eb816df710e07a945054b"}
Jan 23 16:55:11 crc kubenswrapper[4606]: I0123 16:55:11.277601 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-vrcsb" event={"ID":"98245824-432e-48b3-8bd1-a8f7cc2e42d2","Type":"ContainerStarted","Data":"141552ca16401bbf796907e5190a7cce22544b615b7df5accff6605043ecbb98"}
Jan 23 16:55:11 crc kubenswrapper[4606]: I0123 16:55:11.297493 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ddvp8" event={"ID":"d53dfb30-7e32-472e-8602-16c28f0a2b03","Type":"ContainerStarted","Data":"d89acd5dfdb34949647b599af008f01f3a8bb283148deaeba44875a16343158a"}
Jan 23 16:55:11 crc kubenswrapper[4606]: I0123 16:55:11.297532 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ddvp8" event={"ID":"d53dfb30-7e32-472e-8602-16c28f0a2b03","Type":"ContainerStarted","Data":"aa0daf2c6da81647c15ed6ea14dbef6fe3cb8102f053f1d2c482e1a5337764c3"}
Jan 23 16:55:11 crc kubenswrapper[4606]: I0123 16:55:11.298332 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-ddvp8"
Jan 23 16:55:11 crc kubenswrapper[4606]: I0123 16:55:11.322470 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-c9dlf" event={"ID":"ac74f2a9-a8bf-4787-b705-1cab5f1c56aa","Type":"ContainerStarted","Data":"50271b4f7a28eb021b43c0e2595b9e1d4fd34965ba7cc418518de617080054fd"}
Jan 23 16:55:11 crc kubenswrapper[4606]: I0123 16:55:11.323318 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd"
Jan 23 16:55:11 crc kubenswrapper[4606]: E0123 16:55:11.323666 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:11.823654508 +0000 UTC m=+145.823846914 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:55:11 crc kubenswrapper[4606]: I0123 16:55:11.333468 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-ddvp8"
Jan 23 16:55:11 crc kubenswrapper[4606]: I0123 16:55:11.372311 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-whw2p"
Jan 23 16:55:11 crc kubenswrapper[4606]: I0123 16:55:11.381362 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" podStartSLOduration=124.381342923 podStartE2EDuration="2m4.381342923s" podCreationTimestamp="2026-01-23 16:53:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:11.379702298 +0000 UTC m=+145.379894714" watchObservedRunningTime="2026-01-23 16:55:11.381342923 +0000 UTC m=+145.381535329"
Jan 23 16:55:11 crc kubenswrapper[4606]: I0123 16:55:11.432890 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 16:55:11 crc kubenswrapper[4606]: I0123 16:55:11.436138 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-6lv6t"]
Jan 23 16:55:11 crc kubenswrapper[4606]: E0123 16:55:11.436539 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:11.936522323 +0000 UTC m=+145.936714729 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:55:11 crc kubenswrapper[4606]: W0123 16:55:11.447136 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0f78495_0b98_497a_9a7d_c93be8decda5.slice/crio-02933edeca9e163bab42c7a6d26884828b162799cb92e1c502e13ba3eaee9def WatchSource:0}: Error finding container 02933edeca9e163bab42c7a6d26884828b162799cb92e1c502e13ba3eaee9def: Status 404 returned error can't find the container with id 02933edeca9e163bab42c7a6d26884828b162799cb92e1c502e13ba3eaee9def
Jan 23 16:55:11 crc kubenswrapper[4606]: W0123 16:55:11.472924 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf51caafd_44cb_4d91_874e_d8fe0221d01d.slice/crio-4c58f2fe0ad9f7ef347623cf2ab20d4cce7bcb43d349b8ec7a566bbb7e41e748 WatchSource:0}: Error finding container 4c58f2fe0ad9f7ef347623cf2ab20d4cce7bcb43d349b8ec7a566bbb7e41e748: Status 404 returned error can't find the container with id 4c58f2fe0ad9f7ef347623cf2ab20d4cce7bcb43d349b8ec7a566bbb7e41e748
Jan 23 16:55:11 crc kubenswrapper[4606]: I0123 16:55:11.511968 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-ddvp8" podStartSLOduration=123.51194989 podStartE2EDuration="2m3.51194989s" podCreationTimestamp="2026-01-23 16:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:11.473460386 +0000 UTC m=+145.473652782" watchObservedRunningTime="2026-01-23 16:55:11.51194989 +0000 UTC m=+145.512142306"
Jan 23 16:55:11 crc kubenswrapper[4606]: I0123 16:55:11.515941 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-6fb9c" podStartSLOduration=123.515928805 podStartE2EDuration="2m3.515928805s" podCreationTimestamp="2026-01-23 16:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:11.508264714 +0000 UTC m=+145.508457120" watchObservedRunningTime="2026-01-23 16:55:11.515928805 +0000 UTC m=+145.516121211"
Jan 23 16:55:11 crc kubenswrapper[4606]: I0123 16:55:11.539503 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd"
Jan 23 16:55:11 crc kubenswrapper[4606]: E0123 16:55:11.543283 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:12.043270391 +0000 UTC m=+146.043462797 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:55:11 crc kubenswrapper[4606]: I0123 16:55:11.555875 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pdwg7"]
Jan 23 16:55:11 crc kubenswrapper[4606]: I0123 16:55:11.570272 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-psdkv" podStartSLOduration=124.570253526 podStartE2EDuration="2m4.570253526s" podCreationTimestamp="2026-01-23 16:53:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:11.559104308 +0000 UTC m=+145.559296714" watchObservedRunningTime="2026-01-23 16:55:11.570253526 +0000 UTC m=+145.570445932"
Jan 23 16:55:11 crc kubenswrapper[4606]: I0123 16:55:11.642077 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 16:55:11 crc kubenswrapper[4606]: E0123 16:55:11.642443 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:12.142419132 +0000 UTC m=+146.142611528 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:55:11 crc kubenswrapper[4606]: I0123 16:55:11.648369 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-wlpwz" podStartSLOduration=123.648349254 podStartE2EDuration="2m3.648349254s" podCreationTimestamp="2026-01-23 16:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:11.644447221 +0000 UTC m=+145.644639627" watchObservedRunningTime="2026-01-23 16:55:11.648349254 +0000 UTC m=+145.648541660"
Jan 23 16:55:11 crc kubenswrapper[4606]: I0123 16:55:11.649054 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc"
Jan 23 16:55:11 crc kubenswrapper[4606]: I0123 16:55:11.655681 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc"
Jan 23 16:55:11 crc kubenswrapper[4606]: I0123 16:55:11.668785 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-psdkv"
Jan 23 16:55:11 crc kubenswrapper[4606]: I0123 16:55:11.668828 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-psdkv"
Jan 23 16:55:11 crc kubenswrapper[4606]: I0123 16:55:11.675529 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc"
Jan 23 16:55:11 crc kubenswrapper[4606]: I0123 16:55:11.748693 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd"
Jan 23 16:55:11 crc kubenswrapper[4606]: E0123 16:55:11.749257 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:12.249241003 +0000 UTC m=+146.249433409 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:55:11 crc kubenswrapper[4606]: I0123 16:55:11.850459 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 16:55:11 crc kubenswrapper[4606]: E0123 16:55:11.850993 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:12.350972901 +0000 UTC m=+146.351165307 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:55:11 crc kubenswrapper[4606]: I0123 16:55:11.904807 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vjv2l"]
Jan 23 16:55:11 crc kubenswrapper[4606]: I0123 16:55:11.957181 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd"
Jan 23 16:55:11 crc kubenswrapper[4606]: E0123 16:55:11.966210 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:12.466193407 +0000 UTC m=+146.466385813 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
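----- [Editor's note] -----
The mount/unmount failures repeated above all reduce to one condition: the kubelet cannot find a registered CSI plugin named kubevirt.io.hostpath-provisioner. The kubelet resolves a volume's driver name against an in-memory registry that is populated only after the driver pod registers itself over the kubelet's plugin-registration socket; a csi-hostpathplugin pod only shows up later in this log, so every MountDevice/TearDownAt attempt before that fails immediately and is requeued with a 500ms delay. The Go snippet below is a minimal sketch of that lookup, not the actual kubelet source; csiDriversStore and its fields are illustrative stand-ins for kubelet's plugin registry.

package main

import (
	"fmt"
	"sync"
)

// csiDriversStore stands in for the kubelet's registry of CSI plugins, which
// is filled in when a driver registers over the plugin-registration socket.
type csiDriversStore struct {
	mu      sync.RWMutex
	drivers map[string]string // driver name -> unix socket endpoint
}

func (s *csiDriversStore) Get(name string) (string, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	ep, ok := s.drivers[name]
	return ep, ok
}

// newCsiDriverClient mirrors the failure mode seen in this log: a driver that
// has not registered yet cannot be looked up, so the volume operation fails
// fast and the reconciler requeues it.
func newCsiDriverClient(s *csiDriversStore, driverName string) (string, error) {
	ep, ok := s.Get(driverName)
	if !ok {
		return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", driverName)
	}
	return ep, nil
}

func main() {
	store := &csiDriversStore{drivers: map[string]string{}} // nothing registered yet
	_, err := newCsiDriverClient(store, "kubevirt.io.hostpath-provisioner")
	fmt.Println(err) // same error text that recurs throughout this log
}
----- [End editor's note] -----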
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.057828 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 16:55:12 crc kubenswrapper[4606]: E0123 16:55:12.058121 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:12.558106202 +0000 UTC m=+146.558298608 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:55:12 crc kubenswrapper[4606]: W0123 16:55:12.154686 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75d31efc_5fc5_4e4e_a893_93161b48002a.slice/crio-1421eb57e92655f82ac86154fd524f447f775035eeefcf17b31105e184c5fbcb WatchSource:0}: Error finding container 1421eb57e92655f82ac86154fd524f447f775035eeefcf17b31105e184c5fbcb: Status 404 returned error can't find the container with id 1421eb57e92655f82ac86154fd524f447f775035eeefcf17b31105e184c5fbcb
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.159757 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd"
Jan 23 16:55:12 crc kubenswrapper[4606]: E0123 16:55:12.160074 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:12.660060928 +0000 UTC m=+146.660253334 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.262143 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.262216 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q8fqr"]
Jan 23 16:55:12 crc kubenswrapper[4606]: E0123 16:55:12.262574 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:12.762556182 +0000 UTC m=+146.762748588 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.363689 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd"
Jan 23 16:55:12 crc kubenswrapper[4606]: E0123 16:55:12.364399 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:12.864362363 +0000 UTC m=+146.864554769 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.387000 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-v2w4q"]
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.406978 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6ms6" event={"ID":"f0f78495-0b98-497a-9a7d-c93be8decda5","Type":"ContainerStarted","Data":"02933edeca9e163bab42c7a6d26884828b162799cb92e1c502e13ba3eaee9def"}
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.407548 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486445-rsdm2"]
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.437103 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-krwqd"]
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.450034 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qwjhc" event={"ID":"75ba3301-a42f-4a20-9434-37e2fca447c9","Type":"ContainerStarted","Data":"68df01bd03fcc3c7875d6009a5c9ff738295b53329361a3d5a1f2e1799786b31"}
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.468618 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 16:55:12 crc kubenswrapper[4606]: E0123 16:55:12.469007 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:12.968823204 +0000 UTC m=+146.969015660 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.469205 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd"
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.469783 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-44sgl"]
Jan 23 16:55:12 crc kubenswrapper[4606]: E0123 16:55:12.485314 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:12.985279941 +0000 UTC m=+146.985472337 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.488559 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fbv4b" event={"ID":"94efdf3d-0abf-4b5d-944f-efd289975ee1","Type":"ContainerStarted","Data":"da980775b70ce0f5e7829620337617c1a026434e5cb5485699396ea7f4f9c2d0"}
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.513525 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lxbdm" event={"ID":"bf6cc1ab-a0f6-4d93-9c0d-47605134d170","Type":"ContainerStarted","Data":"a890869e6b2dbcd8471be1032dc2cb8774f904506c19b1b4a57f82c9abe15cd7"}
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.513575 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lxbdm" event={"ID":"bf6cc1ab-a0f6-4d93-9c0d-47605134d170","Type":"ContainerStarted","Data":"93076626d9e7f666a86a0c3f75de3cee88109b2aa25cb95ca9db6631419569cf"}
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.544953 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rqqd5" event={"ID":"2fa086b5-8528-4fa7-a649-1722d7e8cdf5","Type":"ContainerStarted","Data":"9d22bf3677748a372ab87838b86c4f85552537fe3280a925b801571f373af5f0"}
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.546315 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rqqd5"
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.547678 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fbv4b" podStartSLOduration=125.54766746600001 podStartE2EDuration="2m5.547667466s" podCreationTimestamp="2026-01-23 16:53:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:12.544999585 +0000 UTC m=+146.545192001" watchObservedRunningTime="2026-01-23 16:55:12.547667466 +0000 UTC m=+146.547859872"
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.562273 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-vjv2l" event={"ID":"75d31efc-5fc5-4e4e-a893-93161b48002a","Type":"ContainerStarted","Data":"1421eb57e92655f82ac86154fd524f447f775035eeefcf17b31105e184c5fbcb"}
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.570018 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 16:55:12 crc kubenswrapper[4606]: E0123 16:55:12.570431 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:13.070415777 +0000 UTC m=+147.070608183 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.631177 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c42wt" event={"ID":"205e3ea9-e17d-4581-bfc0-eef3cb89dc12","Type":"ContainerStarted","Data":"ce8ba40b8e8f319765269301ea25659745c8d5a4142fedb8301406df523689d4"}
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.631225 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c42wt" event={"ID":"205e3ea9-e17d-4581-bfc0-eef3cb89dc12","Type":"ContainerStarted","Data":"4c79f7e946f94bc1ac68933c9258dbe0936d707caa77fcea5dfbe4fd06dcb9ee"}
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.640287 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fzn5r"]
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.672747 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd"
Jan 23 16:55:12 crc kubenswrapper[4606]: E0123 16:55:12.673967 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:13.173955086 +0000 UTC m=+147.174147492 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.686551 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-tzrgs" event={"ID":"2347c841-e571-46f3-b06e-8814dea8d423","Type":"ContainerStarted","Data":"5e069a51fe16e805f82d82836dfa5375599c081f1f55060bbdae153e956285e6"}
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.758534 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lxbdm" podStartSLOduration=123.758515122 podStartE2EDuration="2m3.758515122s" podCreationTimestamp="2026-01-23 16:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:12.656992462 +0000 UTC m=+146.657184878" watchObservedRunningTime="2026-01-23 16:55:12.758515122 +0000 UTC m=+146.758707528"
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.768073 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-mbmmh"]
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.774370 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 16:55:12 crc kubenswrapper[4606]: E0123 16:55:12.774862 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:13.274844956 +0000 UTC m=+147.275037362 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
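----- [Editor's note] -----
The pod_startup_latency_tracker entries in this stretch are internally consistent: podStartSLOduration (in seconds) and podStartE2EDuration are the gap between podCreationTimestamp and watchObservedRunningTime, and firstStartedPulling/lastFinishedPulling are zero timestamps because no image pull was observed. A quick check in Go, using the oauth-openshift-558db77b4-vfv9l entry earlier in the log (the parse layout is chosen here to match the log's rendering; kubelet works with time values directly rather than parsing these strings):

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2026-01-23 16:53:07 +0000 UTC")
	running, _ := time.Parse(layout, "2026-01-23 16:55:11.381342923 +0000 UTC")
	fmt.Println(running.Sub(created)) // 2m4.381342923s, i.e. podStartSLOduration=124.381342923
}
----- [End editor's note] -----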
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.776837 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rqqd5" podStartSLOduration=123.776813732 podStartE2EDuration="2m3.776813732s" podCreationTimestamp="2026-01-23 16:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:12.758254944 +0000 UTC m=+146.758447350" watchObservedRunningTime="2026-01-23 16:55:12.776813732 +0000 UTC m=+146.777006138"
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.781675 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nqpmj" event={"ID":"f33f2b3a-d022-41f9-9a1f-b8bb130128c7","Type":"ContainerStarted","Data":"bef79e4f6376bcb0d621ce3ee82aeefbfd5e37977f605e1a76a01791bc39fa68"}
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.812507 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wc5zr" event={"ID":"f51caafd-44cb-4d91-874e-d8fe0221d01d","Type":"ContainerStarted","Data":"4c58f2fe0ad9f7ef347623cf2ab20d4cce7bcb43d349b8ec7a566bbb7e41e748"}
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.813477 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wc5zr"
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.817731 4606 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-wc5zr container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body=
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.817851 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wc5zr" podUID="f51caafd-44cb-4d91-874e-d8fe0221d01d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused"
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.847325 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8fjr8" event={"ID":"1a2f8837-0976-4952-b9af-6e16dad9c91f","Type":"ContainerStarted","Data":"b626e6c2b791431df93cada3ae248d5a593b4cf11c1a0e33090ab8157d027a0a"}
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.866685 4606 patch_prober.go:28] interesting pod/apiserver-76f77b778f-psdkv container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 23 16:55:12 crc kubenswrapper[4606]: [+]log ok
Jan 23 16:55:12 crc kubenswrapper[4606]: [+]etcd ok
Jan 23 16:55:12 crc kubenswrapper[4606]: [-]poststarthook/start-apiserver-admission-initializer failed: reason withheld
Jan 23 16:55:12 crc kubenswrapper[4606]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 23 16:55:12 crc kubenswrapper[4606]: [+]poststarthook/max-in-flight-filter ok
Jan 23 16:55:12 crc kubenswrapper[4606]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 23 16:55:12 crc kubenswrapper[4606]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Jan 23 16:55:12 crc kubenswrapper[4606]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Jan 23 16:55:12 crc kubenswrapper[4606]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Jan 23 16:55:12 crc kubenswrapper[4606]: [+]poststarthook/project.openshift.io-projectcache ok
Jan 23 16:55:12 crc kubenswrapper[4606]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Jan 23 16:55:12 crc kubenswrapper[4606]: [+]poststarthook/openshift.io-startinformers ok
Jan 23 16:55:12 crc kubenswrapper[4606]: [+]poststarthook/openshift.io-restmapperupdater ok
Jan 23 16:55:12 crc kubenswrapper[4606]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 23 16:55:12 crc kubenswrapper[4606]: livez check failed
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.866740 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-psdkv" podUID="f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.887833 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5w4kv" event={"ID":"6fa355f0-0cef-464e-af52-d92cd53c090a","Type":"ContainerStarted","Data":"2c75ead527f6f130dfabbe2d4ec2d62ec07512a683f2da9a0bf87eb45ff87472"}
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.888728 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd"
Jan 23 16:55:12 crc kubenswrapper[4606]: E0123 16:55:12.889822 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:13.389789052 +0000 UTC m=+147.389981458 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.913311 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-vrcsb" event={"ID":"98245824-432e-48b3-8bd1-a8f7cc2e42d2","Type":"ContainerStarted","Data":"5e393af6803d4f3d862f741828f9fc845268740803f080abdf5b2507a7534fbc"}
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.917643 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-hms7g"]
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.934675 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gmw5q"]
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.953894 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-n5jq4"]
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.970974 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psvzs" event={"ID":"b470f1c1-e30e-401e-9bba-16d821bd7b4f","Type":"ContainerStarted","Data":"c961e321617d426e61acb1d824b5976f4c12247fa044f20e69c2f2d3095aea35"}
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.971423 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psvzs"
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.984075 4606 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-psvzs container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" start-of-body=
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.984122 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psvzs" podUID="b470f1c1-e30e-401e-9bba-16d821bd7b4f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused"
Jan 23 16:55:12 crc kubenswrapper[4606]: W0123 16:55:12.984208 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe1d04b8_395a_4c61_9d3f_d6caf82a8a20.slice/crio-bfe9e60c1af889db8caf8237007aa352ce7a4850613624627eb24dc843efbf35 WatchSource:0}: Error finding container bfe9e60c1af889db8caf8237007aa352ce7a4850613624627eb24dc843efbf35: Status 404 returned error can't find the container with id bfe9e60c1af889db8caf8237007aa352ce7a4850613624627eb24dc843efbf35
Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.993060 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
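----- [Editor's note] -----
Two distinct probe-failure shapes appear in this stretch. "connect: connection refused" (olm-operator, packageserver) means nothing is listening on the container's port yet, so the kubelet's HTTP GET fails at dial time; "HTTP probe failed with statuscode: 500" (openshift-apiserver above, and the router later) means the server answered but its aggregated healthz, the [+]/[-] checklist above, still has failing checks. Below is a rough sketch of that distinction, simplified from what the kubelet prober does (the real one also sets probe headers and uses the probe's configured timeout):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probe performs one HTTP GET roughly the way a readiness/startup probe does:
// transport errors (e.g. connection refused) and non-2xx/3xx statuses are both
// failures, and they surface in the kubelet log with different messages.
func probe(url string) error {
	client := &http.Client{
		Timeout: time.Second,
		// Probes often hit self-signed serving certs, so verification is skipped here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. dial tcp 10.217.0.31:5443: connect: connection refused
	}
	defer resp.Body.Close()
	if resp.StatusCode >= http.StatusOK && resp.StatusCode < http.StatusBadRequest {
		return nil
	}
	return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
}

func main() {
	fmt.Println(probe("https://10.217.0.31:5443/healthz"))
}
----- [End editor's note] -----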
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:12 crc kubenswrapper[4606]: I0123 16:55:12.994071 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nqpmj" podStartSLOduration=125.994055507 podStartE2EDuration="2m5.994055507s" podCreationTimestamp="2026-01-23 16:53:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:12.978734647 +0000 UTC m=+146.978927053" watchObservedRunningTime="2026-01-23 16:55:12.994055507 +0000 UTC m=+146.994247913" Jan 23 16:55:12 crc kubenswrapper[4606]: E0123 16:55:12.994639 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:13.494621665 +0000 UTC m=+147.494814071 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:13 crc kubenswrapper[4606]: I0123 16:55:13.042871 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pdwg7" event={"ID":"b5f2f224-7031-4b81-ba0c-33e9de7311f0","Type":"ContainerStarted","Data":"4c16efa0be1be7cc6af51249df53c7fe1b951881c7ad3242ea62d8e8f736158e"} Jan 23 16:55:13 crc kubenswrapper[4606]: I0123 16:55:13.065797 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6lv6t" event={"ID":"23c38e5b-d867-490a-b4d8-b5b4fe00b839","Type":"ContainerStarted","Data":"dba1ba5e9eed25792ec29229515cffcd7fee18b4e29fcd53e8da4b6936334862"} Jan 23 16:55:13 crc kubenswrapper[4606]: I0123 16:55:13.065910 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rqqd5" Jan 23 16:55:13 crc kubenswrapper[4606]: I0123 16:55:13.067150 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psvzs" podStartSLOduration=124.067094432 podStartE2EDuration="2m4.067094432s" podCreationTimestamp="2026-01-23 16:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:13.047556 +0000 UTC m=+147.047748406" watchObservedRunningTime="2026-01-23 16:55:13.067094432 +0000 UTC m=+147.067286858" Jan 23 16:55:13 crc kubenswrapper[4606]: I0123 16:55:13.097146 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:13 crc kubenswrapper[4606]: I0123 16:55:13.098433 4606 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-ttzvm"] Jan 23 16:55:13 crc kubenswrapper[4606]: E0123 16:55:13.098780 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:13.598766995 +0000 UTC m=+147.598959471 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:13 crc kubenswrapper[4606]: I0123 16:55:13.117967 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-vrcsb" podStartSLOduration=125.117946756 podStartE2EDuration="2m5.117946756s" podCreationTimestamp="2026-01-23 16:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:13.116173806 +0000 UTC m=+147.116366222" watchObservedRunningTime="2026-01-23 16:55:13.117946756 +0000 UTC m=+147.118139162" Jan 23 16:55:13 crc kubenswrapper[4606]: I0123 16:55:13.119963 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-jwmbk"] Jan 23 16:55:13 crc kubenswrapper[4606]: I0123 16:55:13.181814 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-2b7xg" event={"ID":"eecab59c-60d2-4a32-955c-dbf919399422","Type":"ContainerStarted","Data":"fc7bc01fccb6e6a71bc80752b1a712409d084512f184d0467948bb9679cc0289"} Jan 23 16:55:13 crc kubenswrapper[4606]: I0123 16:55:13.200614 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:13 crc kubenswrapper[4606]: E0123 16:55:13.201931 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:13.701903681 +0000 UTC m=+147.702096087 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:13 crc kubenswrapper[4606]: I0123 16:55:13.209986 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-6fb9c" Jan 23 16:55:13 crc kubenswrapper[4606]: I0123 16:55:13.213167 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc" Jan 23 16:55:13 crc kubenswrapper[4606]: I0123 16:55:13.246028 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wc5zr" podStartSLOduration=124.246011577 podStartE2EDuration="2m4.246011577s" podCreationTimestamp="2026-01-23 16:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:13.244912949 +0000 UTC m=+147.245105355" watchObservedRunningTime="2026-01-23 16:55:13.246011577 +0000 UTC m=+147.246203983" Jan 23 16:55:13 crc kubenswrapper[4606]: I0123 16:55:13.246231 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-8fjr8" podStartSLOduration=7.246227284 podStartE2EDuration="7.246227284s" podCreationTimestamp="2026-01-23 16:55:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:13.17857307 +0000 UTC m=+147.178765476" watchObservedRunningTime="2026-01-23 16:55:13.246227284 +0000 UTC m=+147.246419690" Jan 23 16:55:13 crc kubenswrapper[4606]: I0123 16:55:13.271888 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 16:55:13 crc kubenswrapper[4606]: I0123 16:55:13.286796 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5w4kv" podStartSLOduration=124.286776558 podStartE2EDuration="2m4.286776558s" podCreationTimestamp="2026-01-23 16:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:13.283632731 +0000 UTC m=+147.283825147" watchObservedRunningTime="2026-01-23 16:55:13.286776558 +0000 UTC m=+147.286968964" Jan 23 16:55:13 crc kubenswrapper[4606]: I0123 16:55:13.302649 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:13 crc kubenswrapper[4606]: E0123 16:55:13.306110 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2026-01-23 16:55:13.806093432 +0000 UTC m=+147.806285838 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:13 crc kubenswrapper[4606]: I0123 16:55:13.332190 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-2b7xg" podStartSLOduration=124.332173497 podStartE2EDuration="2m4.332173497s" podCreationTimestamp="2026-01-23 16:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:13.330135618 +0000 UTC m=+147.330328054" watchObservedRunningTime="2026-01-23 16:55:13.332173497 +0000 UTC m=+147.332365903" Jan 23 16:55:13 crc kubenswrapper[4606]: I0123 16:55:13.374639 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pdwg7" podStartSLOduration=124.374571104 podStartE2EDuration="2m4.374571104s" podCreationTimestamp="2026-01-23 16:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:13.37237554 +0000 UTC m=+147.372567946" watchObservedRunningTime="2026-01-23 16:55:13.374571104 +0000 UTC m=+147.374763510" Jan 23 16:55:13 crc kubenswrapper[4606]: I0123 16:55:13.407251 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:13 crc kubenswrapper[4606]: E0123 16:55:13.408293 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:13.908276686 +0000 UTC m=+147.908469092 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:13 crc kubenswrapper[4606]: I0123 16:55:13.508736 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:13 crc kubenswrapper[4606]: E0123 16:55:13.509102 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:14.009088243 +0000 UTC m=+148.009280649 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:13 crc kubenswrapper[4606]: I0123 16:55:13.612743 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:13 crc kubenswrapper[4606]: E0123 16:55:13.612888 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:14.112868072 +0000 UTC m=+148.113060478 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:13 crc kubenswrapper[4606]: I0123 16:55:13.613194 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:13 crc kubenswrapper[4606]: E0123 16:55:13.613661 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:14.113648887 +0000 UTC m=+148.113841303 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:13 crc kubenswrapper[4606]: I0123 16:55:13.687747 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-vrcsb" Jan 23 16:55:13 crc kubenswrapper[4606]: I0123 16:55:13.693833 4606 patch_prober.go:28] interesting pod/router-default-5444994796-vrcsb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 16:55:13 crc kubenswrapper[4606]: [-]has-synced failed: reason withheld Jan 23 16:55:13 crc kubenswrapper[4606]: [+]process-running ok Jan 23 16:55:13 crc kubenswrapper[4606]: healthz check failed Jan 23 16:55:13 crc kubenswrapper[4606]: I0123 16:55:13.693879 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vrcsb" podUID="98245824-432e-48b3-8bd1-a8f7cc2e42d2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 16:55:13 crc kubenswrapper[4606]: I0123 16:55:13.715040 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:13 crc kubenswrapper[4606]: E0123 16:55:13.715393 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:14.215367376 +0000 UTC m=+148.215559782 (durationBeforeRetry 500ms). 
Jan 23 16:55:13 crc kubenswrapper[4606]: E0123 16:55:13.715393 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:14.215367376 +0000 UTC m=+148.215559782 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:55:13 crc kubenswrapper[4606]: I0123 16:55:13.715562 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd"
Jan 23 16:55:13 crc kubenswrapper[4606]: E0123 16:55:13.715929 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:14.215911824 +0000 UTC m=+148.216104230 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:55:13 crc kubenswrapper[4606]: I0123 16:55:13.772341 4606 csr.go:261] certificate signing request csr-4nq45 is approved, waiting to be issued
Jan 23 16:55:13 crc kubenswrapper[4606]: I0123 16:55:13.792065 4606 csr.go:257] certificate signing request csr-4nq45 is issued
Jan 23 16:55:13 crc kubenswrapper[4606]: I0123 16:55:13.820191 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 16:55:13 crc kubenswrapper[4606]: E0123 16:55:13.820665 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:14.320645944 +0000 UTC m=+148.320838360 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:55:13 crc kubenswrapper[4606]: I0123 16:55:13.921296 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd"
Jan 23 16:55:13 crc kubenswrapper[4606]: E0123 16:55:13.921655 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:14.421640487 +0000 UTC m=+148.421832893 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.133505 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 16:55:14 crc kubenswrapper[4606]: E0123 16:55:14.133799 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:14.633783948 +0000 UTC m=+148.633976354 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.255927 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd"
Jan 23 16:55:14 crc kubenswrapper[4606]: E0123 16:55:14.256276 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:14.756259069 +0000 UTC m=+148.756451505 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.278576 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-44sgl" event={"ID":"8e0b8142-384b-4f00-b129-37115fd01372","Type":"ContainerStarted","Data":"f28716736fe9631be0c0417be944fb300a3435446f64a1d82f9f4e18060c9134"}
Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.278638 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-44sgl" event={"ID":"8e0b8142-384b-4f00-b129-37115fd01372","Type":"ContainerStarted","Data":"4f4752573134b0a746feb6d86a8b3dc8426e253a6f68a08118bb79840b3370c4"}
Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.293714 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-krwqd" event={"ID":"37f49603-1b09-4a49-8bc5-240b4d10bb46","Type":"ContainerStarted","Data":"b32490b321d1e6f5ce764b34120cf143e2d08af3f8346263b7cff2bdda42871a"}
Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.293758 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-krwqd" event={"ID":"37f49603-1b09-4a49-8bc5-240b4d10bb46","Type":"ContainerStarted","Data":"d20cf2d1be3f3e3deb464212eeedd3a8cb0509b8988ecc2da44a1b73bf9947cd"}
Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.363071 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.363300 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6lv6t" event={"ID":"23c38e5b-d867-490a-b4d8-b5b4fe00b839","Type":"ContainerStarted","Data":"b7c71b21302580b0080f979995b1f938224cb95fbf31fcbf48d560588d76cd4a"}
Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.363346 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6lv6t" event={"ID":"23c38e5b-d867-490a-b4d8-b5b4fe00b839","Type":"ContainerStarted","Data":"674d76c8e5687c8efaa221cc0632adf5e3df77a6b5867f83a09542f37fc19770"}
Jan 23 16:55:14 crc kubenswrapper[4606]: E0123 16:55:14.363908 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:14.863888217 +0000 UTC m=+148.864080623 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.399185 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-44sgl" podStartSLOduration=126.399162203 podStartE2EDuration="2m6.399162203s" podCreationTimestamp="2026-01-23 16:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:14.364509328 +0000 UTC m=+148.364701734" watchObservedRunningTime="2026-01-23 16:55:14.399162203 +0000 UTC m=+148.399354599"
Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.476983 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd"
Jan 23 16:55:14 crc kubenswrapper[4606]: E0123 16:55:14.477317 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:14.977300552 +0000 UTC m=+148.977492958 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.511248 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hms7g" event={"ID":"be1d04b8-395a-4c61-9d3f-d6caf82a8a20","Type":"ContainerStarted","Data":"bfe9e60c1af889db8caf8237007aa352ce7a4850613624627eb24dc843efbf35"}
Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.575554 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-krwqd" podStartSLOduration=125.575537181 podStartE2EDuration="2m5.575537181s" podCreationTimestamp="2026-01-23 16:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:14.57463661 +0000 UTC m=+148.574829016" watchObservedRunningTime="2026-01-23 16:55:14.575537181 +0000 UTC m=+148.575729587"
Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.577847 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 16:55:14 crc kubenswrapper[4606]: E0123 16:55:14.578893 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:15.078876055 +0000 UTC m=+149.079068451 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.682918 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486445-rsdm2" event={"ID":"559500a2-4399-42d0-832b-719429dd6fd9","Type":"ContainerStarted","Data":"6502ed4d3859f78d0d11a9d95677491906bf67dc59622bd993c57a31a2293cce"}
Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.683169 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486445-rsdm2" event={"ID":"559500a2-4399-42d0-832b-719429dd6fd9","Type":"ContainerStarted","Data":"1e9edc9ed52ab52d579ee6bcf8df40726e16f84c29b1116ed5a096f75904125a"}
Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.683950 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd"
Jan 23 16:55:14 crc kubenswrapper[4606]: E0123 16:55:14.684251 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:15.184239986 +0000 UTC m=+149.184432392 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.709048 4606 patch_prober.go:28] interesting pod/router-default-5444994796-vrcsb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 23 16:55:14 crc kubenswrapper[4606]: [-]has-synced failed: reason withheld
Jan 23 16:55:14 crc kubenswrapper[4606]: [+]process-running ok
Jan 23 16:55:14 crc kubenswrapper[4606]: healthz check failed
Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.709103 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vrcsb" podUID="98245824-432e-48b3-8bd1-a8f7cc2e42d2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.709266 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-n5jq4" event={"ID":"d78dc70d-59e8-4235-b345-f523b414fb31","Type":"ContainerStarted","Data":"5dd0c77c2e148b95cd4bc43264c5522cdadac7c84c826f0899df3e8360dfe957"}
Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.710831 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-tzrgs" event={"ID":"2347c841-e571-46f3-b06e-8814dea8d423","Type":"ContainerStarted","Data":"be878dc9feb1669408290ddd02aa22d652efb773f73fa2a92b06e5ca5abe5967"}
Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.712714 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-2b7xg" event={"ID":"eecab59c-60d2-4a32-955c-dbf919399422","Type":"ContainerStarted","Data":"7a9ae7a62e8901f25638723cae815ce148539db9c8b2707da9f0e9f75360adcc"}
Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.714019 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jwmbk" event={"ID":"f4bb3e14-2008-4060-ac74-cb8d90c6d63f","Type":"ContainerStarted","Data":"efe8ced4b6227c9b3b5d842d88d825294c54675c522c502bd99d17093580e374"}
Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.714954 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ttzvm" event={"ID":"eca7fead-bd4b-439e-8c7a-d630325aa36c","Type":"ContainerStarted","Data":"515b84e8d0459bfcf5e9e2b6e16684cbe9ec70a730d13a0f1254e2d8a4436587"}
Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.715919 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wc5zr" event={"ID":"f51caafd-44cb-4d91-874e-d8fe0221d01d","Type":"ContainerStarted","Data":"08982717504e6b9006c2464c348904cb9aa7ed473a0a14b88de5f03703d5b46c"}
output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.717308 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wc5zr" podUID="f51caafd-44cb-4d91-874e-d8fe0221d01d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.718649 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q8fqr" event={"ID":"78e9dd5c-8577-4ff2-affa-519a64352ac9","Type":"ContainerStarted","Data":"b11985755d15b864270f15290a3673233737a3adc0ca7e225968dc406b93ec5c"} Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.718671 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q8fqr" event={"ID":"78e9dd5c-8577-4ff2-affa-519a64352ac9","Type":"ContainerStarted","Data":"ca54bb035e76d96d509f0eb3a189fec7eb57e2ae9073cb2e54a47124312d1e61"} Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.720201 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-mbmmh" event={"ID":"10da464d-1ee1-401e-8f78-dafaa4d97ea4","Type":"ContainerStarted","Data":"2a8095a166152d8c0c68b323d6e1773495e68ed7d31adf939c50de854d0eac19"} Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.721650 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pdwg7" event={"ID":"b5f2f224-7031-4b81-ba0c-33e9de7311f0","Type":"ContainerStarted","Data":"f7508a5f4e0dfc751fda0f366fe81d4a7c1c62e461c4bddeee49a403de442f8c"} Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.723778 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c42wt" event={"ID":"205e3ea9-e17d-4581-bfc0-eef3cb89dc12","Type":"ContainerStarted","Data":"cf16b7a1f7c5d1166a9d79f7b231081e215d6e7b6148cf507e3f25f549a34b2e"} Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.724393 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c42wt" Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.766424 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6ms6" event={"ID":"f0f78495-0b98-497a-9a7d-c93be8decda5","Type":"ContainerStarted","Data":"1265ef55e1a8c479609ae471e425f12c3f819c41e8b256fb6c194b2c87a1f9ce"} Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.767357 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6ms6" Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.788164 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gmw5q" event={"ID":"28446a05-f08d-465c-ac42-782a677b3511","Type":"ContainerStarted","Data":"6a9d70470bf92f21959d31ce4abdaa3b0f506c22cba4b02e63c986cf92677913"} Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.793175 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:14 crc kubenswrapper[4606]: E0123 16:55:14.794780 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:15.294746582 +0000 UTC m=+149.294938988 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.798941 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-23 16:50:13 +0000 UTC, rotation deadline is 2026-10-12 15:58:11.373683855 +0000 UTC Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.798960 4606 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6287h2m56.574726061s for next certificate rotation Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.821785 4606 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-j6ms6 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.821829 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6ms6" podUID="f0f78495-0b98-497a-9a7d-c93be8decda5" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.829744 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qwjhc" event={"ID":"75ba3301-a42f-4a20-9434-37e2fca447c9","Type":"ContainerStarted","Data":"17465966edf93da4c305525581a0b20de40c7e56e4c051a847c3116e399ebaa9"} Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.849702 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v2w4q" event={"ID":"fb6c50be-9bef-40c5-8c37-ec6435d574ee","Type":"ContainerStarted","Data":"2418cc8898d728f3d9bfcfdc8543707ae184e70fc95ae5fbb1bc023701799f04"} Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.849762 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v2w4q" event={"ID":"fb6c50be-9bef-40c5-8c37-ec6435d574ee","Type":"ContainerStarted","Data":"6352a173b3adc14ab408525ffc89915d403e7cc69cc6ac9154c5a814b6e73c38"} Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.873096 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fzn5r" 
event={"ID":"8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac","Type":"ContainerStarted","Data":"49b21411833f170fa6c3694e598bbece8a002db80795d76e006fa6f7a34af1a4"} Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.874929 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-c9dlf" event={"ID":"ac74f2a9-a8bf-4787-b705-1cab5f1c56aa","Type":"ContainerStarted","Data":"d60112bc63b2561fc0c115bfd94952ea9f0cf816118582baca21fba217b23a1d"} Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.874949 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-c9dlf" event={"ID":"ac74f2a9-a8bf-4787-b705-1cab5f1c56aa","Type":"ContainerStarted","Data":"a58bb5ab7281fdbd5eb319cc1d1a37548f65219a83728c8d4df2d5702a2d86cc"} Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.877280 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-vjv2l" event={"ID":"75d31efc-5fc5-4e4e-a893-93161b48002a","Type":"ContainerStarted","Data":"d52516add55a92b5301d928e7f50f19b1f4903f916e4bb1458cfb011f8b023f5"} Jan 23 16:55:14 crc kubenswrapper[4606]: I0123 16:55:14.907714 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:14 crc kubenswrapper[4606]: E0123 16:55:14.920549 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:15.420535395 +0000 UTC m=+149.420727801 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:15 crc kubenswrapper[4606]: I0123 16:55:15.021171 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:15 crc kubenswrapper[4606]: E0123 16:55:15.021827 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:15.521810558 +0000 UTC m=+149.522002964 (durationBeforeRetry 500ms). 
Jan 23 16:55:15 crc kubenswrapper[4606]: E0123 16:55:15.021827 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:15.521810558 +0000 UTC m=+149.522002964 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:55:15 crc kubenswrapper[4606]: I0123 16:55:15.123255 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd"
Jan 23 16:55:15 crc kubenswrapper[4606]: E0123 16:55:15.123577 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:15.623564227 +0000 UTC m=+149.623756623 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:55:15 crc kubenswrapper[4606]: I0123 16:55:15.226557 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 16:55:15 crc kubenswrapper[4606]: E0123 16:55:15.226976 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:15.726945791 +0000 UTC m=+149.727138197 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:55:15 crc kubenswrapper[4606]: I0123 16:55:15.328846 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd"
Jan 23 16:55:15 crc kubenswrapper[4606]: E0123 16:55:15.329373 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:15.829362283 +0000 UTC m=+149.829554689 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:55:15 crc kubenswrapper[4606]: I0123 16:55:15.430178 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 16:55:15 crc kubenswrapper[4606]: I0123 16:55:15.430425 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 16:55:15 crc kubenswrapper[4606]: I0123 16:55:15.430452 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 16:55:15 crc kubenswrapper[4606]: I0123 16:55:15.430495 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:55:15 crc kubenswrapper[4606]: E0123 16:55:15.431100 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:15.9310734 +0000 UTC m=+149.931265806 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:15 crc kubenswrapper[4606]: I0123 16:55:15.434773 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:55:15 crc kubenswrapper[4606]: I0123 16:55:15.446541 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:55:15 crc kubenswrapper[4606]: I0123 16:55:15.474761 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:55:15 crc kubenswrapper[4606]: I0123 16:55:15.489457 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:55:15 crc kubenswrapper[4606]: I0123 16:55:15.531468 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:15 crc kubenswrapper[4606]: E0123 16:55:15.531843 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:16.031825585 +0000 UTC m=+150.032017991 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:15 crc kubenswrapper[4606]: I0123 16:55:15.645237 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:15 crc kubenswrapper[4606]: E0123 16:55:15.645611 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:16.145596981 +0000 UTC m=+150.145789387 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:15 crc kubenswrapper[4606]: I0123 16:55:15.685882 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 16:55:15 crc kubenswrapper[4606]: I0123 16:55:15.700007 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 16:55:15 crc kubenswrapper[4606]: I0123 16:55:15.702072 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:55:15 crc kubenswrapper[4606]: I0123 16:55:15.704256 4606 patch_prober.go:28] interesting pod/router-default-5444994796-vrcsb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 16:55:15 crc kubenswrapper[4606]: [-]has-synced failed: reason withheld Jan 23 16:55:15 crc kubenswrapper[4606]: [+]process-running ok Jan 23 16:55:15 crc kubenswrapper[4606]: healthz check failed Jan 23 16:55:15 crc kubenswrapper[4606]: I0123 16:55:15.704518 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vrcsb" podUID="98245824-432e-48b3-8bd1-a8f7cc2e42d2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 16:55:15 crc kubenswrapper[4606]: I0123 16:55:15.753381 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6lv6t" podStartSLOduration=127.753366054 podStartE2EDuration="2m7.753366054s" podCreationTimestamp="2026-01-23 16:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:14.731955813 +0000 UTC m=+148.732148219" watchObservedRunningTime="2026-01-23 16:55:15.753366054 +0000 UTC m=+149.753558460" Jan 23 16:55:15 crc kubenswrapper[4606]: I0123 16:55:15.786127 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:15 crc kubenswrapper[4606]: E0123 16:55:15.786667 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:16.286638472 +0000 UTC m=+150.286830878 (durationBeforeRetry 500ms). 
Jan 23 16:55:15 crc kubenswrapper[4606]: E0123 16:55:15.786667 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:16.286638472 +0000 UTC m=+150.286830878 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:55:15 crc kubenswrapper[4606]: I0123 16:55:15.839490 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-mbmmh" podStartSLOduration=9.839474383 podStartE2EDuration="9.839474383s" podCreationTimestamp="2026-01-23 16:55:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:15.838737548 +0000 UTC m=+149.838929954" watchObservedRunningTime="2026-01-23 16:55:15.839474383 +0000 UTC m=+149.839666779"
Jan 23 16:55:15 crc kubenswrapper[4606]: I0123 16:55:15.840935 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-q8fqr" podStartSLOduration=127.840926542 podStartE2EDuration="2m7.840926542s" podCreationTimestamp="2026-01-23 16:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:15.754746381 +0000 UTC m=+149.754938787" watchObservedRunningTime="2026-01-23 16:55:15.840926542 +0000 UTC m=+149.841118948"
Jan 23 16:55:15 crc kubenswrapper[4606]: I0123 16:55:15.885798 4606 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-psvzs container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 23 16:55:15 crc kubenswrapper[4606]: I0123 16:55:15.886031 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psvzs" podUID="b470f1c1-e30e-401e-9bba-16d821bd7b4f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 23 16:55:15 crc kubenswrapper[4606]: I0123 16:55:15.921446 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 16:55:15 crc kubenswrapper[4606]: E0123 16:55:15.921721 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:16.421706121 +0000 UTC m=+150.421898527 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:55:15 crc kubenswrapper[4606]: I0123 16:55:15.964862 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v2w4q" event={"ID":"fb6c50be-9bef-40c5-8c37-ec6435d574ee","Type":"ContainerStarted","Data":"a604ac8236afd3e087640dd39688835a3f881b24a0c56dd8c4e681913bd3d910"}
Jan 23 16:55:15 crc kubenswrapper[4606]: I0123 16:55:15.988440 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jwmbk" event={"ID":"f4bb3e14-2008-4060-ac74-cb8d90c6d63f","Type":"ContainerStarted","Data":"66d1448fb9c38c8ac01c73fbdaf994dcfb42f03b52f92cc7269e3d67e4da411b"}
Jan 23 16:55:15 crc kubenswrapper[4606]: I0123 16:55:15.988500 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jwmbk" event={"ID":"f4bb3e14-2008-4060-ac74-cb8d90c6d63f","Type":"ContainerStarted","Data":"57d3e493fc69f72057445aa54a52cad99e9755e9cf7556f3cf68e67f5b2956b7"}
Jan 23 16:55:15 crc kubenswrapper[4606]: I0123 16:55:15.996814 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fzn5r" event={"ID":"8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac","Type":"ContainerStarted","Data":"64209d73b38128d55e8ba89e338d5eaf9878fdbe31ac3ed4fd713164d9c2ad58"}
Jan 23 16:55:15 crc kubenswrapper[4606]: I0123 16:55:15.997877 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-fzn5r"
Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.002709 4606 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fzn5r container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body=
Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.002749 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fzn5r" podUID="8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused"
Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.022471 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd"
Jan 23 16:55:16 crc kubenswrapper[4606]: E0123 16:55:16.024348 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:16.524334379 +0000 UTC m=+150.524526785 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.029945 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ttzvm" event={"ID":"eca7fead-bd4b-439e-8c7a-d630325aa36c","Type":"ContainerStarted","Data":"f83de7357af66852700ba4638cf1f082df838908cd56920987e95538a3bd6a0c"}
Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.029991 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ttzvm" event={"ID":"eca7fead-bd4b-439e-8c7a-d630325aa36c","Type":"ContainerStarted","Data":"1f92da04fce5b0035add584db2904d624483085e063e3b9a7d9aac572fa84094"}
Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.059243 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c42wt" podStartSLOduration=127.059206031 podStartE2EDuration="2m7.059206031s" podCreationTimestamp="2026-01-23 16:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:15.979691056 +0000 UTC m=+149.979883472" watchObservedRunningTime="2026-01-23 16:55:16.059206031 +0000 UTC m=+150.059398437"
Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.071369 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-n5jq4" event={"ID":"d78dc70d-59e8-4235-b345-f523b414fb31","Type":"ContainerStarted","Data":"97abaf729ff98f14a3c6041c9e3ca55cef9eaeeef535c2e3dea10c1e8cbe71f8"}
Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.101921 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hms7g" event={"ID":"be1d04b8-395a-4c61-9d3f-d6caf82a8a20","Type":"ContainerStarted","Data":"5e8c055180a6f2cd5bcb01ed2979370aee7c35e8cb387a9272882a860acbd56c"}
Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.104257 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-mbmmh" event={"ID":"10da464d-1ee1-401e-8f78-dafaa4d97ea4","Type":"ContainerStarted","Data":"d5735c04e4ca780d6db5c484f23693bc8b992d155b0822a3f1e99c88ff28d5b2"}
Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.129992 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 16:55:16 crc kubenswrapper[4606]: E0123 16:55:16.130437 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:16.630418825 +0000 UTC m=+150.630611231 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.147099 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29486445-rsdm2" podStartSLOduration=128.1470776 podStartE2EDuration="2m8.1470776s" podCreationTimestamp="2026-01-23 16:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:16.137456634 +0000 UTC m=+150.137649040" watchObservedRunningTime="2026-01-23 16:55:16.1470776 +0000 UTC m=+150.147270026"
Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.149672 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gmw5q" event={"ID":"28446a05-f08d-465c-ac42-782a677b3511","Type":"ContainerStarted","Data":"64a9fa6b8f75c2dbd6e5f40e2aa0623a8751093965343f320c34eb1779887351"}
Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.177910 4606 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-j6ms6 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body=
Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.177972 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6ms6" podUID="f0f78495-0b98-497a-9a7d-c93be8decda5" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused"
Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.178889 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-tzrgs" podStartSLOduration=128.178875977 podStartE2EDuration="2m8.178875977s" podCreationTimestamp="2026-01-23 16:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:16.071124864 +0000 UTC m=+150.071317270" watchObservedRunningTime="2026-01-23 16:55:16.178875977 +0000 UTC m=+150.179068383"
Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.188896 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wc5zr"
Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.237316 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd"
16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.239295 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-vjv2l" podStartSLOduration=128.239272514 podStartE2EDuration="2m8.239272514s" podCreationTimestamp="2026-01-23 16:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:16.234060838 +0000 UTC m=+150.234253254" watchObservedRunningTime="2026-01-23 16:55:16.239272514 +0000 UTC m=+150.239464920" Jan 23 16:55:16 crc kubenswrapper[4606]: E0123 16:55:16.255007 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:16.754989667 +0000 UTC m=+150.755182073 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.323110 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jwmbk" podStartSLOduration=127.323089085 podStartE2EDuration="2m7.323089085s" podCreationTimestamp="2026-01-23 16:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:16.318727567 +0000 UTC m=+150.318919983" watchObservedRunningTime="2026-01-23 16:55:16.323089085 +0000 UTC m=+150.323281491" Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.340833 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:16 crc kubenswrapper[4606]: E0123 16:55:16.341386 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:16.841368684 +0000 UTC m=+150.841561090 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.441834 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:16 crc kubenswrapper[4606]: E0123 16:55:16.442117 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:16.942104859 +0000 UTC m=+150.942297255 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.446634 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-c9dlf" podStartSLOduration=127.446616542 podStartE2EDuration="2m7.446616542s" podCreationTimestamp="2026-01-23 16:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:16.374795778 +0000 UTC m=+150.374988184" watchObservedRunningTime="2026-01-23 16:55:16.446616542 +0000 UTC m=+150.446808948" Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.496404 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v2w4q" podStartSLOduration=127.496297246 podStartE2EDuration="2m7.496297246s" podCreationTimestamp="2026-01-23 16:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:16.449052425 +0000 UTC m=+150.449244841" watchObservedRunningTime="2026-01-23 16:55:16.496297246 +0000 UTC m=+150.496489652" Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.499566 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gmw5q" podStartSLOduration=128.499557256 podStartE2EDuration="2m8.499557256s" podCreationTimestamp="2026-01-23 16:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:16.49875184 +0000 UTC m=+150.498944246" watchObservedRunningTime="2026-01-23 16:55:16.499557256 +0000 UTC m=+150.499749662" Jan 23 
16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.542668 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:16 crc kubenswrapper[4606]: E0123 16:55:16.542845 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:17.042823493 +0000 UTC m=+151.043015899 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.542946 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:16 crc kubenswrapper[4606]: E0123 16:55:16.543279 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:17.043242627 +0000 UTC m=+151.043435053 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.588111 4606 patch_prober.go:28] interesting pod/downloads-7954f5f757-stn97 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.588163 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-stn97" podUID="5f5bac36-b78c-4f29-b08e-419fe3010b57" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.588188 4606 patch_prober.go:28] interesting pod/downloads-7954f5f757-stn97 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.588237 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-stn97" podUID="5f5bac36-b78c-4f29-b08e-419fe3010b57" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.600166 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6ms6" podStartSLOduration=127.600152636 podStartE2EDuration="2m7.600152636s" podCreationTimestamp="2026-01-23 16:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:16.599172763 +0000 UTC m=+150.599365169" watchObservedRunningTime="2026-01-23 16:55:16.600152636 +0000 UTC m=+150.600345042" Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.644678 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:16 crc kubenswrapper[4606]: E0123 16:55:16.644967 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:17.144948375 +0000 UTC m=+151.145140781 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.708474 4606 patch_prober.go:28] interesting pod/router-default-5444994796-vrcsb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 16:55:16 crc kubenswrapper[4606]: [-]has-synced failed: reason withheld Jan 23 16:55:16 crc kubenswrapper[4606]: [+]process-running ok Jan 23 16:55:16 crc kubenswrapper[4606]: healthz check failed Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.708538 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vrcsb" podUID="98245824-432e-48b3-8bd1-a8f7cc2e42d2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.708961 4606 patch_prober.go:28] interesting pod/apiserver-76f77b778f-psdkv container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 23 16:55:16 crc kubenswrapper[4606]: [+]log ok Jan 23 16:55:16 crc kubenswrapper[4606]: [+]etcd ok Jan 23 16:55:16 crc kubenswrapper[4606]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 23 16:55:16 crc kubenswrapper[4606]: [+]poststarthook/generic-apiserver-start-informers ok Jan 23 16:55:16 crc kubenswrapper[4606]: [+]poststarthook/max-in-flight-filter ok Jan 23 16:55:16 crc kubenswrapper[4606]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 23 16:55:16 crc kubenswrapper[4606]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 23 16:55:16 crc kubenswrapper[4606]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 23 16:55:16 crc kubenswrapper[4606]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Jan 23 16:55:16 crc kubenswrapper[4606]: [+]poststarthook/project.openshift.io-projectcache ok Jan 23 16:55:16 crc kubenswrapper[4606]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 23 16:55:16 crc kubenswrapper[4606]: [+]poststarthook/openshift.io-startinformers ok Jan 23 16:55:16 crc kubenswrapper[4606]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 23 16:55:16 crc kubenswrapper[4606]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 23 16:55:16 crc kubenswrapper[4606]: livez check failed Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.709015 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-psdkv" podUID="f0fb6e1a-4ea4-44ca-8e75-d3eb2ba327a9" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.719938 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-fzn5r" podStartSLOduration=127.719915446 podStartE2EDuration="2m7.719915446s" 
podCreationTimestamp="2026-01-23 16:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:16.658717942 +0000 UTC m=+150.658910368" watchObservedRunningTime="2026-01-23 16:55:16.719915446 +0000 UTC m=+150.720107852" Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.746956 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:16 crc kubenswrapper[4606]: E0123 16:55:16.747334 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:17.247322825 +0000 UTC m=+151.247515231 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.755444 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ttzvm" podStartSLOduration=127.75543135 podStartE2EDuration="2m7.75543135s" podCreationTimestamp="2026-01-23 16:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:16.752775169 +0000 UTC m=+150.752967575" watchObservedRunningTime="2026-01-23 16:55:16.75543135 +0000 UTC m=+150.755623756" Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.756614 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-qwjhc" podStartSLOduration=128.75660965 podStartE2EDuration="2m8.75660965s" podCreationTimestamp="2026-01-23 16:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:16.722466032 +0000 UTC m=+150.722658438" watchObservedRunningTime="2026-01-23 16:55:16.75660965 +0000 UTC m=+150.756802056" Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.854997 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:16 crc kubenswrapper[4606]: E0123 16:55:16.855108 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-23 16:55:17.355094137 +0000 UTC m=+151.355286543 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.855318 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:16 crc kubenswrapper[4606]: E0123 16:55:16.855630 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:17.355622486 +0000 UTC m=+151.355814892 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.956419 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:16 crc kubenswrapper[4606]: E0123 16:55:16.957273 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:17.457256 +0000 UTC m=+151.457448416 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:16 crc kubenswrapper[4606]: I0123 16:55:16.976749 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psvzs" Jan 23 16:55:17 crc kubenswrapper[4606]: I0123 16:55:17.058952 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:17 crc kubenswrapper[4606]: E0123 16:55:17.059421 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:17.559408003 +0000 UTC m=+151.559600409 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:17 crc kubenswrapper[4606]: I0123 16:55:17.160493 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hms7g" event={"ID":"be1d04b8-395a-4c61-9d3f-d6caf82a8a20","Type":"ContainerStarted","Data":"e843d6f5dc73df21c419d57bc526cc6279f4ac6c147c1b9daef6e1f87ec0100d"} Jan 23 16:55:17 crc kubenswrapper[4606]: I0123 16:55:17.161301 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-hms7g" Jan 23 16:55:17 crc kubenswrapper[4606]: I0123 16:55:17.161755 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:17 crc kubenswrapper[4606]: E0123 16:55:17.162094 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:17.662078873 +0000 UTC m=+151.662271279 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:17 crc kubenswrapper[4606]: I0123 16:55:17.177966 4606 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fzn5r container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Jan 23 16:55:17 crc kubenswrapper[4606]: I0123 16:55:17.178009 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fzn5r" podUID="8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" Jan 23 16:55:17 crc kubenswrapper[4606]: I0123 16:55:17.282052 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6ms6" Jan 23 16:55:17 crc kubenswrapper[4606]: I0123 16:55:17.283845 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:17 crc kubenswrapper[4606]: E0123 16:55:17.284262 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:17.784248224 +0000 UTC m=+151.784440630 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:17 crc kubenswrapper[4606]: I0123 16:55:17.365323 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-hms7g" podStartSLOduration=11.365306172 podStartE2EDuration="11.365306172s" podCreationTimestamp="2026-01-23 16:55:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:17.275657183 +0000 UTC m=+151.275849599" watchObservedRunningTime="2026-01-23 16:55:17.365306172 +0000 UTC m=+151.365498568" Jan 23 16:55:17 crc kubenswrapper[4606]: I0123 16:55:17.385611 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:17 crc kubenswrapper[4606]: E0123 16:55:17.387302 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:17.887280956 +0000 UTC m=+151.887473362 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:17 crc kubenswrapper[4606]: I0123 16:55:17.497865 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:17 crc kubenswrapper[4606]: E0123 16:55:17.498338 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:17.998322121 +0000 UTC m=+151.998514527 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:17 crc kubenswrapper[4606]: I0123 16:55:17.608123 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:17 crc kubenswrapper[4606]: E0123 16:55:17.608424 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:18.108407561 +0000 UTC m=+152.108599967 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:17 crc kubenswrapper[4606]: I0123 16:55:17.691440 4606 patch_prober.go:28] interesting pod/router-default-5444994796-vrcsb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 16:55:17 crc kubenswrapper[4606]: [-]has-synced failed: reason withheld Jan 23 16:55:17 crc kubenswrapper[4606]: [+]process-running ok Jan 23 16:55:17 crc kubenswrapper[4606]: healthz check failed Jan 23 16:55:17 crc kubenswrapper[4606]: I0123 16:55:17.691502 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vrcsb" podUID="98245824-432e-48b3-8bd1-a8f7cc2e42d2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 16:55:17 crc kubenswrapper[4606]: I0123 16:55:17.709440 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:17 crc kubenswrapper[4606]: E0123 16:55:17.709774 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:18.209759067 +0000 UTC m=+152.209951473 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:17 crc kubenswrapper[4606]: I0123 16:55:17.809893 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:17 crc kubenswrapper[4606]: E0123 16:55:17.810118 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:18.310103728 +0000 UTC m=+152.310296134 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:17 crc kubenswrapper[4606]: I0123 16:55:17.910898 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:17 crc kubenswrapper[4606]: E0123 16:55:17.911321 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:18.411306219 +0000 UTC m=+152.411498625 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:18 crc kubenswrapper[4606]: I0123 16:55:18.013534 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:18 crc kubenswrapper[4606]: E0123 16:55:18.013765 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:18.513750541 +0000 UTC m=+152.513942947 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:18 crc kubenswrapper[4606]: I0123 16:55:18.167243 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:18 crc kubenswrapper[4606]: E0123 16:55:18.167649 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:18.667636247 +0000 UTC m=+152.667828653 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:18 crc kubenswrapper[4606]: I0123 16:55:18.220406 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-n5jq4" event={"ID":"d78dc70d-59e8-4235-b345-f523b414fb31","Type":"ContainerStarted","Data":"6000ae8977c6016822b90d87af540ffceab1c4ba8f26aae2b502bdb8f4e5f32b"} Jan 23 16:55:18 crc kubenswrapper[4606]: I0123 16:55:18.223181 4606 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fzn5r container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Jan 23 16:55:18 crc kubenswrapper[4606]: I0123 16:55:18.223254 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fzn5r" podUID="8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" Jan 23 16:55:18 crc kubenswrapper[4606]: I0123 16:55:18.268137 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:18 crc kubenswrapper[4606]: E0123 16:55:18.269008 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:18.768986532 +0000 UTC m=+152.769178938 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:18 crc kubenswrapper[4606]: I0123 16:55:18.371649 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:18 crc kubenswrapper[4606]: E0123 16:55:18.372002 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-23 16:55:18.871986144 +0000 UTC m=+152.872178550 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:18 crc kubenswrapper[4606]: I0123 16:55:18.472644 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:18 crc kubenswrapper[4606]: E0123 16:55:18.472805 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:18.97278233 +0000 UTC m=+152.972974736 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:18 crc kubenswrapper[4606]: I0123 16:55:18.473204 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:18 crc kubenswrapper[4606]: E0123 16:55:18.473599 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:18.973567556 +0000 UTC m=+152.973760032 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:18 crc kubenswrapper[4606]: W0123 16:55:18.477502 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-6617abdb3d7d79ec6307b8233863f7db0a5a565739e41af099d5114eeac8deb5 WatchSource:0}: Error finding container 6617abdb3d7d79ec6307b8233863f7db0a5a565739e41af099d5114eeac8deb5: Status 404 returned error can't find the container with id 6617abdb3d7d79ec6307b8233863f7db0a5a565739e41af099d5114eeac8deb5 Jan 23 16:55:18 crc kubenswrapper[4606]: W0123 16:55:18.478022 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-71e4433b8a60c2d76b10f1e7963c3c17423bcdfbd0e9d6ae37a8b24729ed979e WatchSource:0}: Error finding container 71e4433b8a60c2d76b10f1e7963c3c17423bcdfbd0e9d6ae37a8b24729ed979e: Status 404 returned error can't find the container with id 71e4433b8a60c2d76b10f1e7963c3c17423bcdfbd0e9d6ae37a8b24729ed979e Jan 23 16:55:18 crc kubenswrapper[4606]: I0123 16:55:18.575133 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:18 crc kubenswrapper[4606]: E0123 16:55:18.575390 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:19.075357367 +0000 UTC m=+153.075549773 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:18 crc kubenswrapper[4606]: I0123 16:55:18.575699 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:18 crc kubenswrapper[4606]: E0123 16:55:18.576034 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-23 16:55:19.076021819 +0000 UTC m=+153.076214225 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:18 crc kubenswrapper[4606]: I0123 16:55:18.758696 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-wlpwz" Jan 23 16:55:18 crc kubenswrapper[4606]: I0123 16:55:18.758739 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-wlpwz" Jan 23 16:55:18 crc kubenswrapper[4606]: I0123 16:55:18.759730 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:18 crc kubenswrapper[4606]: E0123 16:55:18.760036 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:19.260023746 +0000 UTC m=+153.260216152 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:18 crc kubenswrapper[4606]: I0123 16:55:18.769305 4606 patch_prober.go:28] interesting pod/router-default-5444994796-vrcsb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 16:55:18 crc kubenswrapper[4606]: [-]has-synced failed: reason withheld Jan 23 16:55:18 crc kubenswrapper[4606]: [+]process-running ok Jan 23 16:55:18 crc kubenswrapper[4606]: healthz check failed Jan 23 16:55:18 crc kubenswrapper[4606]: I0123 16:55:18.769356 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vrcsb" podUID="98245824-432e-48b3-8bd1-a8f7cc2e42d2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 16:55:18 crc kubenswrapper[4606]: I0123 16:55:18.775730 4606 patch_prober.go:28] interesting pod/console-f9d7485db-wlpwz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Jan 23 16:55:18 crc kubenswrapper[4606]: I0123 16:55:18.775785 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-wlpwz" podUID="767b5dce-2e44-401a-b7af-3cb97ff48f69" 
containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Jan 23 16:55:18 crc kubenswrapper[4606]: I0123 16:55:18.907725 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:18 crc kubenswrapper[4606]: E0123 16:55:18.909199 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:19.409186432 +0000 UTC m=+153.409378838 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:19 crc kubenswrapper[4606]: E0123 16:55:19.008704 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:19.508687214 +0000 UTC m=+153.508879620 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.008728 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.008992 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:19 crc kubenswrapper[4606]: E0123 16:55:19.009260 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:19.509252964 +0000 UTC m=+153.509445370 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.109342 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ks7wv"] Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.109767 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:19 crc kubenswrapper[4606]: E0123 16:55:19.110020 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:19.610005929 +0000 UTC m=+153.610198335 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.110517 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ks7wv" Jan 23 16:55:19 crc kubenswrapper[4606]: W0123 16:55:19.154260 4606 reflector.go:561] object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g": failed to list *v1.Secret: secrets "certified-operators-dockercfg-4rs5g" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Jan 23 16:55:19 crc kubenswrapper[4606]: E0123 16:55:19.154312 4606 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-4rs5g\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"certified-operators-dockercfg-4rs5g\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.157545 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gvbpj"] Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.159572 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gvbpj" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.165928 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.177800 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ks7wv"] Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.211568 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwxbh\" (UniqueName: \"kubernetes.io/projected/83216fef-4300-422a-93d0-e43bbfd0126c-kube-api-access-wwxbh\") pod \"certified-operators-ks7wv\" (UID: \"83216fef-4300-422a-93d0-e43bbfd0126c\") " pod="openshift-marketplace/certified-operators-ks7wv" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.211655 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83216fef-4300-422a-93d0-e43bbfd0126c-catalog-content\") pod \"certified-operators-ks7wv\" (UID: \"83216fef-4300-422a-93d0-e43bbfd0126c\") " pod="openshift-marketplace/certified-operators-ks7wv" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.211711 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.211746 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83216fef-4300-422a-93d0-e43bbfd0126c-utilities\") pod \"certified-operators-ks7wv\" (UID: \"83216fef-4300-422a-93d0-e43bbfd0126c\") " pod="openshift-marketplace/certified-operators-ks7wv" Jan 23 16:55:19 crc kubenswrapper[4606]: E0123 16:55:19.212034 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:19.712020387 +0000 UTC m=+153.712212793 (durationBeforeRetry 500ms). 
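The reflector warning above is the node authorizer at work: a kubelet may only read secrets referenced by pods already bound to its node, and at this instant the API server has not yet recorded that binding for certified-operators-ks7wv, hence "no relationship found between node 'crc' and this object". One entry later the cache populates for the community-operators secret once its pod is bound. A hedged client-go sketch of the per-secret fetch (illustrative; the real kubelet uses a single-object watch rather than this plain GET):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // the kubelet authenticates as system:node:crc
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// With the node authorizer, this GET succeeds only once a pod that
	// references the secret is bound to this node; before that it is
	// forbidden, like the list/watch in the log above.
	s, err := cs.CoreV1().Secrets("openshift-marketplace").
		Get(context.TODO(), "certified-operators-dockercfg-4rs5g", metav1.GetOptions{})
	if err != nil {
		fmt.Println("forbidden until the pod/node relationship exists:", err)
		return
	}
	fmt.Println("fetched secret", s.Name)
}
```

Run with node credentials before the binding exists, this reproduces the Forbidden error; afterwards it returns the secret, matching the "Caches populated" entry at 16:55:20.152.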
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.214394 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gvbpj"] Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.285965 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"78e3a6b9296f6b3db8cd7b477745b98b8f74d565274d5e4284b9d5ef994108e8"} Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.286003 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"71e4433b8a60c2d76b10f1e7963c3c17423bcdfbd0e9d6ae37a8b24729ed979e"} Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.304089 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-n5jq4" event={"ID":"d78dc70d-59e8-4235-b345-f523b414fb31","Type":"ContainerStarted","Data":"f616de04c569325a444b1b9c2607b7aecdc3507528a7839cffe8ace6ed371214"} Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.312580 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.312888 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwxbh\" (UniqueName: \"kubernetes.io/projected/83216fef-4300-422a-93d0-e43bbfd0126c-kube-api-access-wwxbh\") pod \"certified-operators-ks7wv\" (UID: \"83216fef-4300-422a-93d0-e43bbfd0126c\") " pod="openshift-marketplace/certified-operators-ks7wv" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.312930 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/395d94ec-ce71-4b6a-b503-53501da7cf17-utilities\") pod \"community-operators-gvbpj\" (UID: \"395d94ec-ce71-4b6a-b503-53501da7cf17\") " pod="openshift-marketplace/community-operators-gvbpj" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.312996 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83216fef-4300-422a-93d0-e43bbfd0126c-catalog-content\") pod \"certified-operators-ks7wv\" (UID: \"83216fef-4300-422a-93d0-e43bbfd0126c\") " pod="openshift-marketplace/certified-operators-ks7wv" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.313030 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9fd4\" (UniqueName: \"kubernetes.io/projected/395d94ec-ce71-4b6a-b503-53501da7cf17-kube-api-access-h9fd4\") pod 
\"community-operators-gvbpj\" (UID: \"395d94ec-ce71-4b6a-b503-53501da7cf17\") " pod="openshift-marketplace/community-operators-gvbpj" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.313090 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/395d94ec-ce71-4b6a-b503-53501da7cf17-catalog-content\") pod \"community-operators-gvbpj\" (UID: \"395d94ec-ce71-4b6a-b503-53501da7cf17\") " pod="openshift-marketplace/community-operators-gvbpj" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.313126 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83216fef-4300-422a-93d0-e43bbfd0126c-utilities\") pod \"certified-operators-ks7wv\" (UID: \"83216fef-4300-422a-93d0-e43bbfd0126c\") " pod="openshift-marketplace/certified-operators-ks7wv" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.313632 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83216fef-4300-422a-93d0-e43bbfd0126c-utilities\") pod \"certified-operators-ks7wv\" (UID: \"83216fef-4300-422a-93d0-e43bbfd0126c\") " pod="openshift-marketplace/certified-operators-ks7wv" Jan 23 16:55:19 crc kubenswrapper[4606]: E0123 16:55:19.313785 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:19.813755495 +0000 UTC m=+153.813947901 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.313900 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83216fef-4300-422a-93d0-e43bbfd0126c-catalog-content\") pod \"certified-operators-ks7wv\" (UID: \"83216fef-4300-422a-93d0-e43bbfd0126c\") " pod="openshift-marketplace/certified-operators-ks7wv" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.328344 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"de64fe386008e07bbe12409ae7453e7c068cd9c733e7aa67758751f995c2e168"} Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.328405 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"6617abdb3d7d79ec6307b8233863f7db0a5a565739e41af099d5114eeac8deb5"} Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.329131 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.338403 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"5074a955e5a836cb92cc102deda9858229589e641e8e98c1a09fdb4edb6a36f3"} Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.338443 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ce61f5c19f3eb57262f1444be9a43bce69127502f9562c53b4a677ed82e4119f"} Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.411273 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wsc4g"] Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.412154 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wsc4g" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.416020 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/395d94ec-ce71-4b6a-b503-53501da7cf17-utilities\") pod \"community-operators-gvbpj\" (UID: \"395d94ec-ce71-4b6a-b503-53501da7cf17\") " pod="openshift-marketplace/community-operators-gvbpj" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.416091 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9fd4\" (UniqueName: \"kubernetes.io/projected/395d94ec-ce71-4b6a-b503-53501da7cf17-kube-api-access-h9fd4\") pod \"community-operators-gvbpj\" (UID: \"395d94ec-ce71-4b6a-b503-53501da7cf17\") " pod="openshift-marketplace/community-operators-gvbpj" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.416168 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.416208 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/395d94ec-ce71-4b6a-b503-53501da7cf17-catalog-content\") pod \"community-operators-gvbpj\" (UID: \"395d94ec-ce71-4b6a-b503-53501da7cf17\") " pod="openshift-marketplace/community-operators-gvbpj" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.416468 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/395d94ec-ce71-4b6a-b503-53501da7cf17-utilities\") pod \"community-operators-gvbpj\" (UID: \"395d94ec-ce71-4b6a-b503-53501da7cf17\") " pod="openshift-marketplace/community-operators-gvbpj" Jan 23 16:55:19 crc kubenswrapper[4606]: E0123 16:55:19.416877 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:19.9168651 +0000 UTC m=+153.917057506 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.417227 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/395d94ec-ce71-4b6a-b503-53501da7cf17-catalog-content\") pod \"community-operators-gvbpj\" (UID: \"395d94ec-ce71-4b6a-b503-53501da7cf17\") " pod="openshift-marketplace/community-operators-gvbpj" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.417872 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwxbh\" (UniqueName: \"kubernetes.io/projected/83216fef-4300-422a-93d0-e43bbfd0126c-kube-api-access-wwxbh\") pod \"certified-operators-ks7wv\" (UID: \"83216fef-4300-422a-93d0-e43bbfd0126c\") " pod="openshift-marketplace/certified-operators-ks7wv" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.443777 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wsc4g"] Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.487288 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9fd4\" (UniqueName: \"kubernetes.io/projected/395d94ec-ce71-4b6a-b503-53501da7cf17-kube-api-access-h9fd4\") pod \"community-operators-gvbpj\" (UID: \"395d94ec-ce71-4b6a-b503-53501da7cf17\") " pod="openshift-marketplace/community-operators-gvbpj" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.511343 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s6twp"] Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.512365 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s6twp" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.517031 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.517336 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrhrc\" (UniqueName: \"kubernetes.io/projected/f9a9cfbb-677a-458b-bb08-8220e122ccac-kube-api-access-xrhrc\") pod \"community-operators-wsc4g\" (UID: \"f9a9cfbb-677a-458b-bb08-8220e122ccac\") " pod="openshift-marketplace/community-operators-wsc4g" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.517419 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9a9cfbb-677a-458b-bb08-8220e122ccac-utilities\") pod \"community-operators-wsc4g\" (UID: \"f9a9cfbb-677a-458b-bb08-8220e122ccac\") " pod="openshift-marketplace/community-operators-wsc4g" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.517490 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9a9cfbb-677a-458b-bb08-8220e122ccac-catalog-content\") pod \"community-operators-wsc4g\" (UID: \"f9a9cfbb-677a-458b-bb08-8220e122ccac\") " pod="openshift-marketplace/community-operators-wsc4g" Jan 23 16:55:19 crc kubenswrapper[4606]: E0123 16:55:19.517814 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:20.017792251 +0000 UTC m=+154.017984657 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.560822 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s6twp"] Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.618780 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/416c8a19-3575-4434-813b-184508a997b9-utilities\") pod \"certified-operators-s6twp\" (UID: \"416c8a19-3575-4434-813b-184508a997b9\") " pod="openshift-marketplace/certified-operators-s6twp" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.620078 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.620250 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/416c8a19-3575-4434-813b-184508a997b9-catalog-content\") pod \"certified-operators-s6twp\" (UID: \"416c8a19-3575-4434-813b-184508a997b9\") " pod="openshift-marketplace/certified-operators-s6twp" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.620440 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrhrc\" (UniqueName: \"kubernetes.io/projected/f9a9cfbb-677a-458b-bb08-8220e122ccac-kube-api-access-xrhrc\") pod \"community-operators-wsc4g\" (UID: \"f9a9cfbb-677a-458b-bb08-8220e122ccac\") " pod="openshift-marketplace/community-operators-wsc4g" Jan 23 16:55:19 crc kubenswrapper[4606]: E0123 16:55:19.620446 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:20.120430639 +0000 UTC m=+154.120623045 (durationBeforeRetry 500ms). 
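The interleaved VerifyControllerAttachedVolume / MountVolume / "MountVolume.SetUp succeeded" entries are the volume manager's reconciler diffing a desired state of world against an actual state of world on every pass: the empty-dir volumes (utilities, catalog-content) mount immediately, while the CSI-backed PVC keeps failing and simply stays in the desired set until the driver appears. A toy sketch of that reconcile shape (all types here are hypothetical):

```go
package main

import "fmt"

type volume struct{ name string }

// desired: volumes pods want mounted; actual: volumes currently mounted.
// Each pass mounts what is missing and unmounts what is orphaned,
// logging per-operation outcomes like the reconciler_common entries.
func reconcile(desired, actual map[string]volume, mount func(volume) error) {
	for name, v := range desired {
		if _, ok := actual[name]; ok {
			continue
		}
		fmt.Printf("operationExecutor.MountVolume started for volume %q\n", name)
		if err := mount(v); err != nil {
			fmt.Printf("MountVolume failed for %q: %v (stays desired, retried next pass)\n", name, err)
			continue
		}
		actual[name] = v
		fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", name)
	}
	for name := range actual {
		if _, ok := desired[name]; !ok {
			fmt.Printf("operationExecutor.UnmountVolume started for volume %q\n", name)
			delete(actual, name) // the real code calls TearDown first
		}
	}
}

func main() {
	desired := map[string]volume{
		"utilities":       {"utilities"},
		"catalog-content": {"catalog-content"},
		"pvc-657094db":    {"pvc-657094db"},
	}
	actual := map[string]volume{}
	mount := func(v volume) error {
		if v.name == "pvc-657094db" {
			return fmt.Errorf("driver not registered") // the CSI case above
		}
		return nil // empty-dir volumes succeed immediately
	}
	reconcile(desired, actual, mount)
}
```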
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.620517 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9a9cfbb-677a-458b-bb08-8220e122ccac-utilities\") pod \"community-operators-wsc4g\" (UID: \"f9a9cfbb-677a-458b-bb08-8220e122ccac\") " pod="openshift-marketplace/community-operators-wsc4g" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.620545 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9a9cfbb-677a-458b-bb08-8220e122ccac-catalog-content\") pod \"community-operators-wsc4g\" (UID: \"f9a9cfbb-677a-458b-bb08-8220e122ccac\") " pod="openshift-marketplace/community-operators-wsc4g" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.620564 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnlcp\" (UniqueName: \"kubernetes.io/projected/416c8a19-3575-4434-813b-184508a997b9-kube-api-access-rnlcp\") pod \"certified-operators-s6twp\" (UID: \"416c8a19-3575-4434-813b-184508a997b9\") " pod="openshift-marketplace/certified-operators-s6twp" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.621010 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9a9cfbb-677a-458b-bb08-8220e122ccac-utilities\") pod \"community-operators-wsc4g\" (UID: \"f9a9cfbb-677a-458b-bb08-8220e122ccac\") " pod="openshift-marketplace/community-operators-wsc4g" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.621052 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9a9cfbb-677a-458b-bb08-8220e122ccac-catalog-content\") pod \"community-operators-wsc4g\" (UID: \"f9a9cfbb-677a-458b-bb08-8220e122ccac\") " pod="openshift-marketplace/community-operators-wsc4g" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.687723 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-vrcsb" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.695770 4606 patch_prober.go:28] interesting pod/router-default-5444994796-vrcsb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 16:55:19 crc kubenswrapper[4606]: [-]has-synced failed: reason withheld Jan 23 16:55:19 crc kubenswrapper[4606]: [+]process-running ok Jan 23 16:55:19 crc kubenswrapper[4606]: healthz check failed Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.695831 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vrcsb" podUID="98245824-432e-48b3-8bd1-a8f7cc2e42d2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.699398 4606 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xrhrc\" (UniqueName: \"kubernetes.io/projected/f9a9cfbb-677a-458b-bb08-8220e122ccac-kube-api-access-xrhrc\") pod \"community-operators-wsc4g\" (UID: \"f9a9cfbb-677a-458b-bb08-8220e122ccac\") " pod="openshift-marketplace/community-operators-wsc4g" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.723296 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.723479 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/416c8a19-3575-4434-813b-184508a997b9-utilities\") pod \"certified-operators-s6twp\" (UID: \"416c8a19-3575-4434-813b-184508a997b9\") " pod="openshift-marketplace/certified-operators-s6twp" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.723554 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/416c8a19-3575-4434-813b-184508a997b9-catalog-content\") pod \"certified-operators-s6twp\" (UID: \"416c8a19-3575-4434-813b-184508a997b9\") " pod="openshift-marketplace/certified-operators-s6twp" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.723612 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnlcp\" (UniqueName: \"kubernetes.io/projected/416c8a19-3575-4434-813b-184508a997b9-kube-api-access-rnlcp\") pod \"certified-operators-s6twp\" (UID: \"416c8a19-3575-4434-813b-184508a997b9\") " pod="openshift-marketplace/certified-operators-s6twp" Jan 23 16:55:19 crc kubenswrapper[4606]: E0123 16:55:19.723947 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:20.223930908 +0000 UTC m=+154.224123314 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.724338 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/416c8a19-3575-4434-813b-184508a997b9-utilities\") pod \"certified-operators-s6twp\" (UID: \"416c8a19-3575-4434-813b-184508a997b9\") " pod="openshift-marketplace/certified-operators-s6twp" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.724496 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/416c8a19-3575-4434-813b-184508a997b9-catalog-content\") pod \"certified-operators-s6twp\" (UID: \"416c8a19-3575-4434-813b-184508a997b9\") " pod="openshift-marketplace/certified-operators-s6twp" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.749857 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wsc4g" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.786841 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gvbpj" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.830116 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:19 crc kubenswrapper[4606]: E0123 16:55:19.830392 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:20.330380426 +0000 UTC m=+154.330572822 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.851695 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnlcp\" (UniqueName: \"kubernetes.io/projected/416c8a19-3575-4434-813b-184508a997b9-kube-api-access-rnlcp\") pod \"certified-operators-s6twp\" (UID: \"416c8a19-3575-4434-813b-184508a997b9\") " pod="openshift-marketplace/certified-operators-s6twp" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.932432 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:19 crc kubenswrapper[4606]: E0123 16:55:19.932702 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:20.432659702 +0000 UTC m=+154.432852108 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.933203 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:19 crc kubenswrapper[4606]: E0123 16:55:19.933714 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:20.433702438 +0000 UTC m=+154.433894834 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.959411 4606 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fzn5r container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.959767 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fzn5r" podUID="8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.959472 4606 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fzn5r container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.960370 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-fzn5r" podUID="8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" Jan 23 16:55:19 crc kubenswrapper[4606]: I0123 16:55:19.978437 4606 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 23 16:55:20 crc kubenswrapper[4606]: I0123 16:55:20.035566 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:20 crc kubenswrapper[4606]: E0123 16:55:20.035760 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:20.535734127 +0000 UTC m=+154.535926523 (durationBeforeRetry 500ms). 
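The plugin_watcher entry at 16:55:19.978 is the turning point of this whole stretch: the hostpath provisioner has finally dropped its registration socket under /var/lib/kubelet/plugins_registry, and the kubelet's watcher queues it for registration, which will break the CSI retry loop a second later. A sketch of that directory watch using fsnotify (illustrative; the real watcher also replays sockets that already exist at startup and then runs a gRPC registration handshake over each one):

```go
package main

import (
	"log"
	"strings"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := w.Add("/var/lib/kubelet/plugins_registry"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-w.Events:
			// A driver drops a *-reg.sock file here to announce itself,
			// e.g. kubevirt.io.hostpath-provisioner-reg.sock above.
			if ev.Op&fsnotify.Create != 0 && strings.HasSuffix(ev.Name, "-reg.sock") {
				log.Printf("Adding socket path to desired state cache: %s", ev.Name)
			}
		case err := <-w.Errors:
			log.Println("watch error:", err)
		}
	}
}
```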
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:20 crc kubenswrapper[4606]: E0123 16:55:20.036869 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:20.536852515 +0000 UTC m=+154.537044931 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:20 crc kubenswrapper[4606]: I0123 16:55:20.037042 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:20 crc kubenswrapper[4606]: I0123 16:55:20.138348 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:20 crc kubenswrapper[4606]: E0123 16:55:20.138745 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:20.638716517 +0000 UTC m=+154.638908923 (durationBeforeRetry 500ms). 
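Each failed volume operation is parked behind a "No retries permitted until" deadline. The reconciler keeps passing over the volume roughly every 100ms, which is why the identical error reappears so often, but the gate only lets an actual retry through after the recorded time. A toy sketch of that gate (the `opGate` type is hypothetical; kubelet's real backoff can also grow for an operation that keeps failing, a detail not visible in this 500ms excerpt):

```go
package main

import (
	"fmt"
	"time"
)

// opGate refuses to re-run an operation before its recorded deadline,
// producing errors shaped like the nestedpendingoperations entries.
type opGate struct{ notBefore map[string]time.Time }

func (g *opGate) try(key string, backoff time.Duration, run func() error) error {
	if t, ok := g.notBefore[key]; ok && time.Now().Before(t) {
		return fmt.Errorf("operation for %q: no retries permitted until %s",
			key, t.Format(time.RFC3339Nano))
	}
	if err := run(); err != nil {
		g.notBefore[key] = time.Now().Add(backoff)
		return fmt.Errorf("operation for %q failed, retry after %s: %w", key, backoff, err)
	}
	delete(g.notBefore, key)
	return nil
}

func main() {
	g := &opGate{notBefore: map[string]time.Time{}}
	failing := func() error { return fmt.Errorf("driver not registered") }
	for i := 0; i < 3; i++ {
		fmt.Println(g.try("pvc-657094db", 500*time.Millisecond, failing))
		time.Sleep(100 * time.Millisecond) // reconciler pass interval
	}
}
```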
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:20 crc kubenswrapper[4606]: I0123 16:55:20.139138 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:20 crc kubenswrapper[4606]: E0123 16:55:20.139761 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:20.639747912 +0000 UTC m=+154.639940308 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:20 crc kubenswrapper[4606]: I0123 16:55:20.152277 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 23 16:55:20 crc kubenswrapper[4606]: I0123 16:55:20.155716 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ks7wv" Jan 23 16:55:20 crc kubenswrapper[4606]: I0123 16:55:20.157057 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s6twp" Jan 23 16:55:20 crc kubenswrapper[4606]: I0123 16:55:20.242229 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:20 crc kubenswrapper[4606]: E0123 16:55:20.242553 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:20.742525816 +0000 UTC m=+154.742718232 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:20 crc kubenswrapper[4606]: I0123 16:55:20.242721 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:20 crc kubenswrapper[4606]: E0123 16:55:20.243057 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:20.743045173 +0000 UTC m=+154.743237579 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:20 crc kubenswrapper[4606]: I0123 16:55:20.347095 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:20 crc kubenswrapper[4606]: E0123 16:55:20.347473 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:20.847456302 +0000 UTC m=+154.847648708 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:20 crc kubenswrapper[4606]: I0123 16:55:20.417663 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-n5jq4" event={"ID":"d78dc70d-59e8-4235-b345-f523b414fb31","Type":"ContainerStarted","Data":"c1fafd0132ab6226102669360a057b083e178ea9786818b01b7c38c22d1043e3"} Jan 23 16:55:20 crc kubenswrapper[4606]: I0123 16:55:20.449509 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:20 crc kubenswrapper[4606]: E0123 16:55:20.449817 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:20.949805082 +0000 UTC m=+154.949997488 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:20 crc kubenswrapper[4606]: I0123 16:55:20.550339 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:20 crc kubenswrapper[4606]: E0123 16:55:20.551028 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 16:55:21.051014122 +0000 UTC m=+155.051206528 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:20 crc kubenswrapper[4606]: I0123 16:55:20.651568 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:20 crc kubenswrapper[4606]: E0123 16:55:20.651943 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 16:55:21.151926132 +0000 UTC m=+155.152118608 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2bspd" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 16:55:20 crc kubenswrapper[4606]: I0123 16:55:20.658544 4606 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-23T16:55:19.978463645Z","Handler":null,"Name":""} Jan 23 16:55:20 crc kubenswrapper[4606]: I0123 16:55:20.692323 4606 patch_prober.go:28] interesting pod/router-default-5444994796-vrcsb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 16:55:20 crc kubenswrapper[4606]: [-]has-synced failed: reason withheld Jan 23 16:55:20 crc kubenswrapper[4606]: [+]process-running ok Jan 23 16:55:20 crc kubenswrapper[4606]: healthz check failed Jan 23 16:55:20 crc kubenswrapper[4606]: I0123 16:55:20.692655 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vrcsb" podUID="98245824-432e-48b3-8bd1-a8f7cc2e42d2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 16:55:20 crc kubenswrapper[4606]: I0123 16:55:20.717098 4606 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 23 16:55:20 crc kubenswrapper[4606]: I0123 16:55:20.717128 4606 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 23 16:55:20 crc kubenswrapper[4606]: I0123 16:55:20.753310 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 16:55:20 crc kubenswrapper[4606]: I0123 16:55:20.754271 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wsc4g"] Jan 23 16:55:20 crc kubenswrapper[4606]: W0123 16:55:20.777727 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9a9cfbb_677a_458b_bb08_8220e122ccac.slice/crio-72db072a074645acefe4bbfd843c5d7dee49a2f60cb19351106ab16295cc7230 WatchSource:0}: Error finding container 72db072a074645acefe4bbfd843c5d7dee49a2f60cb19351106ab16295cc7230: Status 404 returned error can't find the container with id 72db072a074645acefe4bbfd843c5d7dee49a2f60cb19351106ab16295cc7230 Jan 23 16:55:20 crc kubenswrapper[4606]: I0123 16:55:20.778175 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 23 16:55:20 crc kubenswrapper[4606]: I0123 16:55:20.859745 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kcsj7"] Jan 23 16:55:20 crc kubenswrapper[4606]: I0123 16:55:20.860789 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kcsj7" Jan 23 16:55:20 crc kubenswrapper[4606]: I0123 16:55:20.861686 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:20 crc kubenswrapper[4606]: I0123 16:55:20.868900 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 23 16:55:20 crc kubenswrapper[4606]: I0123 16:55:20.877327 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kcsj7"] Jan 23 16:55:20 crc kubenswrapper[4606]: I0123 16:55:20.904746 4606 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 23 16:55:20 crc kubenswrapper[4606]: I0123 16:55:20.904787 4606 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:20 crc kubenswrapper[4606]: I0123 16:55:20.966101 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac940a5b-420a-40f5-a0a2-fe76fb5f8400-catalog-content\") pod \"redhat-marketplace-kcsj7\" (UID: \"ac940a5b-420a-40f5-a0a2-fe76fb5f8400\") " pod="openshift-marketplace/redhat-marketplace-kcsj7" Jan 23 16:55:20 crc kubenswrapper[4606]: I0123 16:55:20.966151 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxbtb\" (UniqueName: \"kubernetes.io/projected/ac940a5b-420a-40f5-a0a2-fe76fb5f8400-kube-api-access-wxbtb\") pod \"redhat-marketplace-kcsj7\" (UID: \"ac940a5b-420a-40f5-a0a2-fe76fb5f8400\") " pod="openshift-marketplace/redhat-marketplace-kcsj7" Jan 23 16:55:20 crc kubenswrapper[4606]: I0123 16:55:20.966211 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac940a5b-420a-40f5-a0a2-fe76fb5f8400-utilities\") pod \"redhat-marketplace-kcsj7\" (UID: \"ac940a5b-420a-40f5-a0a2-fe76fb5f8400\") " pod="openshift-marketplace/redhat-marketplace-kcsj7" Jan 23 16:55:20 crc kubenswrapper[4606]: I0123 16:55:20.969952 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gvbpj"] Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.067061 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac940a5b-420a-40f5-a0a2-fe76fb5f8400-catalog-content\") pod \"redhat-marketplace-kcsj7\" (UID: \"ac940a5b-420a-40f5-a0a2-fe76fb5f8400\") " pod="openshift-marketplace/redhat-marketplace-kcsj7" Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.067107 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxbtb\" (UniqueName: \"kubernetes.io/projected/ac940a5b-420a-40f5-a0a2-fe76fb5f8400-kube-api-access-wxbtb\") pod \"redhat-marketplace-kcsj7\" (UID: \"ac940a5b-420a-40f5-a0a2-fe76fb5f8400\") " pod="openshift-marketplace/redhat-marketplace-kcsj7" Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.067157 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac940a5b-420a-40f5-a0a2-fe76fb5f8400-utilities\") pod \"redhat-marketplace-kcsj7\" (UID: \"ac940a5b-420a-40f5-a0a2-fe76fb5f8400\") " pod="openshift-marketplace/redhat-marketplace-kcsj7" Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.067563 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac940a5b-420a-40f5-a0a2-fe76fb5f8400-utilities\") pod \"redhat-marketplace-kcsj7\" (UID: \"ac940a5b-420a-40f5-a0a2-fe76fb5f8400\") " 
pod="openshift-marketplace/redhat-marketplace-kcsj7" Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.067671 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac940a5b-420a-40f5-a0a2-fe76fb5f8400-catalog-content\") pod \"redhat-marketplace-kcsj7\" (UID: \"ac940a5b-420a-40f5-a0a2-fe76fb5f8400\") " pod="openshift-marketplace/redhat-marketplace-kcsj7" Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.086803 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxbtb\" (UniqueName: \"kubernetes.io/projected/ac940a5b-420a-40f5-a0a2-fe76fb5f8400-kube-api-access-wxbtb\") pod \"redhat-marketplace-kcsj7\" (UID: \"ac940a5b-420a-40f5-a0a2-fe76fb5f8400\") " pod="openshift-marketplace/redhat-marketplace-kcsj7" Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.101872 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ks7wv"] Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.123253 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s6twp"] Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.228907 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kcsj7" Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.252047 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5pmv8"] Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.252981 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5pmv8" Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.274941 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5pmv8"] Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.376030 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc241014-a654-4aa4-a5e0-5b83e54224fa-catalog-content\") pod \"redhat-marketplace-5pmv8\" (UID: \"bc241014-a654-4aa4-a5e0-5b83e54224fa\") " pod="openshift-marketplace/redhat-marketplace-5pmv8" Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.376083 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd9kr\" (UniqueName: \"kubernetes.io/projected/bc241014-a654-4aa4-a5e0-5b83e54224fa-kube-api-access-zd9kr\") pod \"redhat-marketplace-5pmv8\" (UID: \"bc241014-a654-4aa4-a5e0-5b83e54224fa\") " pod="openshift-marketplace/redhat-marketplace-5pmv8" Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.376109 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc241014-a654-4aa4-a5e0-5b83e54224fa-utilities\") pod \"redhat-marketplace-5pmv8\" (UID: \"bc241014-a654-4aa4-a5e0-5b83e54224fa\") " pod="openshift-marketplace/redhat-marketplace-5pmv8" Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.460693 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2bspd\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.462372 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wsc4g" event={"ID":"f9a9cfbb-677a-458b-bb08-8220e122ccac","Type":"ContainerStarted","Data":"4238e2c3b81be263ff24c1d0c04bad780226290519b761f94b58f7e00fb23105"} Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.479978 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wsc4g" event={"ID":"f9a9cfbb-677a-458b-bb08-8220e122ccac","Type":"ContainerStarted","Data":"72db072a074645acefe4bbfd843c5d7dee49a2f60cb19351106ab16295cc7230"} Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.480013 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ks7wv" event={"ID":"83216fef-4300-422a-93d0-e43bbfd0126c","Type":"ContainerStarted","Data":"32410be3a66b1dd531f31cdc42216d7272097bdaa136dbeeea273cd2febc4101"} Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.480028 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gvbpj" event={"ID":"395d94ec-ce71-4b6a-b503-53501da7cf17","Type":"ContainerStarted","Data":"45514ea185d4974cd826e316b02a49166f03396b48730c67efd3cacdb63f765d"} Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.480055 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s6twp" event={"ID":"416c8a19-3575-4434-813b-184508a997b9","Type":"ContainerStarted","Data":"33cd36233d48fa9ca012639ec3257f28e295503dfd6cd56222e01421350bc62f"} Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.483108 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc241014-a654-4aa4-a5e0-5b83e54224fa-catalog-content\") pod \"redhat-marketplace-5pmv8\" (UID: \"bc241014-a654-4aa4-a5e0-5b83e54224fa\") " pod="openshift-marketplace/redhat-marketplace-5pmv8" Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.483162 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd9kr\" (UniqueName: \"kubernetes.io/projected/bc241014-a654-4aa4-a5e0-5b83e54224fa-kube-api-access-zd9kr\") pod \"redhat-marketplace-5pmv8\" (UID: \"bc241014-a654-4aa4-a5e0-5b83e54224fa\") " pod="openshift-marketplace/redhat-marketplace-5pmv8" Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.483190 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc241014-a654-4aa4-a5e0-5b83e54224fa-utilities\") pod \"redhat-marketplace-5pmv8\" (UID: \"bc241014-a654-4aa4-a5e0-5b83e54224fa\") " pod="openshift-marketplace/redhat-marketplace-5pmv8" Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.483831 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc241014-a654-4aa4-a5e0-5b83e54224fa-utilities\") pod \"redhat-marketplace-5pmv8\" (UID: \"bc241014-a654-4aa4-a5e0-5b83e54224fa\") " pod="openshift-marketplace/redhat-marketplace-5pmv8" Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.484084 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc241014-a654-4aa4-a5e0-5b83e54224fa-catalog-content\") pod \"redhat-marketplace-5pmv8\" (UID: 
\"bc241014-a654-4aa4-a5e0-5b83e54224fa\") " pod="openshift-marketplace/redhat-marketplace-5pmv8" Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.485074 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.495796 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.500754 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.501809 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.506947 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.507418 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.538162 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd9kr\" (UniqueName: \"kubernetes.io/projected/bc241014-a654-4aa4-a5e0-5b83e54224fa-kube-api-access-zd9kr\") pod \"redhat-marketplace-5pmv8\" (UID: \"bc241014-a654-4aa4-a5e0-5b83e54224fa\") " pod="openshift-marketplace/redhat-marketplace-5pmv8" Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.566836 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5pmv8" Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.594065 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9afa2c13-613e-4885-8721-169fc3458fcf-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"9afa2c13-613e-4885-8721-169fc3458fcf\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.594117 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9afa2c13-613e-4885-8721-169fc3458fcf-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"9afa2c13-613e-4885-8721-169fc3458fcf\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.623182 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-n5jq4" podStartSLOduration=15.623160433 podStartE2EDuration="15.623160433s" podCreationTimestamp="2026-01-23 16:55:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:21.602158051 +0000 UTC m=+155.602350467" watchObservedRunningTime="2026-01-23 16:55:21.623160433 +0000 UTC m=+155.623352849" Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.695076 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9afa2c13-613e-4885-8721-169fc3458fcf-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"9afa2c13-613e-4885-8721-169fc3458fcf\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.695110 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9afa2c13-613e-4885-8721-169fc3458fcf-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"9afa2c13-613e-4885-8721-169fc3458fcf\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.695181 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9afa2c13-613e-4885-8721-169fc3458fcf-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"9afa2c13-613e-4885-8721-169fc3458fcf\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.698941 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-psdkv" Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.702553 4606 patch_prober.go:28] interesting pod/router-default-5444994796-vrcsb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 16:55:21 crc kubenswrapper[4606]: [-]has-synced failed: reason withheld Jan 23 16:55:21 crc kubenswrapper[4606]: [+]process-running ok Jan 23 16:55:21 crc kubenswrapper[4606]: healthz check failed Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.702632 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vrcsb" podUID="98245824-432e-48b3-8bd1-a8f7cc2e42d2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.714824 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-psdkv" Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.733433 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9afa2c13-613e-4885-8721-169fc3458fcf-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"9afa2c13-613e-4885-8721-169fc3458fcf\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.818009 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kcsj7"] Jan 23 16:55:21 crc kubenswrapper[4606]: W0123 16:55:21.835979 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac940a5b_420a_40f5_a0a2_fe76fb5f8400.slice/crio-7543e243e56850490ceceaa057f65eae81dac07607b95d0a4fd47b675cff4a4e WatchSource:0}: Error finding container 7543e243e56850490ceceaa057f65eae81dac07607b95d0a4fd47b675cff4a4e: Status 404 returned error can't find the container with id 7543e243e56850490ceceaa057f65eae81dac07607b95d0a4fd47b675cff4a4e Jan 23 16:55:21 crc kubenswrapper[4606]: I0123 16:55:21.955618 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.257377 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sv2zs"] Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.261680 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sv2zs" Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.265617 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.284313 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sv2zs"] Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.316864 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5pmv8"] Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.388919 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.404609 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2bspd"] Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.411711 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fcc3539-bb51-4397-9b20-975529f7d4ed-utilities\") pod \"redhat-operators-sv2zs\" (UID: \"2fcc3539-bb51-4397-9b20-975529f7d4ed\") " pod="openshift-marketplace/redhat-operators-sv2zs" Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.411799 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq678\" (UniqueName: \"kubernetes.io/projected/2fcc3539-bb51-4397-9b20-975529f7d4ed-kube-api-access-xq678\") pod \"redhat-operators-sv2zs\" (UID: \"2fcc3539-bb51-4397-9b20-975529f7d4ed\") " pod="openshift-marketplace/redhat-operators-sv2zs" Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.411844 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fcc3539-bb51-4397-9b20-975529f7d4ed-catalog-content\") pod \"redhat-operators-sv2zs\" (UID: \"2fcc3539-bb51-4397-9b20-975529f7d4ed\") " pod="openshift-marketplace/redhat-operators-sv2zs" Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.440602 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.478601 4606 generic.go:334] "Generic (PLEG): container finished" podID="395d94ec-ce71-4b6a-b503-53501da7cf17" containerID="d6f6fe87a024f91493889ad34075f4a124631c4a7499800a15d3f3221b21a304" exitCode=0 Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.478655 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gvbpj" event={"ID":"395d94ec-ce71-4b6a-b503-53501da7cf17","Type":"ContainerDied","Data":"d6f6fe87a024f91493889ad34075f4a124631c4a7499800a15d3f3221b21a304"} Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.482107 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" event={"ID":"5ec26ef9-e655-476f-ac27-f27942209b6f","Type":"ContainerStarted","Data":"eea67d66f1ad1e209b6c42ae39998a7c568936e62c91747e1396a27be04ae850"} Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.485820 4606 generic.go:334] "Generic (PLEG): container finished" podID="416c8a19-3575-4434-813b-184508a997b9" containerID="cfd35136f0b9bac1c851581037ddf07cc55349ec8bbda034d4fbe3b31ef477a3" exitCode=0 Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.485858 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s6twp" event={"ID":"416c8a19-3575-4434-813b-184508a997b9","Type":"ContainerDied","Data":"cfd35136f0b9bac1c851581037ddf07cc55349ec8bbda034d4fbe3b31ef477a3"} Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.486800 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kcsj7" event={"ID":"ac940a5b-420a-40f5-a0a2-fe76fb5f8400","Type":"ContainerStarted","Data":"7543e243e56850490ceceaa057f65eae81dac07607b95d0a4fd47b675cff4a4e"} Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.488100 4606 generic.go:334] "Generic (PLEG): container finished" podID="f9a9cfbb-677a-458b-bb08-8220e122ccac" containerID="4238e2c3b81be263ff24c1d0c04bad780226290519b761f94b58f7e00fb23105" exitCode=0 Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.488131 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wsc4g" event={"ID":"f9a9cfbb-677a-458b-bb08-8220e122ccac","Type":"ContainerDied","Data":"4238e2c3b81be263ff24c1d0c04bad780226290519b761f94b58f7e00fb23105"} Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.489601 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5pmv8" event={"ID":"bc241014-a654-4aa4-a5e0-5b83e54224fa","Type":"ContainerStarted","Data":"ce2822cad477c722fb69063438ff613e9e50261a8696a365f06ed0f5ebff7d27"} Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.491056 4606 generic.go:334] "Generic (PLEG): container finished" podID="83216fef-4300-422a-93d0-e43bbfd0126c" containerID="194b171b094850031560ac0c9c8b85fe8fc2db7f470f86ac942c8bac3702c8be" exitCode=0 Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.491090 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ks7wv" event={"ID":"83216fef-4300-422a-93d0-e43bbfd0126c","Type":"ContainerDied","Data":"194b171b094850031560ac0c9c8b85fe8fc2db7f470f86ac942c8bac3702c8be"} Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.516370 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq678\" (UniqueName: \"kubernetes.io/projected/2fcc3539-bb51-4397-9b20-975529f7d4ed-kube-api-access-xq678\") pod \"redhat-operators-sv2zs\" (UID: \"2fcc3539-bb51-4397-9b20-975529f7d4ed\") " pod="openshift-marketplace/redhat-operators-sv2zs" Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.516480 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fcc3539-bb51-4397-9b20-975529f7d4ed-catalog-content\") pod \"redhat-operators-sv2zs\" (UID: \"2fcc3539-bb51-4397-9b20-975529f7d4ed\") " pod="openshift-marketplace/redhat-operators-sv2zs" Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.516570 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/2fcc3539-bb51-4397-9b20-975529f7d4ed-utilities\") pod \"redhat-operators-sv2zs\" (UID: \"2fcc3539-bb51-4397-9b20-975529f7d4ed\") " pod="openshift-marketplace/redhat-operators-sv2zs" Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.516977 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fcc3539-bb51-4397-9b20-975529f7d4ed-catalog-content\") pod \"redhat-operators-sv2zs\" (UID: \"2fcc3539-bb51-4397-9b20-975529f7d4ed\") " pod="openshift-marketplace/redhat-operators-sv2zs" Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.517000 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fcc3539-bb51-4397-9b20-975529f7d4ed-utilities\") pod \"redhat-operators-sv2zs\" (UID: \"2fcc3539-bb51-4397-9b20-975529f7d4ed\") " pod="openshift-marketplace/redhat-operators-sv2zs" Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.541271 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq678\" (UniqueName: \"kubernetes.io/projected/2fcc3539-bb51-4397-9b20-975529f7d4ed-kube-api-access-xq678\") pod \"redhat-operators-sv2zs\" (UID: \"2fcc3539-bb51-4397-9b20-975529f7d4ed\") " pod="openshift-marketplace/redhat-operators-sv2zs" Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.545561 4606 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.551087 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.551922 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.556101 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.556294 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.569657 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.589633 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sv2zs" Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.667829 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qm92l"] Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.673763 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qm92l" Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.675232 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qm92l"] Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.700044 4606 patch_prober.go:28] interesting pod/router-default-5444994796-vrcsb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 16:55:22 crc kubenswrapper[4606]: [+]has-synced ok Jan 23 16:55:22 crc kubenswrapper[4606]: [+]process-running ok Jan 23 16:55:22 crc kubenswrapper[4606]: healthz check failed Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.700086 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vrcsb" podUID="98245824-432e-48b3-8bd1-a8f7cc2e42d2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.733011 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f08ada3f-4022-40b3-8d07-4eb33a970ae8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f08ada3f-4022-40b3-8d07-4eb33a970ae8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.733255 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f08ada3f-4022-40b3-8d07-4eb33a970ae8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f08ada3f-4022-40b3-8d07-4eb33a970ae8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.834216 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f08ada3f-4022-40b3-8d07-4eb33a970ae8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f08ada3f-4022-40b3-8d07-4eb33a970ae8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.834473 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8tbf\" (UniqueName: \"kubernetes.io/projected/b09d16f2-87a5-4163-96f6-2680cb605ce5-kube-api-access-v8tbf\") pod \"redhat-operators-qm92l\" (UID: \"b09d16f2-87a5-4163-96f6-2680cb605ce5\") " pod="openshift-marketplace/redhat-operators-qm92l" Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.834530 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f08ada3f-4022-40b3-8d07-4eb33a970ae8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f08ada3f-4022-40b3-8d07-4eb33a970ae8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.834547 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b09d16f2-87a5-4163-96f6-2680cb605ce5-utilities\") pod \"redhat-operators-qm92l\" (UID: \"b09d16f2-87a5-4163-96f6-2680cb605ce5\") " pod="openshift-marketplace/redhat-operators-qm92l" Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 
16:55:22.834565 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b09d16f2-87a5-4163-96f6-2680cb605ce5-catalog-content\") pod \"redhat-operators-qm92l\" (UID: \"b09d16f2-87a5-4163-96f6-2680cb605ce5\") " pod="openshift-marketplace/redhat-operators-qm92l" Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.834877 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f08ada3f-4022-40b3-8d07-4eb33a970ae8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f08ada3f-4022-40b3-8d07-4eb33a970ae8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.855120 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f08ada3f-4022-40b3-8d07-4eb33a970ae8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f08ada3f-4022-40b3-8d07-4eb33a970ae8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.898346 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.935357 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b09d16f2-87a5-4163-96f6-2680cb605ce5-utilities\") pod \"redhat-operators-qm92l\" (UID: \"b09d16f2-87a5-4163-96f6-2680cb605ce5\") " pod="openshift-marketplace/redhat-operators-qm92l" Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.935396 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b09d16f2-87a5-4163-96f6-2680cb605ce5-catalog-content\") pod \"redhat-operators-qm92l\" (UID: \"b09d16f2-87a5-4163-96f6-2680cb605ce5\") " pod="openshift-marketplace/redhat-operators-qm92l" Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.935447 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8tbf\" (UniqueName: \"kubernetes.io/projected/b09d16f2-87a5-4163-96f6-2680cb605ce5-kube-api-access-v8tbf\") pod \"redhat-operators-qm92l\" (UID: \"b09d16f2-87a5-4163-96f6-2680cb605ce5\") " pod="openshift-marketplace/redhat-operators-qm92l" Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.936108 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b09d16f2-87a5-4163-96f6-2680cb605ce5-catalog-content\") pod \"redhat-operators-qm92l\" (UID: \"b09d16f2-87a5-4163-96f6-2680cb605ce5\") " pod="openshift-marketplace/redhat-operators-qm92l" Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.936179 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b09d16f2-87a5-4163-96f6-2680cb605ce5-utilities\") pod \"redhat-operators-qm92l\" (UID: \"b09d16f2-87a5-4163-96f6-2680cb605ce5\") " pod="openshift-marketplace/redhat-operators-qm92l" Jan 23 16:55:22 crc kubenswrapper[4606]: I0123 16:55:22.956577 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8tbf\" (UniqueName: \"kubernetes.io/projected/b09d16f2-87a5-4163-96f6-2680cb605ce5-kube-api-access-v8tbf\") pod 
\"redhat-operators-qm92l\" (UID: \"b09d16f2-87a5-4163-96f6-2680cb605ce5\") " pod="openshift-marketplace/redhat-operators-qm92l" Jan 23 16:55:23 crc kubenswrapper[4606]: I0123 16:55:23.008476 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sv2zs"] Jan 23 16:55:23 crc kubenswrapper[4606]: W0123 16:55:23.024738 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fcc3539_bb51_4397_9b20_975529f7d4ed.slice/crio-4268118a968bd7b305fdc00d5a9ad3ccfc4882b4f1f61813215fbbae2084d15b WatchSource:0}: Error finding container 4268118a968bd7b305fdc00d5a9ad3ccfc4882b4f1f61813215fbbae2084d15b: Status 404 returned error can't find the container with id 4268118a968bd7b305fdc00d5a9ad3ccfc4882b4f1f61813215fbbae2084d15b Jan 23 16:55:23 crc kubenswrapper[4606]: I0123 16:55:23.079990 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qm92l" Jan 23 16:55:23 crc kubenswrapper[4606]: I0123 16:55:23.315666 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 23 16:55:23 crc kubenswrapper[4606]: I0123 16:55:23.503698 4606 generic.go:334] "Generic (PLEG): container finished" podID="bc241014-a654-4aa4-a5e0-5b83e54224fa" containerID="26e7771411cbf4fb06d2f9c64d9709e34c1d1856bca14b225d783e6b9de5b8f1" exitCode=0 Jan 23 16:55:23 crc kubenswrapper[4606]: I0123 16:55:23.503991 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5pmv8" event={"ID":"bc241014-a654-4aa4-a5e0-5b83e54224fa","Type":"ContainerDied","Data":"26e7771411cbf4fb06d2f9c64d9709e34c1d1856bca14b225d783e6b9de5b8f1"} Jan 23 16:55:23 crc kubenswrapper[4606]: I0123 16:55:23.508551 4606 generic.go:334] "Generic (PLEG): container finished" podID="559500a2-4399-42d0-832b-719429dd6fd9" containerID="6502ed4d3859f78d0d11a9d95677491906bf67dc59622bd993c57a31a2293cce" exitCode=0 Jan 23 16:55:23 crc kubenswrapper[4606]: I0123 16:55:23.508637 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486445-rsdm2" event={"ID":"559500a2-4399-42d0-832b-719429dd6fd9","Type":"ContainerDied","Data":"6502ed4d3859f78d0d11a9d95677491906bf67dc59622bd993c57a31a2293cce"} Jan 23 16:55:23 crc kubenswrapper[4606]: I0123 16:55:23.519497 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv2zs" event={"ID":"2fcc3539-bb51-4397-9b20-975529f7d4ed","Type":"ContainerStarted","Data":"4268118a968bd7b305fdc00d5a9ad3ccfc4882b4f1f61813215fbbae2084d15b"} Jan 23 16:55:23 crc kubenswrapper[4606]: I0123 16:55:23.545729 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qm92l"] Jan 23 16:55:23 crc kubenswrapper[4606]: I0123 16:55:23.552236 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" event={"ID":"5ec26ef9-e655-476f-ac27-f27942209b6f","Type":"ContainerStarted","Data":"c4618ff46b5e2639c43c1b846fce3451fe41c5e9ecbc6c6662eac8e575b4332f"} Jan 23 16:55:23 crc kubenswrapper[4606]: I0123 16:55:23.552980 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:23 crc kubenswrapper[4606]: I0123 16:55:23.560975 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"9afa2c13-613e-4885-8721-169fc3458fcf","Type":"ContainerStarted","Data":"98078954c508cd7f04679fb968126ba6c7bce7ef50474b855fc8faf0457497c8"} Jan 23 16:55:23 crc kubenswrapper[4606]: I0123 16:55:23.561028 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"9afa2c13-613e-4885-8721-169fc3458fcf","Type":"ContainerStarted","Data":"985686326a2747ee9d3d515d479f393475d0055587d43d1ecc14ba64c9755950"} Jan 23 16:55:23 crc kubenswrapper[4606]: I0123 16:55:23.583180 4606 generic.go:334] "Generic (PLEG): container finished" podID="ac940a5b-420a-40f5-a0a2-fe76fb5f8400" containerID="32163c37fa06b8dec0e21e808158dc33442d2b767306c3a96e63183dcc71547a" exitCode=0 Jan 23 16:55:23 crc kubenswrapper[4606]: I0123 16:55:23.583321 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kcsj7" event={"ID":"ac940a5b-420a-40f5-a0a2-fe76fb5f8400","Type":"ContainerDied","Data":"32163c37fa06b8dec0e21e808158dc33442d2b767306c3a96e63183dcc71547a"} Jan 23 16:55:23 crc kubenswrapper[4606]: I0123 16:55:23.585694 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f08ada3f-4022-40b3-8d07-4eb33a970ae8","Type":"ContainerStarted","Data":"23ef6641a2c1198710e6a3040f898e98b1d977951151fb668016183e87fe9251"} Jan 23 16:55:23 crc kubenswrapper[4606]: I0123 16:55:23.636937 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" podStartSLOduration=135.63691948 podStartE2EDuration="2m15.63691948s" podCreationTimestamp="2026-01-23 16:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:23.607554164 +0000 UTC m=+157.607746590" watchObservedRunningTime="2026-01-23 16:55:23.63691948 +0000 UTC m=+157.637111876" Jan 23 16:55:23 crc kubenswrapper[4606]: I0123 16:55:23.663281 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.663263993 podStartE2EDuration="2.663263993s" podCreationTimestamp="2026-01-23 16:55:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:23.640804102 +0000 UTC m=+157.640996528" watchObservedRunningTime="2026-01-23 16:55:23.663263993 +0000 UTC m=+157.663456399" Jan 23 16:55:23 crc kubenswrapper[4606]: I0123 16:55:23.716894 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-vrcsb" Jan 23 16:55:23 crc kubenswrapper[4606]: I0123 16:55:23.726877 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-vrcsb" Jan 23 16:55:24 crc kubenswrapper[4606]: I0123 16:55:24.594663 4606 generic.go:334] "Generic (PLEG): container finished" podID="2fcc3539-bb51-4397-9b20-975529f7d4ed" containerID="1c2110b3c415004a9042c687050f15d8e2df78627f73fda53c1f914d3278b790" exitCode=0 Jan 23 16:55:24 crc kubenswrapper[4606]: I0123 16:55:24.594805 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv2zs" 
event={"ID":"2fcc3539-bb51-4397-9b20-975529f7d4ed","Type":"ContainerDied","Data":"1c2110b3c415004a9042c687050f15d8e2df78627f73fda53c1f914d3278b790"} Jan 23 16:55:24 crc kubenswrapper[4606]: I0123 16:55:24.596559 4606 generic.go:334] "Generic (PLEG): container finished" podID="9afa2c13-613e-4885-8721-169fc3458fcf" containerID="98078954c508cd7f04679fb968126ba6c7bce7ef50474b855fc8faf0457497c8" exitCode=0 Jan 23 16:55:24 crc kubenswrapper[4606]: I0123 16:55:24.596643 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"9afa2c13-613e-4885-8721-169fc3458fcf","Type":"ContainerDied","Data":"98078954c508cd7f04679fb968126ba6c7bce7ef50474b855fc8faf0457497c8"} Jan 23 16:55:24 crc kubenswrapper[4606]: I0123 16:55:24.597620 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f08ada3f-4022-40b3-8d07-4eb33a970ae8","Type":"ContainerStarted","Data":"074e2fa9240c378476369be7f86f71286ef35158c650169152050a9645d279b8"} Jan 23 16:55:24 crc kubenswrapper[4606]: I0123 16:55:24.603558 4606 generic.go:334] "Generic (PLEG): container finished" podID="b09d16f2-87a5-4163-96f6-2680cb605ce5" containerID="e69f4e3924e1a7c1a0840570e50e70f71e4acdb85ce4d63199962e8868ba47d4" exitCode=0 Jan 23 16:55:24 crc kubenswrapper[4606]: I0123 16:55:24.603986 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qm92l" event={"ID":"b09d16f2-87a5-4163-96f6-2680cb605ce5","Type":"ContainerDied","Data":"e69f4e3924e1a7c1a0840570e50e70f71e4acdb85ce4d63199962e8868ba47d4"} Jan 23 16:55:24 crc kubenswrapper[4606]: I0123 16:55:24.604010 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qm92l" event={"ID":"b09d16f2-87a5-4163-96f6-2680cb605ce5","Type":"ContainerStarted","Data":"33d390dc9c90342c6cf70648c3705193905faa0604105e0a6b241681e43de0c7"} Jan 23 16:55:24 crc kubenswrapper[4606]: I0123 16:55:24.648910 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.648896771 podStartE2EDuration="2.648896771s" podCreationTimestamp="2026-01-23 16:55:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:55:24.644227412 +0000 UTC m=+158.644419828" watchObservedRunningTime="2026-01-23 16:55:24.648896771 +0000 UTC m=+158.649089177" Jan 23 16:55:25 crc kubenswrapper[4606]: I0123 16:55:25.051220 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486445-rsdm2" Jan 23 16:55:25 crc kubenswrapper[4606]: I0123 16:55:25.229096 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/559500a2-4399-42d0-832b-719429dd6fd9-secret-volume\") pod \"559500a2-4399-42d0-832b-719429dd6fd9\" (UID: \"559500a2-4399-42d0-832b-719429dd6fd9\") " Jan 23 16:55:25 crc kubenswrapper[4606]: I0123 16:55:25.229194 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/559500a2-4399-42d0-832b-719429dd6fd9-config-volume\") pod \"559500a2-4399-42d0-832b-719429dd6fd9\" (UID: \"559500a2-4399-42d0-832b-719429dd6fd9\") " Jan 23 16:55:25 crc kubenswrapper[4606]: I0123 16:55:25.229224 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jwvq\" (UniqueName: \"kubernetes.io/projected/559500a2-4399-42d0-832b-719429dd6fd9-kube-api-access-7jwvq\") pod \"559500a2-4399-42d0-832b-719429dd6fd9\" (UID: \"559500a2-4399-42d0-832b-719429dd6fd9\") " Jan 23 16:55:25 crc kubenswrapper[4606]: I0123 16:55:25.232280 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/559500a2-4399-42d0-832b-719429dd6fd9-config-volume" (OuterVolumeSpecName: "config-volume") pod "559500a2-4399-42d0-832b-719429dd6fd9" (UID: "559500a2-4399-42d0-832b-719429dd6fd9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:55:25 crc kubenswrapper[4606]: I0123 16:55:25.277453 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/559500a2-4399-42d0-832b-719429dd6fd9-kube-api-access-7jwvq" (OuterVolumeSpecName: "kube-api-access-7jwvq") pod "559500a2-4399-42d0-832b-719429dd6fd9" (UID: "559500a2-4399-42d0-832b-719429dd6fd9"). InnerVolumeSpecName "kube-api-access-7jwvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:55:25 crc kubenswrapper[4606]: I0123 16:55:25.279649 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/559500a2-4399-42d0-832b-719429dd6fd9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "559500a2-4399-42d0-832b-719429dd6fd9" (UID: "559500a2-4399-42d0-832b-719429dd6fd9"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:55:25 crc kubenswrapper[4606]: I0123 16:55:25.330938 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jwvq\" (UniqueName: \"kubernetes.io/projected/559500a2-4399-42d0-832b-719429dd6fd9-kube-api-access-7jwvq\") on node \"crc\" DevicePath \"\"" Jan 23 16:55:25 crc kubenswrapper[4606]: I0123 16:55:25.331036 4606 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/559500a2-4399-42d0-832b-719429dd6fd9-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 16:55:25 crc kubenswrapper[4606]: I0123 16:55:25.331050 4606 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/559500a2-4399-42d0-832b-719429dd6fd9-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 16:55:25 crc kubenswrapper[4606]: I0123 16:55:25.744733 4606 generic.go:334] "Generic (PLEG): container finished" podID="f08ada3f-4022-40b3-8d07-4eb33a970ae8" containerID="074e2fa9240c378476369be7f86f71286ef35158c650169152050a9645d279b8" exitCode=0 Jan 23 16:55:25 crc kubenswrapper[4606]: I0123 16:55:25.744938 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f08ada3f-4022-40b3-8d07-4eb33a970ae8","Type":"ContainerDied","Data":"074e2fa9240c378476369be7f86f71286ef35158c650169152050a9645d279b8"} Jan 23 16:55:25 crc kubenswrapper[4606]: I0123 16:55:25.766015 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486445-rsdm2" Jan 23 16:55:25 crc kubenswrapper[4606]: I0123 16:55:25.766343 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486445-rsdm2" event={"ID":"559500a2-4399-42d0-832b-719429dd6fd9","Type":"ContainerDied","Data":"1e9edc9ed52ab52d579ee6bcf8df40726e16f84c29b1116ed5a096f75904125a"} Jan 23 16:55:25 crc kubenswrapper[4606]: I0123 16:55:25.766386 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e9edc9ed52ab52d579ee6bcf8df40726e16f84c29b1116ed5a096f75904125a" Jan 23 16:55:26 crc kubenswrapper[4606]: I0123 16:55:26.587391 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-stn97" Jan 23 16:55:26 crc kubenswrapper[4606]: I0123 16:55:26.588391 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 16:55:26 crc kubenswrapper[4606]: I0123 16:55:26.757284 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9afa2c13-613e-4885-8721-169fc3458fcf-kube-api-access\") pod \"9afa2c13-613e-4885-8721-169fc3458fcf\" (UID: \"9afa2c13-613e-4885-8721-169fc3458fcf\") " Jan 23 16:55:26 crc kubenswrapper[4606]: I0123 16:55:26.757434 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9afa2c13-613e-4885-8721-169fc3458fcf-kubelet-dir\") pod \"9afa2c13-613e-4885-8721-169fc3458fcf\" (UID: \"9afa2c13-613e-4885-8721-169fc3458fcf\") " Jan 23 16:55:26 crc kubenswrapper[4606]: I0123 16:55:26.758628 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9afa2c13-613e-4885-8721-169fc3458fcf-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9afa2c13-613e-4885-8721-169fc3458fcf" (UID: "9afa2c13-613e-4885-8721-169fc3458fcf"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:55:26 crc kubenswrapper[4606]: I0123 16:55:26.764639 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9afa2c13-613e-4885-8721-169fc3458fcf-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9afa2c13-613e-4885-8721-169fc3458fcf" (UID: "9afa2c13-613e-4885-8721-169fc3458fcf"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:55:26 crc kubenswrapper[4606]: I0123 16:55:26.846334 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 16:55:26 crc kubenswrapper[4606]: I0123 16:55:26.852653 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"9afa2c13-613e-4885-8721-169fc3458fcf","Type":"ContainerDied","Data":"985686326a2747ee9d3d515d479f393475d0055587d43d1ecc14ba64c9755950"} Jan 23 16:55:26 crc kubenswrapper[4606]: I0123 16:55:26.852738 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="985686326a2747ee9d3d515d479f393475d0055587d43d1ecc14ba64c9755950" Jan 23 16:55:26 crc kubenswrapper[4606]: I0123 16:55:26.859646 4606 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9afa2c13-613e-4885-8721-169fc3458fcf-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 16:55:26 crc kubenswrapper[4606]: I0123 16:55:26.859709 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9afa2c13-613e-4885-8721-169fc3458fcf-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 16:55:27 crc kubenswrapper[4606]: I0123 16:55:27.768001 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-hms7g" Jan 23 16:55:27 crc kubenswrapper[4606]: I0123 16:55:27.773331 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 16:55:27 crc kubenswrapper[4606]: I0123 16:55:27.846026 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f08ada3f-4022-40b3-8d07-4eb33a970ae8-kubelet-dir\") pod \"f08ada3f-4022-40b3-8d07-4eb33a970ae8\" (UID: \"f08ada3f-4022-40b3-8d07-4eb33a970ae8\") " Jan 23 16:55:27 crc kubenswrapper[4606]: I0123 16:55:27.846334 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f08ada3f-4022-40b3-8d07-4eb33a970ae8-kube-api-access\") pod \"f08ada3f-4022-40b3-8d07-4eb33a970ae8\" (UID: \"f08ada3f-4022-40b3-8d07-4eb33a970ae8\") " Jan 23 16:55:27 crc kubenswrapper[4606]: I0123 16:55:27.846465 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f08ada3f-4022-40b3-8d07-4eb33a970ae8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f08ada3f-4022-40b3-8d07-4eb33a970ae8" (UID: "f08ada3f-4022-40b3-8d07-4eb33a970ae8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:55:27 crc kubenswrapper[4606]: I0123 16:55:27.846636 4606 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f08ada3f-4022-40b3-8d07-4eb33a970ae8-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 16:55:27 crc kubenswrapper[4606]: I0123 16:55:27.921056 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f08ada3f-4022-40b3-8d07-4eb33a970ae8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f08ada3f-4022-40b3-8d07-4eb33a970ae8" (UID: "f08ada3f-4022-40b3-8d07-4eb33a970ae8"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:55:27 crc kubenswrapper[4606]: I0123 16:55:27.947510 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f08ada3f-4022-40b3-8d07-4eb33a970ae8-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 16:55:27 crc kubenswrapper[4606]: I0123 16:55:27.972278 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f08ada3f-4022-40b3-8d07-4eb33a970ae8","Type":"ContainerDied","Data":"23ef6641a2c1198710e6a3040f898e98b1d977951151fb668016183e87fe9251"} Jan 23 16:55:27 crc kubenswrapper[4606]: I0123 16:55:27.972314 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23ef6641a2c1198710e6a3040f898e98b1d977951151fb668016183e87fe9251" Jan 23 16:55:27 crc kubenswrapper[4606]: I0123 16:55:27.972338 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 16:55:28 crc kubenswrapper[4606]: I0123 16:55:28.699950 4606 patch_prober.go:28] interesting pod/console-f9d7485db-wlpwz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Jan 23 16:55:28 crc kubenswrapper[4606]: I0123 16:55:28.700013 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-wlpwz" podUID="767b5dce-2e44-401a-b7af-3cb97ff48f69" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Jan 23 16:55:29 crc kubenswrapper[4606]: I0123 16:55:29.933548 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 16:55:29 crc kubenswrapper[4606]: I0123 16:55:29.933638 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 16:55:29 crc kubenswrapper[4606]: I0123 16:55:29.961861 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-fzn5r" Jan 23 16:55:32 crc kubenswrapper[4606]: I0123 16:55:32.335133 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bae240c5-8926-4fc6-b0d9-fdf64bacecfe-metrics-certs\") pod \"network-metrics-daemon-qqwhl\" (UID: \"bae240c5-8926-4fc6-b0d9-fdf64bacecfe\") " pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:55:32 crc kubenswrapper[4606]: I0123 16:55:32.340509 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bae240c5-8926-4fc6-b0d9-fdf64bacecfe-metrics-certs\") pod \"network-metrics-daemon-qqwhl\" (UID: \"bae240c5-8926-4fc6-b0d9-fdf64bacecfe\") " pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:55:32 crc kubenswrapper[4606]: I0123 16:55:32.379157 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qqwhl" Jan 23 16:55:32 crc kubenswrapper[4606]: I0123 16:55:32.942243 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-qqwhl"] Jan 23 16:55:33 crc kubenswrapper[4606]: I0123 16:55:33.122159 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qqwhl" event={"ID":"bae240c5-8926-4fc6-b0d9-fdf64bacecfe","Type":"ContainerStarted","Data":"2f3cbdae4132aa787bd2eaf6cc61007b4a4ea632e7a334b195684eee2ee4a918"} Jan 23 16:55:36 crc kubenswrapper[4606]: I0123 16:55:36.194087 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qqwhl" event={"ID":"bae240c5-8926-4fc6-b0d9-fdf64bacecfe","Type":"ContainerStarted","Data":"5330fc9396037998ec7f54e7df670a18af8b8307da46d122f9b404c4f44aa94f"} Jan 23 16:55:38 crc kubenswrapper[4606]: I0123 16:55:38.706326 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-wlpwz" Jan 23 16:55:38 crc kubenswrapper[4606]: I0123 16:55:38.709793 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-wlpwz" Jan 23 16:55:41 crc kubenswrapper[4606]: I0123 16:55:41.493521 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:55:49 crc kubenswrapper[4606]: I0123 16:55:49.576274 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c42wt" Jan 23 16:55:55 crc kubenswrapper[4606]: I0123 16:55:55.707366 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 16:55:57 crc kubenswrapper[4606]: E0123 16:55:57.747756 4606 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 23 16:55:57 crc kubenswrapper[4606]: E0123 16:55:57.748400 4606 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wxbtb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-kcsj7_openshift-marketplace(ac940a5b-420a-40f5-a0a2-fe76fb5f8400): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 16:55:57 crc kubenswrapper[4606]: E0123 16:55:57.749634 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-kcsj7" podUID="ac940a5b-420a-40f5-a0a2-fe76fb5f8400" Jan 23 16:55:59 crc kubenswrapper[4606]: I0123 16:55:59.820009 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 23 16:55:59 crc kubenswrapper[4606]: E0123 16:55:59.820735 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="559500a2-4399-42d0-832b-719429dd6fd9" containerName="collect-profiles" Jan 23 16:55:59 crc kubenswrapper[4606]: I0123 16:55:59.820748 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="559500a2-4399-42d0-832b-719429dd6fd9" containerName="collect-profiles" Jan 23 16:55:59 crc kubenswrapper[4606]: E0123 16:55:59.820757 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9afa2c13-613e-4885-8721-169fc3458fcf" containerName="pruner" Jan 23 16:55:59 crc kubenswrapper[4606]: I0123 16:55:59.820763 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="9afa2c13-613e-4885-8721-169fc3458fcf" containerName="pruner" Jan 23 16:55:59 crc kubenswrapper[4606]: E0123 16:55:59.820779 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f08ada3f-4022-40b3-8d07-4eb33a970ae8" containerName="pruner" Jan 23 16:55:59 crc kubenswrapper[4606]: I0123 16:55:59.820788 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="f08ada3f-4022-40b3-8d07-4eb33a970ae8" containerName="pruner" Jan 23 16:55:59 crc kubenswrapper[4606]: I0123 16:55:59.820900 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="f08ada3f-4022-40b3-8d07-4eb33a970ae8" containerName="pruner" Jan 23 16:55:59 crc kubenswrapper[4606]: I0123 16:55:59.820910 
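[annotation] The error records above show an init-container image pull being cancelled and surfacing as ErrImagePull. On the API side this appears as a Waiting state on the init container's status. A sketch over core/v1 types of how that condition can be detected; the status literal below is a reconstruction of what the records imply for redhat-marketplace-kcsj7, not captured API output:

```go
// Sketch: how the ErrImagePull above surfaces in pod status. Pure Go over
// k8s.io/api/core/v1 types; the literal is reconstructed from the log.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	status := corev1.PodStatus{
		InitContainerStatuses: []corev1.ContainerStatus{{
			Name: "extract-content",
			State: corev1.ContainerState{
				Waiting: &corev1.ContainerStateWaiting{
					Reason:  "ErrImagePull",
					Message: "rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled",
				},
			},
		}},
	}

	for _, st := range status.InitContainerStatuses {
		if w := st.State.Waiting; w != nil && (w.Reason == "ErrImagePull" || w.Reason == "ImagePullBackOff") {
			fmt.Printf("init container %s stuck: %s: %s\n", st.Name, w.Reason, w.Message)
		}
	}
}
```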
Jan 23 16:55:59 crc kubenswrapper[4606]: I0123 16:55:59.820009 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 23 16:55:59 crc kubenswrapper[4606]: E0123 16:55:59.820735 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="559500a2-4399-42d0-832b-719429dd6fd9" containerName="collect-profiles"
Jan 23 16:55:59 crc kubenswrapper[4606]: I0123 16:55:59.820748 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="559500a2-4399-42d0-832b-719429dd6fd9" containerName="collect-profiles"
Jan 23 16:55:59 crc kubenswrapper[4606]: E0123 16:55:59.820757 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9afa2c13-613e-4885-8721-169fc3458fcf" containerName="pruner"
Jan 23 16:55:59 crc kubenswrapper[4606]: I0123 16:55:59.820763 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="9afa2c13-613e-4885-8721-169fc3458fcf" containerName="pruner"
Jan 23 16:55:59 crc kubenswrapper[4606]: E0123 16:55:59.820779 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f08ada3f-4022-40b3-8d07-4eb33a970ae8" containerName="pruner"
Jan 23 16:55:59 crc kubenswrapper[4606]: I0123 16:55:59.820788 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="f08ada3f-4022-40b3-8d07-4eb33a970ae8" containerName="pruner"
Jan 23 16:55:59 crc kubenswrapper[4606]: I0123 16:55:59.820900 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="f08ada3f-4022-40b3-8d07-4eb33a970ae8" containerName="pruner"
Jan 23 16:55:59 crc kubenswrapper[4606]: I0123 16:55:59.820910 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="9afa2c13-613e-4885-8721-169fc3458fcf" containerName="pruner"
Jan 23 16:55:59 crc kubenswrapper[4606]: I0123 16:55:59.820918 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="559500a2-4399-42d0-832b-719429dd6fd9" containerName="collect-profiles"
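[annotation] The cpu_manager/state_mem records above are the kubelet's CPU manager dropping checkpointed CPU-set assignments for containers that no longer exist; that state is persisted in the checkpoint file /var/lib/kubelet/cpu_manager_state. A sketch that reads that file; the field names follow the kubelet's JSON checkpoint format and should be treated as assumptions for illustration:

```go
// Sketch: reading the CPU manager checkpoint the state_mem records above
// mutate. Field names are assumptions modeled on the kubelet checkpoint.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type cpuManagerCheckpoint struct {
	PolicyName    string                       `json:"policyName"`
	DefaultCPUSet string                       `json:"defaultCpuSet"`
	Entries       map[string]map[string]string `json:"entries,omitempty"` // podUID -> container -> cpuset
	Checksum      uint64                       `json:"checksum"`
}

func main() {
	raw, err := os.ReadFile("/var/lib/kubelet/cpu_manager_state")
	if err != nil {
		panic(err)
	}
	var cp cpuManagerCheckpoint
	if err := json.Unmarshal(raw, &cp); err != nil {
		panic(err)
	}
	// A "Deleted CPUSet assignment" record corresponds to an entry being
	// removed from Entries for that podUID/containerName pair.
	fmt.Printf("policy=%s defaultCPUSet=%q pods with assignments=%d\n",
		cp.PolicyName, cp.DefaultCPUSet, len(cp.Entries))
}
```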
(UniqueName: \"kubernetes.io/host-path/a50c2ed1-1904-4832-84c8-7008e4d85fb7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a50c2ed1-1904-4832-84c8-7008e4d85fb7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 16:56:00 crc kubenswrapper[4606]: I0123 16:56:00.144357 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a50c2ed1-1904-4832-84c8-7008e4d85fb7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a50c2ed1-1904-4832-84c8-7008e4d85fb7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 16:56:00 crc kubenswrapper[4606]: I0123 16:56:00.150506 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 16:56:04 crc kubenswrapper[4606]: I0123 16:56:04.027874 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 23 16:56:04 crc kubenswrapper[4606]: I0123 16:56:04.028803 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 16:56:04 crc kubenswrapper[4606]: I0123 16:56:04.031768 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 23 16:56:04 crc kubenswrapper[4606]: I0123 16:56:04.073890 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/778ea821-b161-4045-b9bc-78224119964a-var-lock\") pod \"installer-9-crc\" (UID: \"778ea821-b161-4045-b9bc-78224119964a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 16:56:04 crc kubenswrapper[4606]: I0123 16:56:04.073941 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/778ea821-b161-4045-b9bc-78224119964a-kube-api-access\") pod \"installer-9-crc\" (UID: \"778ea821-b161-4045-b9bc-78224119964a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 16:56:04 crc kubenswrapper[4606]: I0123 16:56:04.073976 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/778ea821-b161-4045-b9bc-78224119964a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"778ea821-b161-4045-b9bc-78224119964a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 16:56:04 crc kubenswrapper[4606]: I0123 16:56:04.175173 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/778ea821-b161-4045-b9bc-78224119964a-var-lock\") pod \"installer-9-crc\" (UID: \"778ea821-b161-4045-b9bc-78224119964a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 16:56:04 crc kubenswrapper[4606]: I0123 16:56:04.175227 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/778ea821-b161-4045-b9bc-78224119964a-kube-api-access\") pod \"installer-9-crc\" (UID: \"778ea821-b161-4045-b9bc-78224119964a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 16:56:04 crc kubenswrapper[4606]: I0123 16:56:04.175253 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/778ea821-b161-4045-b9bc-78224119964a-kubelet-dir\") pod \"installer-9-crc\" (UID: 
\"778ea821-b161-4045-b9bc-78224119964a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 16:56:04 crc kubenswrapper[4606]: I0123 16:56:04.175323 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/778ea821-b161-4045-b9bc-78224119964a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"778ea821-b161-4045-b9bc-78224119964a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 16:56:04 crc kubenswrapper[4606]: I0123 16:56:04.175563 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/778ea821-b161-4045-b9bc-78224119964a-var-lock\") pod \"installer-9-crc\" (UID: \"778ea821-b161-4045-b9bc-78224119964a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 16:56:04 crc kubenswrapper[4606]: I0123 16:56:04.201093 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/778ea821-b161-4045-b9bc-78224119964a-kube-api-access\") pod \"installer-9-crc\" (UID: \"778ea821-b161-4045-b9bc-78224119964a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 16:56:04 crc kubenswrapper[4606]: I0123 16:56:04.378106 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 16:56:05 crc kubenswrapper[4606]: E0123 16:56:05.250292 4606 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 23 16:56:05 crc kubenswrapper[4606]: E0123 16:56:05.250495 4606 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xq678,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-sv2zs_openshift-marketplace(2fcc3539-bb51-4397-9b20-975529f7d4ed): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context 
canceled" logger="UnhandledError" Jan 23 16:56:05 crc kubenswrapper[4606]: E0123 16:56:05.251692 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-sv2zs" podUID="2fcc3539-bb51-4397-9b20-975529f7d4ed" Jan 23 16:56:07 crc kubenswrapper[4606]: E0123 16:56:07.258793 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-sv2zs" podUID="2fcc3539-bb51-4397-9b20-975529f7d4ed" Jan 23 16:56:07 crc kubenswrapper[4606]: E0123 16:56:07.260627 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-kcsj7" podUID="ac940a5b-420a-40f5-a0a2-fe76fb5f8400" Jan 23 16:56:07 crc kubenswrapper[4606]: E0123 16:56:07.328679 4606 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 23 16:56:07 crc kubenswrapper[4606]: E0123 16:56:07.328820 4606 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xrhrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-wsc4g_openshift-marketplace(f9a9cfbb-677a-458b-bb08-8220e122ccac): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 16:56:07 crc kubenswrapper[4606]: E0123 16:56:07.330898 4606 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-wsc4g" podUID="f9a9cfbb-677a-458b-bb08-8220e122ccac" Jan 23 16:56:10 crc kubenswrapper[4606]: E0123 16:56:10.679252 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-wsc4g" podUID="f9a9cfbb-677a-458b-bb08-8220e122ccac" Jan 23 16:56:10 crc kubenswrapper[4606]: E0123 16:56:10.767519 4606 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 23 16:56:10 crc kubenswrapper[4606]: E0123 16:56:10.767689 4606 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rnlcp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-s6twp_openshift-marketplace(416c8a19-3575-4434-813b-184508a997b9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 16:56:10 crc kubenswrapper[4606]: E0123 16:56:10.769186 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-s6twp" podUID="416c8a19-3575-4434-813b-184508a997b9" Jan 23 16:56:10 crc kubenswrapper[4606]: E0123 16:56:10.775348 4606 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 23 16:56:10 crc kubenswrapper[4606]: E0123 16:56:10.775424 4606 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wwxbh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-ks7wv_openshift-marketplace(83216fef-4300-422a-93d0-e43bbfd0126c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 16:56:10 crc kubenswrapper[4606]: E0123 16:56:10.776640 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-ks7wv" podUID="83216fef-4300-422a-93d0-e43bbfd0126c" Jan 23 16:56:10 crc kubenswrapper[4606]: E0123 16:56:10.816675 4606 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 23 16:56:10 crc kubenswrapper[4606]: E0123 16:56:10.817173 4606 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h9fd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-gvbpj_openshift-marketplace(395d94ec-ce71-4b6a-b503-53501da7cf17): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 16:56:10 crc kubenswrapper[4606]: E0123 16:56:10.818778 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-gvbpj" podUID="395d94ec-ce71-4b6a-b503-53501da7cf17" Jan 23 16:56:10 crc kubenswrapper[4606]: E0123 16:56:10.878617 4606 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 23 16:56:10 crc kubenswrapper[4606]: E0123 16:56:10.878770 4606 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v8tbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-qm92l_openshift-marketplace(b09d16f2-87a5-4163-96f6-2680cb605ce5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 16:56:10 crc kubenswrapper[4606]: E0123 16:56:10.881128 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-qm92l" podUID="b09d16f2-87a5-4163-96f6-2680cb605ce5" Jan 23 16:56:11 crc kubenswrapper[4606]: I0123 16:56:11.007168 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 23 16:56:11 crc kubenswrapper[4606]: I0123 16:56:11.159211 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 23 16:56:11 crc kubenswrapper[4606]: I0123 16:56:11.465111 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"778ea821-b161-4045-b9bc-78224119964a","Type":"ContainerStarted","Data":"25b93b2ac515b31a68af9110421850c0d1df8c20d1e0ecb12c36bf52a7d2a5d1"} Jan 23 16:56:11 crc kubenswrapper[4606]: I0123 16:56:11.465156 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"778ea821-b161-4045-b9bc-78224119964a","Type":"ContainerStarted","Data":"c2de7cfc822634a22646302036afff9f2a25b00a51919b284c52676f46b1e942"} Jan 23 16:56:11 crc kubenswrapper[4606]: I0123 16:56:11.467606 4606 generic.go:334] "Generic (PLEG): container finished" podID="bc241014-a654-4aa4-a5e0-5b83e54224fa" containerID="7f05e711f99c42231ae0fff60e2286b12de96097cc40a6bc81a77179ce7e91ab" exitCode=0 Jan 23 16:56:11 crc kubenswrapper[4606]: I0123 16:56:11.467653 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5pmv8" event={"ID":"bc241014-a654-4aa4-a5e0-5b83e54224fa","Type":"ContainerDied","Data":"7f05e711f99c42231ae0fff60e2286b12de96097cc40a6bc81a77179ce7e91ab"} Jan 23 16:56:11 crc 
kubenswrapper[4606]: I0123 16:56:11.472040 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qqwhl" event={"ID":"bae240c5-8926-4fc6-b0d9-fdf64bacecfe","Type":"ContainerStarted","Data":"273cc459864f5ee50de1db826bfbb29f6d5d38454f0d67b321e7263a48c2bf6d"} Jan 23 16:56:11 crc kubenswrapper[4606]: I0123 16:56:11.474829 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a50c2ed1-1904-4832-84c8-7008e4d85fb7","Type":"ContainerStarted","Data":"b8be0c3a03a63af845a05e1966a9959b475645a28f98670883f8fb2d991adad3"} Jan 23 16:56:11 crc kubenswrapper[4606]: I0123 16:56:11.474857 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a50c2ed1-1904-4832-84c8-7008e4d85fb7","Type":"ContainerStarted","Data":"26836a5c03356b03c7f6d6df57b07d3d59bc83531583477375d1d662a8bf7f64"} Jan 23 16:56:11 crc kubenswrapper[4606]: E0123 16:56:11.475928 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-ks7wv" podUID="83216fef-4300-422a-93d0-e43bbfd0126c" Jan 23 16:56:11 crc kubenswrapper[4606]: E0123 16:56:11.475954 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-s6twp" podUID="416c8a19-3575-4434-813b-184508a997b9" Jan 23 16:56:11 crc kubenswrapper[4606]: E0123 16:56:11.476216 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-gvbpj" podUID="395d94ec-ce71-4b6a-b503-53501da7cf17" Jan 23 16:56:11 crc kubenswrapper[4606]: E0123 16:56:11.477161 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-qm92l" podUID="b09d16f2-87a5-4163-96f6-2680cb605ce5" Jan 23 16:56:11 crc kubenswrapper[4606]: I0123 16:56:11.496940 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=7.496924352 podStartE2EDuration="7.496924352s" podCreationTimestamp="2026-01-23 16:56:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:56:11.482352949 +0000 UTC m=+205.482545365" watchObservedRunningTime="2026-01-23 16:56:11.496924352 +0000 UTC m=+205.497116758" Jan 23 16:56:11 crc kubenswrapper[4606]: I0123 16:56:11.508941 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=12.508918821 podStartE2EDuration="12.508918821s" podCreationTimestamp="2026-01-23 16:55:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:56:11.495385828 
+0000 UTC m=+205.495578254" watchObservedRunningTime="2026-01-23 16:56:11.508918821 +0000 UTC m=+205.509111227" Jan 23 16:56:11 crc kubenswrapper[4606]: I0123 16:56:11.511842 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-qqwhl" podStartSLOduration=183.511829674 podStartE2EDuration="3m3.511829674s" podCreationTimestamp="2026-01-23 16:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:56:11.508105268 +0000 UTC m=+205.508297674" watchObservedRunningTime="2026-01-23 16:56:11.511829674 +0000 UTC m=+205.512022080" Jan 23 16:56:12 crc kubenswrapper[4606]: I0123 16:56:12.486358 4606 generic.go:334] "Generic (PLEG): container finished" podID="a50c2ed1-1904-4832-84c8-7008e4d85fb7" containerID="b8be0c3a03a63af845a05e1966a9959b475645a28f98670883f8fb2d991adad3" exitCode=0 Jan 23 16:56:12 crc kubenswrapper[4606]: I0123 16:56:12.486670 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a50c2ed1-1904-4832-84c8-7008e4d85fb7","Type":"ContainerDied","Data":"b8be0c3a03a63af845a05e1966a9959b475645a28f98670883f8fb2d991adad3"} Jan 23 16:56:12 crc kubenswrapper[4606]: I0123 16:56:12.492149 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5pmv8" event={"ID":"bc241014-a654-4aa4-a5e0-5b83e54224fa","Type":"ContainerStarted","Data":"e79a20482e14faa501a78d351c5c11ac761e5ef0b970a0cfc190e7abc29763b8"} Jan 23 16:56:12 crc kubenswrapper[4606]: I0123 16:56:12.526914 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5pmv8" podStartSLOduration=3.078386052 podStartE2EDuration="51.526881248s" podCreationTimestamp="2026-01-23 16:55:21 +0000 UTC" firstStartedPulling="2026-01-23 16:55:23.506160707 +0000 UTC m=+157.506353113" lastFinishedPulling="2026-01-23 16:56:11.954655903 +0000 UTC m=+205.954848309" observedRunningTime="2026-01-23 16:56:12.526114786 +0000 UTC m=+206.526307192" watchObservedRunningTime="2026-01-23 16:56:12.526881248 +0000 UTC m=+206.527073654" Jan 23 16:56:13 crc kubenswrapper[4606]: I0123 16:56:13.759641 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 16:56:13 crc kubenswrapper[4606]: I0123 16:56:13.822658 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a50c2ed1-1904-4832-84c8-7008e4d85fb7-kube-api-access\") pod \"a50c2ed1-1904-4832-84c8-7008e4d85fb7\" (UID: \"a50c2ed1-1904-4832-84c8-7008e4d85fb7\") " Jan 23 16:56:13 crc kubenswrapper[4606]: I0123 16:56:13.822697 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a50c2ed1-1904-4832-84c8-7008e4d85fb7-kubelet-dir\") pod \"a50c2ed1-1904-4832-84c8-7008e4d85fb7\" (UID: \"a50c2ed1-1904-4832-84c8-7008e4d85fb7\") " Jan 23 16:56:13 crc kubenswrapper[4606]: I0123 16:56:13.822995 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a50c2ed1-1904-4832-84c8-7008e4d85fb7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a50c2ed1-1904-4832-84c8-7008e4d85fb7" (UID: "a50c2ed1-1904-4832-84c8-7008e4d85fb7"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:56:13 crc kubenswrapper[4606]: I0123 16:56:13.828146 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a50c2ed1-1904-4832-84c8-7008e4d85fb7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a50c2ed1-1904-4832-84c8-7008e4d85fb7" (UID: "a50c2ed1-1904-4832-84c8-7008e4d85fb7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:56:13 crc kubenswrapper[4606]: I0123 16:56:13.924459 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a50c2ed1-1904-4832-84c8-7008e4d85fb7-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 16:56:13 crc kubenswrapper[4606]: I0123 16:56:13.924496 4606 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a50c2ed1-1904-4832-84c8-7008e4d85fb7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 16:56:14 crc kubenswrapper[4606]: I0123 16:56:14.504399 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a50c2ed1-1904-4832-84c8-7008e4d85fb7","Type":"ContainerDied","Data":"26836a5c03356b03c7f6d6df57b07d3d59bc83531583477375d1d662a8bf7f64"} Jan 23 16:56:14 crc kubenswrapper[4606]: I0123 16:56:14.504438 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26836a5c03356b03c7f6d6df57b07d3d59bc83531583477375d1d662a8bf7f64" Jan 23 16:56:14 crc kubenswrapper[4606]: I0123 16:56:14.504525 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 16:56:21 crc kubenswrapper[4606]: I0123 16:56:21.568440 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5pmv8" Jan 23 16:56:21 crc kubenswrapper[4606]: I0123 16:56:21.569221 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5pmv8" Jan 23 16:56:21 crc kubenswrapper[4606]: I0123 16:56:21.823245 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5pmv8" Jan 23 16:56:22 crc kubenswrapper[4606]: I0123 16:56:22.588453 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5pmv8" Jan 23 16:56:23 crc kubenswrapper[4606]: I0123 16:56:23.557347 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv2zs" event={"ID":"2fcc3539-bb51-4397-9b20-975529f7d4ed","Type":"ContainerStarted","Data":"0ba871fec8bf8e088fbeb71d3d0010df8ab8bd4f96bf5ccf8cbfeb96695f6768"} Jan 23 16:56:23 crc kubenswrapper[4606]: I0123 16:56:23.567155 4606 generic.go:334] "Generic (PLEG): container finished" podID="395d94ec-ce71-4b6a-b503-53501da7cf17" containerID="58edb5f5e3b0aca600fe129481ceaf6d0321c9e9155334ad4950f25a89611a46" exitCode=0 Jan 23 16:56:23 crc kubenswrapper[4606]: I0123 16:56:23.567256 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gvbpj" event={"ID":"395d94ec-ce71-4b6a-b503-53501da7cf17","Type":"ContainerDied","Data":"58edb5f5e3b0aca600fe129481ceaf6d0321c9e9155334ad4950f25a89611a46"} Jan 23 16:56:23 crc kubenswrapper[4606]: I0123 16:56:23.571217 4606 generic.go:334] "Generic (PLEG): container finished" 
podID="ac940a5b-420a-40f5-a0a2-fe76fb5f8400" containerID="2f77163a46cb2488386b5ddac49aeeb0cb06815effcceff65489c905658e5a58" exitCode=0 Jan 23 16:56:23 crc kubenswrapper[4606]: I0123 16:56:23.572127 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kcsj7" event={"ID":"ac940a5b-420a-40f5-a0a2-fe76fb5f8400","Type":"ContainerDied","Data":"2f77163a46cb2488386b5ddac49aeeb0cb06815effcceff65489c905658e5a58"} Jan 23 16:56:24 crc kubenswrapper[4606]: I0123 16:56:24.578097 4606 generic.go:334] "Generic (PLEG): container finished" podID="2fcc3539-bb51-4397-9b20-975529f7d4ed" containerID="0ba871fec8bf8e088fbeb71d3d0010df8ab8bd4f96bf5ccf8cbfeb96695f6768" exitCode=0 Jan 23 16:56:24 crc kubenswrapper[4606]: I0123 16:56:24.578302 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv2zs" event={"ID":"2fcc3539-bb51-4397-9b20-975529f7d4ed","Type":"ContainerDied","Data":"0ba871fec8bf8e088fbeb71d3d0010df8ab8bd4f96bf5ccf8cbfeb96695f6768"} Jan 23 16:56:24 crc kubenswrapper[4606]: I0123 16:56:24.600187 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ks7wv" event={"ID":"83216fef-4300-422a-93d0-e43bbfd0126c","Type":"ContainerStarted","Data":"081cb0f890396a917f4c73201fc59e07c065582ece1c64a53700a225205ad720"} Jan 23 16:56:24 crc kubenswrapper[4606]: I0123 16:56:24.602666 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gvbpj" event={"ID":"395d94ec-ce71-4b6a-b503-53501da7cf17","Type":"ContainerStarted","Data":"b1aa2f44cf516ee866c3a0f0aad58c4cbae06f72a6bc092ac17de82f4f382ba2"} Jan 23 16:56:24 crc kubenswrapper[4606]: I0123 16:56:24.616442 4606 generic.go:334] "Generic (PLEG): container finished" podID="416c8a19-3575-4434-813b-184508a997b9" containerID="44a36a7a75e5c9de02225d429cb37e6603b698053c9e4fa167402a0e7b05fe96" exitCode=0 Jan 23 16:56:24 crc kubenswrapper[4606]: I0123 16:56:24.616510 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s6twp" event={"ID":"416c8a19-3575-4434-813b-184508a997b9","Type":"ContainerDied","Data":"44a36a7a75e5c9de02225d429cb37e6603b698053c9e4fa167402a0e7b05fe96"} Jan 23 16:56:24 crc kubenswrapper[4606]: I0123 16:56:24.623538 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kcsj7" event={"ID":"ac940a5b-420a-40f5-a0a2-fe76fb5f8400","Type":"ContainerStarted","Data":"8f99252f600ebd373690208d3d8db5098974fc8dc2407a68be585c75b8abfec0"} Jan 23 16:56:24 crc kubenswrapper[4606]: I0123 16:56:24.638574 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gvbpj" podStartSLOduration=4.219773689 podStartE2EDuration="1m5.63856015s" podCreationTimestamp="2026-01-23 16:55:19 +0000 UTC" firstStartedPulling="2026-01-23 16:55:22.54566239 +0000 UTC m=+156.545854806" lastFinishedPulling="2026-01-23 16:56:23.964448871 +0000 UTC m=+217.964641267" observedRunningTime="2026-01-23 16:56:24.637405288 +0000 UTC m=+218.637597694" watchObservedRunningTime="2026-01-23 16:56:24.63856015 +0000 UTC m=+218.638752546" Jan 23 16:56:24 crc kubenswrapper[4606]: I0123 16:56:24.706992 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kcsj7" podStartSLOduration=4.190265355 podStartE2EDuration="1m4.706973258s" podCreationTimestamp="2026-01-23 16:55:20 +0000 UTC" firstStartedPulling="2026-01-23 
16:55:23.584213543 +0000 UTC m=+157.584405949" lastFinishedPulling="2026-01-23 16:56:24.100921456 +0000 UTC m=+218.101113852" observedRunningTime="2026-01-23 16:56:24.705473985 +0000 UTC m=+218.705666391" watchObservedRunningTime="2026-01-23 16:56:24.706973258 +0000 UTC m=+218.707165664" Jan 23 16:56:24 crc kubenswrapper[4606]: I0123 16:56:24.813148 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5pmv8"] Jan 23 16:56:24 crc kubenswrapper[4606]: I0123 16:56:24.813737 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5pmv8" podUID="bc241014-a654-4aa4-a5e0-5b83e54224fa" containerName="registry-server" containerID="cri-o://e79a20482e14faa501a78d351c5c11ac761e5ef0b970a0cfc190e7abc29763b8" gracePeriod=2 Jan 23 16:56:25 crc kubenswrapper[4606]: I0123 16:56:25.269644 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5pmv8" Jan 23 16:56:25 crc kubenswrapper[4606]: I0123 16:56:25.301686 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc241014-a654-4aa4-a5e0-5b83e54224fa-utilities\") pod \"bc241014-a654-4aa4-a5e0-5b83e54224fa\" (UID: \"bc241014-a654-4aa4-a5e0-5b83e54224fa\") " Jan 23 16:56:25 crc kubenswrapper[4606]: I0123 16:56:25.301730 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd9kr\" (UniqueName: \"kubernetes.io/projected/bc241014-a654-4aa4-a5e0-5b83e54224fa-kube-api-access-zd9kr\") pod \"bc241014-a654-4aa4-a5e0-5b83e54224fa\" (UID: \"bc241014-a654-4aa4-a5e0-5b83e54224fa\") " Jan 23 16:56:25 crc kubenswrapper[4606]: I0123 16:56:25.301869 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc241014-a654-4aa4-a5e0-5b83e54224fa-catalog-content\") pod \"bc241014-a654-4aa4-a5e0-5b83e54224fa\" (UID: \"bc241014-a654-4aa4-a5e0-5b83e54224fa\") " Jan 23 16:56:25 crc kubenswrapper[4606]: I0123 16:56:25.303248 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc241014-a654-4aa4-a5e0-5b83e54224fa-utilities" (OuterVolumeSpecName: "utilities") pod "bc241014-a654-4aa4-a5e0-5b83e54224fa" (UID: "bc241014-a654-4aa4-a5e0-5b83e54224fa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:56:25 crc kubenswrapper[4606]: I0123 16:56:25.308735 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc241014-a654-4aa4-a5e0-5b83e54224fa-kube-api-access-zd9kr" (OuterVolumeSpecName: "kube-api-access-zd9kr") pod "bc241014-a654-4aa4-a5e0-5b83e54224fa" (UID: "bc241014-a654-4aa4-a5e0-5b83e54224fa"). InnerVolumeSpecName "kube-api-access-zd9kr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:56:25 crc kubenswrapper[4606]: I0123 16:56:25.354162 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc241014-a654-4aa4-a5e0-5b83e54224fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc241014-a654-4aa4-a5e0-5b83e54224fa" (UID: "bc241014-a654-4aa4-a5e0-5b83e54224fa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:56:25 crc kubenswrapper[4606]: I0123 16:56:25.403548 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc241014-a654-4aa4-a5e0-5b83e54224fa-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 16:56:25 crc kubenswrapper[4606]: I0123 16:56:25.403579 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc241014-a654-4aa4-a5e0-5b83e54224fa-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 16:56:25 crc kubenswrapper[4606]: I0123 16:56:25.403605 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd9kr\" (UniqueName: \"kubernetes.io/projected/bc241014-a654-4aa4-a5e0-5b83e54224fa-kube-api-access-zd9kr\") on node \"crc\" DevicePath \"\"" Jan 23 16:56:25 crc kubenswrapper[4606]: I0123 16:56:25.630869 4606 generic.go:334] "Generic (PLEG): container finished" podID="bc241014-a654-4aa4-a5e0-5b83e54224fa" containerID="e79a20482e14faa501a78d351c5c11ac761e5ef0b970a0cfc190e7abc29763b8" exitCode=0 Jan 23 16:56:25 crc kubenswrapper[4606]: I0123 16:56:25.631001 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5pmv8" event={"ID":"bc241014-a654-4aa4-a5e0-5b83e54224fa","Type":"ContainerDied","Data":"e79a20482e14faa501a78d351c5c11ac761e5ef0b970a0cfc190e7abc29763b8"} Jan 23 16:56:25 crc kubenswrapper[4606]: I0123 16:56:25.631287 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5pmv8" event={"ID":"bc241014-a654-4aa4-a5e0-5b83e54224fa","Type":"ContainerDied","Data":"ce2822cad477c722fb69063438ff613e9e50261a8696a365f06ed0f5ebff7d27"} Jan 23 16:56:25 crc kubenswrapper[4606]: I0123 16:56:25.631316 4606 scope.go:117] "RemoveContainer" containerID="e79a20482e14faa501a78d351c5c11ac761e5ef0b970a0cfc190e7abc29763b8" Jan 23 16:56:25 crc kubenswrapper[4606]: I0123 16:56:25.631099 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5pmv8" Jan 23 16:56:25 crc kubenswrapper[4606]: I0123 16:56:25.635664 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qm92l" event={"ID":"b09d16f2-87a5-4163-96f6-2680cb605ce5","Type":"ContainerStarted","Data":"3ef130053f4cde549fe1752dcb95dcbcc3760f53c4ba7e735e85f79f07d5b8b8"} Jan 23 16:56:25 crc kubenswrapper[4606]: I0123 16:56:25.644637 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv2zs" event={"ID":"2fcc3539-bb51-4397-9b20-975529f7d4ed","Type":"ContainerStarted","Data":"a2cdf41df445e1eac6b04bd7080fe69c7bbdc9059988b6aa6b1827e80d0d6152"} Jan 23 16:56:25 crc kubenswrapper[4606]: I0123 16:56:25.650043 4606 scope.go:117] "RemoveContainer" containerID="7f05e711f99c42231ae0fff60e2286b12de96097cc40a6bc81a77179ce7e91ab" Jan 23 16:56:25 crc kubenswrapper[4606]: I0123 16:56:25.653196 4606 generic.go:334] "Generic (PLEG): container finished" podID="83216fef-4300-422a-93d0-e43bbfd0126c" containerID="081cb0f890396a917f4c73201fc59e07c065582ece1c64a53700a225205ad720" exitCode=0 Jan 23 16:56:25 crc kubenswrapper[4606]: I0123 16:56:25.653296 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ks7wv" event={"ID":"83216fef-4300-422a-93d0-e43bbfd0126c","Type":"ContainerDied","Data":"081cb0f890396a917f4c73201fc59e07c065582ece1c64a53700a225205ad720"} Jan 23 16:56:25 crc kubenswrapper[4606]: I0123 16:56:25.660716 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s6twp" event={"ID":"416c8a19-3575-4434-813b-184508a997b9","Type":"ContainerStarted","Data":"daa224e0225ab995edd30afc487587663120827bb586936f295a156c04afc112"} Jan 23 16:56:25 crc kubenswrapper[4606]: I0123 16:56:25.682083 4606 scope.go:117] "RemoveContainer" containerID="26e7771411cbf4fb06d2f9c64d9709e34c1d1856bca14b225d783e6b9de5b8f1" Jan 23 16:56:25 crc kubenswrapper[4606]: I0123 16:56:25.701871 4606 scope.go:117] "RemoveContainer" containerID="e79a20482e14faa501a78d351c5c11ac761e5ef0b970a0cfc190e7abc29763b8" Jan 23 16:56:25 crc kubenswrapper[4606]: E0123 16:56:25.704435 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e79a20482e14faa501a78d351c5c11ac761e5ef0b970a0cfc190e7abc29763b8\": container with ID starting with e79a20482e14faa501a78d351c5c11ac761e5ef0b970a0cfc190e7abc29763b8 not found: ID does not exist" containerID="e79a20482e14faa501a78d351c5c11ac761e5ef0b970a0cfc190e7abc29763b8" Jan 23 16:56:25 crc kubenswrapper[4606]: I0123 16:56:25.704477 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e79a20482e14faa501a78d351c5c11ac761e5ef0b970a0cfc190e7abc29763b8"} err="failed to get container status \"e79a20482e14faa501a78d351c5c11ac761e5ef0b970a0cfc190e7abc29763b8\": rpc error: code = NotFound desc = could not find container \"e79a20482e14faa501a78d351c5c11ac761e5ef0b970a0cfc190e7abc29763b8\": container with ID starting with e79a20482e14faa501a78d351c5c11ac761e5ef0b970a0cfc190e7abc29763b8 not found: ID does not exist" Jan 23 16:56:25 crc kubenswrapper[4606]: I0123 16:56:25.704522 4606 scope.go:117] "RemoveContainer" containerID="7f05e711f99c42231ae0fff60e2286b12de96097cc40a6bc81a77179ce7e91ab" Jan 23 16:56:25 crc kubenswrapper[4606]: E0123 16:56:25.704852 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"7f05e711f99c42231ae0fff60e2286b12de96097cc40a6bc81a77179ce7e91ab\": container with ID starting with 7f05e711f99c42231ae0fff60e2286b12de96097cc40a6bc81a77179ce7e91ab not found: ID does not exist" containerID="7f05e711f99c42231ae0fff60e2286b12de96097cc40a6bc81a77179ce7e91ab" Jan 23 16:56:25 crc kubenswrapper[4606]: I0123 16:56:25.704906 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f05e711f99c42231ae0fff60e2286b12de96097cc40a6bc81a77179ce7e91ab"} err="failed to get container status \"7f05e711f99c42231ae0fff60e2286b12de96097cc40a6bc81a77179ce7e91ab\": rpc error: code = NotFound desc = could not find container \"7f05e711f99c42231ae0fff60e2286b12de96097cc40a6bc81a77179ce7e91ab\": container with ID starting with 7f05e711f99c42231ae0fff60e2286b12de96097cc40a6bc81a77179ce7e91ab not found: ID does not exist" Jan 23 16:56:25 crc kubenswrapper[4606]: I0123 16:56:25.704940 4606 scope.go:117] "RemoveContainer" containerID="26e7771411cbf4fb06d2f9c64d9709e34c1d1856bca14b225d783e6b9de5b8f1" Jan 23 16:56:25 crc kubenswrapper[4606]: E0123 16:56:25.705913 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26e7771411cbf4fb06d2f9c64d9709e34c1d1856bca14b225d783e6b9de5b8f1\": container with ID starting with 26e7771411cbf4fb06d2f9c64d9709e34c1d1856bca14b225d783e6b9de5b8f1 not found: ID does not exist" containerID="26e7771411cbf4fb06d2f9c64d9709e34c1d1856bca14b225d783e6b9de5b8f1" Jan 23 16:56:25 crc kubenswrapper[4606]: I0123 16:56:25.705938 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26e7771411cbf4fb06d2f9c64d9709e34c1d1856bca14b225d783e6b9de5b8f1"} err="failed to get container status \"26e7771411cbf4fb06d2f9c64d9709e34c1d1856bca14b225d783e6b9de5b8f1\": rpc error: code = NotFound desc = could not find container \"26e7771411cbf4fb06d2f9c64d9709e34c1d1856bca14b225d783e6b9de5b8f1\": container with ID starting with 26e7771411cbf4fb06d2f9c64d9709e34c1d1856bca14b225d783e6b9de5b8f1 not found: ID does not exist" Jan 23 16:56:25 crc kubenswrapper[4606]: I0123 16:56:25.715260 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s6twp" podStartSLOduration=4.002834675 podStartE2EDuration="1m6.715242429s" podCreationTimestamp="2026-01-23 16:55:19 +0000 UTC" firstStartedPulling="2026-01-23 16:55:22.54533573 +0000 UTC m=+156.545528136" lastFinishedPulling="2026-01-23 16:56:25.257743494 +0000 UTC m=+219.257935890" observedRunningTime="2026-01-23 16:56:25.712570964 +0000 UTC m=+219.712763370" watchObservedRunningTime="2026-01-23 16:56:25.715242429 +0000 UTC m=+219.715434835" Jan 23 16:56:25 crc kubenswrapper[4606]: I0123 16:56:25.755493 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sv2zs" podStartSLOduration=3.243850528 podStartE2EDuration="1m3.755472409s" podCreationTimestamp="2026-01-23 16:55:22 +0000 UTC" firstStartedPulling="2026-01-23 16:55:24.600378106 +0000 UTC m=+158.600570512" lastFinishedPulling="2026-01-23 16:56:25.111999987 +0000 UTC m=+219.112192393" observedRunningTime="2026-01-23 16:56:25.750288662 +0000 UTC m=+219.750481068" watchObservedRunningTime="2026-01-23 16:56:25.755472409 +0000 UTC m=+219.755664815" Jan 23 16:56:25 crc kubenswrapper[4606]: I0123 16:56:25.772035 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-5pmv8"] Jan 23 16:56:25 crc kubenswrapper[4606]: I0123 16:56:25.786288 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5pmv8"] Jan 23 16:56:26 crc kubenswrapper[4606]: I0123 16:56:26.373729 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc241014-a654-4aa4-a5e0-5b83e54224fa" path="/var/lib/kubelet/pods/bc241014-a654-4aa4-a5e0-5b83e54224fa/volumes" Jan 23 16:56:27 crc kubenswrapper[4606]: I0123 16:56:27.673914 4606 generic.go:334] "Generic (PLEG): container finished" podID="b09d16f2-87a5-4163-96f6-2680cb605ce5" containerID="3ef130053f4cde549fe1752dcb95dcbcc3760f53c4ba7e735e85f79f07d5b8b8" exitCode=0 Jan 23 16:56:27 crc kubenswrapper[4606]: I0123 16:56:27.673963 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qm92l" event={"ID":"b09d16f2-87a5-4163-96f6-2680cb605ce5","Type":"ContainerDied","Data":"3ef130053f4cde549fe1752dcb95dcbcc3760f53c4ba7e735e85f79f07d5b8b8"} Jan 23 16:56:28 crc kubenswrapper[4606]: I0123 16:56:28.680107 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ks7wv" event={"ID":"83216fef-4300-422a-93d0-e43bbfd0126c","Type":"ContainerStarted","Data":"453fef7fd3d6de28261d5337427b616ef2c32927d46c2626c7aa2a03bf4c6817"} Jan 23 16:56:28 crc kubenswrapper[4606]: I0123 16:56:28.694939 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ks7wv" podStartSLOduration=4.293699572 podStartE2EDuration="1m9.694919007s" podCreationTimestamp="2026-01-23 16:55:19 +0000 UTC" firstStartedPulling="2026-01-23 16:55:22.54534999 +0000 UTC m=+156.545542406" lastFinishedPulling="2026-01-23 16:56:27.946569435 +0000 UTC m=+221.946761841" observedRunningTime="2026-01-23 16:56:28.694782803 +0000 UTC m=+222.694975209" watchObservedRunningTime="2026-01-23 16:56:28.694919007 +0000 UTC m=+222.695111413" Jan 23 16:56:29 crc kubenswrapper[4606]: I0123 16:56:29.789903 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gvbpj" Jan 23 16:56:29 crc kubenswrapper[4606]: I0123 16:56:29.790552 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gvbpj" Jan 23 16:56:29 crc kubenswrapper[4606]: I0123 16:56:29.832207 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gvbpj" Jan 23 16:56:29 crc kubenswrapper[4606]: I0123 16:56:29.933925 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 16:56:29 crc kubenswrapper[4606]: I0123 16:56:29.934000 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 16:56:29 crc kubenswrapper[4606]: I0123 16:56:29.934048 4606 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" Jan 23 16:56:29 crc 
kubenswrapper[4606]: I0123 16:56:29.934907 4606 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03"} pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 16:56:29 crc kubenswrapper[4606]: I0123 16:56:29.934983 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" containerID="cri-o://17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03" gracePeriod=600 Jan 23 16:56:30 crc kubenswrapper[4606]: I0123 16:56:30.156792 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ks7wv" Jan 23 16:56:30 crc kubenswrapper[4606]: I0123 16:56:30.157062 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ks7wv" Jan 23 16:56:30 crc kubenswrapper[4606]: I0123 16:56:30.157574 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s6twp" Jan 23 16:56:30 crc kubenswrapper[4606]: I0123 16:56:30.157619 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s6twp" Jan 23 16:56:30 crc kubenswrapper[4606]: I0123 16:56:30.193610 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s6twp" Jan 23 16:56:30 crc kubenswrapper[4606]: I0123 16:56:30.210805 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ks7wv" Jan 23 16:56:30 crc kubenswrapper[4606]: I0123 16:56:30.728832 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s6twp" Jan 23 16:56:30 crc kubenswrapper[4606]: I0123 16:56:30.732263 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gvbpj" Jan 23 16:56:31 crc kubenswrapper[4606]: I0123 16:56:31.215346 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s6twp"] Jan 23 16:56:31 crc kubenswrapper[4606]: I0123 16:56:31.229657 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kcsj7" Jan 23 16:56:31 crc kubenswrapper[4606]: I0123 16:56:31.229719 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kcsj7" Jan 23 16:56:31 crc kubenswrapper[4606]: I0123 16:56:31.272200 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kcsj7" Jan 23 16:56:31 crc kubenswrapper[4606]: I0123 16:56:31.696173 4606 generic.go:334] "Generic (PLEG): container finished" podID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerID="17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03" exitCode=0 Jan 23 16:56:31 crc kubenswrapper[4606]: I0123 16:56:31.696240 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" 
event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerDied","Data":"17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03"} Jan 23 16:56:31 crc kubenswrapper[4606]: I0123 16:56:31.698189 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qm92l" event={"ID":"b09d16f2-87a5-4163-96f6-2680cb605ce5","Type":"ContainerStarted","Data":"54c51cd5395d8f3216fb883edf18205b952190759b456e3269bbce7b39827dd3"} Jan 23 16:56:31 crc kubenswrapper[4606]: I0123 16:56:31.735538 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kcsj7" Jan 23 16:56:32 crc kubenswrapper[4606]: I0123 16:56:32.590707 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sv2zs" Jan 23 16:56:32 crc kubenswrapper[4606]: I0123 16:56:32.590750 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sv2zs" Jan 23 16:56:32 crc kubenswrapper[4606]: I0123 16:56:32.630225 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sv2zs" Jan 23 16:56:32 crc kubenswrapper[4606]: I0123 16:56:32.702894 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s6twp" podUID="416c8a19-3575-4434-813b-184508a997b9" containerName="registry-server" containerID="cri-o://daa224e0225ab995edd30afc487587663120827bb586936f295a156c04afc112" gracePeriod=2 Jan 23 16:56:32 crc kubenswrapper[4606]: I0123 16:56:32.743781 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sv2zs" Jan 23 16:56:32 crc kubenswrapper[4606]: I0123 16:56:32.761375 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qm92l" podStartSLOduration=5.41279528 podStartE2EDuration="1m10.761358058s" podCreationTimestamp="2026-01-23 16:55:22 +0000 UTC" firstStartedPulling="2026-01-23 16:55:24.604650771 +0000 UTC m=+158.604843177" lastFinishedPulling="2026-01-23 16:56:29.953213549 +0000 UTC m=+223.953405955" observedRunningTime="2026-01-23 16:56:32.729126115 +0000 UTC m=+226.729318521" watchObservedRunningTime="2026-01-23 16:56:32.761358058 +0000 UTC m=+226.761550474" Jan 23 16:56:33 crc kubenswrapper[4606]: I0123 16:56:33.080964 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qm92l" Jan 23 16:56:33 crc kubenswrapper[4606]: I0123 16:56:33.081027 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qm92l" Jan 23 16:56:33 crc kubenswrapper[4606]: I0123 16:56:33.712682 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerStarted","Data":"8cf4b07e2ad3950b42f8b972aeb9bf7966a5e6fa914bd47d93edb65756e33cc8"} Jan 23 16:56:33 crc kubenswrapper[4606]: I0123 16:56:33.719818 4606 generic.go:334] "Generic (PLEG): container finished" podID="416c8a19-3575-4434-813b-184508a997b9" containerID="daa224e0225ab995edd30afc487587663120827bb586936f295a156c04afc112" exitCode=0 Jan 23 16:56:33 crc kubenswrapper[4606]: I0123 16:56:33.719910 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s6twp" 
event={"ID":"416c8a19-3575-4434-813b-184508a997b9","Type":"ContainerDied","Data":"daa224e0225ab995edd30afc487587663120827bb586936f295a156c04afc112"} Jan 23 16:56:33 crc kubenswrapper[4606]: I0123 16:56:33.724107 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wsc4g" event={"ID":"f9a9cfbb-677a-458b-bb08-8220e122ccac","Type":"ContainerStarted","Data":"df420ebd8370c1db8c306ae3c12c6fed703675621faecced948691a63bec32fb"} Jan 23 16:56:33 crc kubenswrapper[4606]: I0123 16:56:33.946672 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s6twp" Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.042255 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/416c8a19-3575-4434-813b-184508a997b9-utilities\") pod \"416c8a19-3575-4434-813b-184508a997b9\" (UID: \"416c8a19-3575-4434-813b-184508a997b9\") " Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.042363 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnlcp\" (UniqueName: \"kubernetes.io/projected/416c8a19-3575-4434-813b-184508a997b9-kube-api-access-rnlcp\") pod \"416c8a19-3575-4434-813b-184508a997b9\" (UID: \"416c8a19-3575-4434-813b-184508a997b9\") " Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.042450 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/416c8a19-3575-4434-813b-184508a997b9-catalog-content\") pod \"416c8a19-3575-4434-813b-184508a997b9\" (UID: \"416c8a19-3575-4434-813b-184508a997b9\") " Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.043127 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/416c8a19-3575-4434-813b-184508a997b9-utilities" (OuterVolumeSpecName: "utilities") pod "416c8a19-3575-4434-813b-184508a997b9" (UID: "416c8a19-3575-4434-813b-184508a997b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.048755 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/416c8a19-3575-4434-813b-184508a997b9-kube-api-access-rnlcp" (OuterVolumeSpecName: "kube-api-access-rnlcp") pod "416c8a19-3575-4434-813b-184508a997b9" (UID: "416c8a19-3575-4434-813b-184508a997b9"). InnerVolumeSpecName "kube-api-access-rnlcp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.085765 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/416c8a19-3575-4434-813b-184508a997b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "416c8a19-3575-4434-813b-184508a997b9" (UID: "416c8a19-3575-4434-813b-184508a997b9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.116341 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qm92l" podUID="b09d16f2-87a5-4163-96f6-2680cb605ce5" containerName="registry-server" probeResult="failure" output=< Jan 23 16:56:34 crc kubenswrapper[4606]: timeout: failed to connect service ":50051" within 1s Jan 23 16:56:34 crc kubenswrapper[4606]: > Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.143621 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/416c8a19-3575-4434-813b-184508a997b9-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.143659 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnlcp\" (UniqueName: \"kubernetes.io/projected/416c8a19-3575-4434-813b-184508a997b9-kube-api-access-rnlcp\") on node \"crc\" DevicePath \"\"" Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.143685 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/416c8a19-3575-4434-813b-184508a997b9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.744450 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ks7wv"] Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.745771 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s6twp" event={"ID":"416c8a19-3575-4434-813b-184508a997b9","Type":"ContainerDied","Data":"33cd36233d48fa9ca012639ec3257f28e295503dfd6cd56222e01421350bc62f"} Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.745331 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s6twp" Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.745923 4606 scope.go:117] "RemoveContainer" containerID="daa224e0225ab995edd30afc487587663120827bb586936f295a156c04afc112" Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.746621 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ks7wv" podUID="83216fef-4300-422a-93d0-e43bbfd0126c" containerName="registry-server" containerID="cri-o://453fef7fd3d6de28261d5337427b616ef2c32927d46c2626c7aa2a03bf4c6817" gracePeriod=30 Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.748090 4606 generic.go:334] "Generic (PLEG): container finished" podID="f9a9cfbb-677a-458b-bb08-8220e122ccac" containerID="df420ebd8370c1db8c306ae3c12c6fed703675621faecced948691a63bec32fb" exitCode=0 Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.748441 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wsc4g" event={"ID":"f9a9cfbb-677a-458b-bb08-8220e122ccac","Type":"ContainerDied","Data":"df420ebd8370c1db8c306ae3c12c6fed703675621faecced948691a63bec32fb"} Jan 23 16:56:34 crc kubenswrapper[4606]: E0123 16:56:34.756814 4606 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="453fef7fd3d6de28261d5337427b616ef2c32927d46c2626c7aa2a03bf4c6817" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 16:56:34 crc kubenswrapper[4606]: E0123 16:56:34.762419 4606 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="453fef7fd3d6de28261d5337427b616ef2c32927d46c2626c7aa2a03bf4c6817" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.763186 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gvbpj"] Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.763372 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gvbpj" podUID="395d94ec-ce71-4b6a-b503-53501da7cf17" containerName="registry-server" containerID="cri-o://b1aa2f44cf516ee866c3a0f0aad58c4cbae06f72a6bc092ac17de82f4f382ba2" gracePeriod=30 Jan 23 16:56:34 crc kubenswrapper[4606]: E0123 16:56:34.775221 4606 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="453fef7fd3d6de28261d5337427b616ef2c32927d46c2626c7aa2a03bf4c6817" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 16:56:34 crc kubenswrapper[4606]: E0123 16:56:34.775284 4606 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/certified-operators-ks7wv" podUID="83216fef-4300-422a-93d0-e43bbfd0126c" containerName="registry-server" Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.778819 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wsc4g"] Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.778966 4606 scope.go:117] 
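
The failing ExecSync calls above are the registry-server pods' exec probes: kubelet runs grpc_health_probe -addr=:50051 inside the container, and the exec fails once the container is stopping ("cannot register an exec PID"). What that binary performs is a single grpc.health.v1.Health/Check RPC; roughly equivalent client code in Go, with the address and one-second timeout taken from the probe output in this log and the rest assumed:

    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()

        // Dial the same address the exec probe targets (-addr=:50051).
        // grpc.NewClient requires grpc-go v1.63+; older code uses grpc.Dial.
        conn, err := grpc.NewClient("localhost:50051",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatalf("dial: %v", err)
        }
        defer conn.Close()

        // grpc.health.v1.Health/Check is the RPC grpc_health_probe issues.
        resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
        if err != nil || resp.GetStatus() != healthpb.HealthCheckResponse_SERVING {
            log.Fatalf("unhealthy: status=%v err=%v", resp.GetStatus(), err)
        }
        log.Println("SERVING")
    }
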
"RemoveContainer" containerID="44a36a7a75e5c9de02225d429cb37e6603b698053c9e4fa167402a0e7b05fe96" Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.785823 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fzn5r"] Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.786108 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-fzn5r" podUID="8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac" containerName="marketplace-operator" containerID="cri-o://64209d73b38128d55e8ba89e338d5eaf9878fdbe31ac3ed4fd713164d9c2ad58" gracePeriod=30 Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.803261 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kcsj7"] Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.803485 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kcsj7" podUID="ac940a5b-420a-40f5-a0a2-fe76fb5f8400" containerName="registry-server" containerID="cri-o://8f99252f600ebd373690208d3d8db5098974fc8dc2407a68be585c75b8abfec0" gracePeriod=30 Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.807350 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cdqb6"] Jan 23 16:56:34 crc kubenswrapper[4606]: E0123 16:56:34.807574 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc241014-a654-4aa4-a5e0-5b83e54224fa" containerName="extract-utilities" Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.815877 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc241014-a654-4aa4-a5e0-5b83e54224fa" containerName="extract-utilities" Jan 23 16:56:34 crc kubenswrapper[4606]: E0123 16:56:34.815920 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="416c8a19-3575-4434-813b-184508a997b9" containerName="extract-utilities" Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.815927 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="416c8a19-3575-4434-813b-184508a997b9" containerName="extract-utilities" Jan 23 16:56:34 crc kubenswrapper[4606]: E0123 16:56:34.815944 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc241014-a654-4aa4-a5e0-5b83e54224fa" containerName="extract-content" Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.815950 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc241014-a654-4aa4-a5e0-5b83e54224fa" containerName="extract-content" Jan 23 16:56:34 crc kubenswrapper[4606]: E0123 16:56:34.815958 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="416c8a19-3575-4434-813b-184508a997b9" containerName="extract-content" Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.815964 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="416c8a19-3575-4434-813b-184508a997b9" containerName="extract-content" Jan 23 16:56:34 crc kubenswrapper[4606]: E0123 16:56:34.815973 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="416c8a19-3575-4434-813b-184508a997b9" containerName="registry-server" Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.815984 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="416c8a19-3575-4434-813b-184508a997b9" containerName="registry-server" Jan 23 16:56:34 crc kubenswrapper[4606]: E0123 16:56:34.815995 4606 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a50c2ed1-1904-4832-84c8-7008e4d85fb7" containerName="pruner" Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.816000 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="a50c2ed1-1904-4832-84c8-7008e4d85fb7" containerName="pruner" Jan 23 16:56:34 crc kubenswrapper[4606]: E0123 16:56:34.816015 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc241014-a654-4aa4-a5e0-5b83e54224fa" containerName="registry-server" Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.816021 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc241014-a654-4aa4-a5e0-5b83e54224fa" containerName="registry-server" Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.816179 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="416c8a19-3575-4434-813b-184508a997b9" containerName="registry-server" Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.816194 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="a50c2ed1-1904-4832-84c8-7008e4d85fb7" containerName="pruner" Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.816203 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc241014-a654-4aa4-a5e0-5b83e54224fa" containerName="registry-server" Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.814379 4606 scope.go:117] "RemoveContainer" containerID="cfd35136f0b9bac1c851581037ddf07cc55349ec8bbda034d4fbe3b31ef477a3" Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.816720 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cdqb6" Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.839918 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qm92l"] Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.840188 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qm92l" podUID="b09d16f2-87a5-4163-96f6-2680cb605ce5" containerName="registry-server" containerID="cri-o://54c51cd5395d8f3216fb883edf18205b952190759b456e3269bbce7b39827dd3" gracePeriod=30 Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.846217 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sv2zs"] Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.850016 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sv2zs" podUID="2fcc3539-bb51-4397-9b20-975529f7d4ed" containerName="registry-server" containerID="cri-o://a2cdf41df445e1eac6b04bd7080fe69c7bbdc9059988b6aa6b1827e80d0d6152" gracePeriod=30 Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.869734 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cdqb6"] Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.885947 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s6twp"] Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.893012 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s6twp"] Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.894573 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9110ffb1-da7c-4a1b-95b4-28f757be8ace-marketplace-operator-metrics\") pod 
\"marketplace-operator-79b997595-cdqb6\" (UID: \"9110ffb1-da7c-4a1b-95b4-28f757be8ace\") " pod="openshift-marketplace/marketplace-operator-79b997595-cdqb6" Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.894689 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9110ffb1-da7c-4a1b-95b4-28f757be8ace-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cdqb6\" (UID: \"9110ffb1-da7c-4a1b-95b4-28f757be8ace\") " pod="openshift-marketplace/marketplace-operator-79b997595-cdqb6" Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.894720 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xllc2\" (UniqueName: \"kubernetes.io/projected/9110ffb1-da7c-4a1b-95b4-28f757be8ace-kube-api-access-xllc2\") pod \"marketplace-operator-79b997595-cdqb6\" (UID: \"9110ffb1-da7c-4a1b-95b4-28f757be8ace\") " pod="openshift-marketplace/marketplace-operator-79b997595-cdqb6" Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.996390 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9110ffb1-da7c-4a1b-95b4-28f757be8ace-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cdqb6\" (UID: \"9110ffb1-da7c-4a1b-95b4-28f757be8ace\") " pod="openshift-marketplace/marketplace-operator-79b997595-cdqb6" Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.996445 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xllc2\" (UniqueName: \"kubernetes.io/projected/9110ffb1-da7c-4a1b-95b4-28f757be8ace-kube-api-access-xllc2\") pod \"marketplace-operator-79b997595-cdqb6\" (UID: \"9110ffb1-da7c-4a1b-95b4-28f757be8ace\") " pod="openshift-marketplace/marketplace-operator-79b997595-cdqb6" Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.996528 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9110ffb1-da7c-4a1b-95b4-28f757be8ace-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cdqb6\" (UID: \"9110ffb1-da7c-4a1b-95b4-28f757be8ace\") " pod="openshift-marketplace/marketplace-operator-79b997595-cdqb6" Jan 23 16:56:34 crc kubenswrapper[4606]: I0123 16:56:34.997892 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9110ffb1-da7c-4a1b-95b4-28f757be8ace-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cdqb6\" (UID: \"9110ffb1-da7c-4a1b-95b4-28f757be8ace\") " pod="openshift-marketplace/marketplace-operator-79b997595-cdqb6" Jan 23 16:56:35 crc kubenswrapper[4606]: I0123 16:56:35.005322 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9110ffb1-da7c-4a1b-95b4-28f757be8ace-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cdqb6\" (UID: \"9110ffb1-da7c-4a1b-95b4-28f757be8ace\") " pod="openshift-marketplace/marketplace-operator-79b997595-cdqb6" Jan 23 16:56:35 crc kubenswrapper[4606]: I0123 16:56:35.015746 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xllc2\" (UniqueName: \"kubernetes.io/projected/9110ffb1-da7c-4a1b-95b4-28f757be8ace-kube-api-access-xllc2\") pod \"marketplace-operator-79b997595-cdqb6\" (UID: 
\"9110ffb1-da7c-4a1b-95b4-28f757be8ace\") " pod="openshift-marketplace/marketplace-operator-79b997595-cdqb6" Jan 23 16:56:35 crc kubenswrapper[4606]: I0123 16:56:35.169805 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cdqb6" Jan 23 16:56:35 crc kubenswrapper[4606]: I0123 16:56:35.492968 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vfv9l"] Jan 23 16:56:35 crc kubenswrapper[4606]: I0123 16:56:35.620610 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cdqb6"] Jan 23 16:56:35 crc kubenswrapper[4606]: W0123 16:56:35.723602 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9110ffb1_da7c_4a1b_95b4_28f757be8ace.slice/crio-5d0f67e3ff8a0ef6675cf5f5430fb59705724a0f89045c91c1f2ea79a798a482 WatchSource:0}: Error finding container 5d0f67e3ff8a0ef6675cf5f5430fb59705724a0f89045c91c1f2ea79a798a482: Status 404 returned error can't find the container with id 5d0f67e3ff8a0ef6675cf5f5430fb59705724a0f89045c91c1f2ea79a798a482 Jan 23 16:56:35 crc kubenswrapper[4606]: I0123 16:56:35.766086 4606 generic.go:334] "Generic (PLEG): container finished" podID="2fcc3539-bb51-4397-9b20-975529f7d4ed" containerID="a2cdf41df445e1eac6b04bd7080fe69c7bbdc9059988b6aa6b1827e80d0d6152" exitCode=0 Jan 23 16:56:35 crc kubenswrapper[4606]: I0123 16:56:35.766134 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv2zs" event={"ID":"2fcc3539-bb51-4397-9b20-975529f7d4ed","Type":"ContainerDied","Data":"a2cdf41df445e1eac6b04bd7080fe69c7bbdc9059988b6aa6b1827e80d0d6152"} Jan 23 16:56:35 crc kubenswrapper[4606]: I0123 16:56:35.769394 4606 generic.go:334] "Generic (PLEG): container finished" podID="83216fef-4300-422a-93d0-e43bbfd0126c" containerID="453fef7fd3d6de28261d5337427b616ef2c32927d46c2626c7aa2a03bf4c6817" exitCode=0 Jan 23 16:56:35 crc kubenswrapper[4606]: I0123 16:56:35.769456 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ks7wv" event={"ID":"83216fef-4300-422a-93d0-e43bbfd0126c","Type":"ContainerDied","Data":"453fef7fd3d6de28261d5337427b616ef2c32927d46c2626c7aa2a03bf4c6817"} Jan 23 16:56:35 crc kubenswrapper[4606]: I0123 16:56:35.778502 4606 generic.go:334] "Generic (PLEG): container finished" podID="395d94ec-ce71-4b6a-b503-53501da7cf17" containerID="b1aa2f44cf516ee866c3a0f0aad58c4cbae06f72a6bc092ac17de82f4f382ba2" exitCode=0 Jan 23 16:56:35 crc kubenswrapper[4606]: I0123 16:56:35.778566 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gvbpj" event={"ID":"395d94ec-ce71-4b6a-b503-53501da7cf17","Type":"ContainerDied","Data":"b1aa2f44cf516ee866c3a0f0aad58c4cbae06f72a6bc092ac17de82f4f382ba2"} Jan 23 16:56:35 crc kubenswrapper[4606]: I0123 16:56:35.782501 4606 generic.go:334] "Generic (PLEG): container finished" podID="ac940a5b-420a-40f5-a0a2-fe76fb5f8400" containerID="8f99252f600ebd373690208d3d8db5098974fc8dc2407a68be585c75b8abfec0" exitCode=0 Jan 23 16:56:35 crc kubenswrapper[4606]: I0123 16:56:35.782547 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kcsj7" event={"ID":"ac940a5b-420a-40f5-a0a2-fe76fb5f8400","Type":"ContainerDied","Data":"8f99252f600ebd373690208d3d8db5098974fc8dc2407a68be585c75b8abfec0"} Jan 23 16:56:35 crc 
kubenswrapper[4606]: I0123 16:56:35.784274 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cdqb6" event={"ID":"9110ffb1-da7c-4a1b-95b4-28f757be8ace","Type":"ContainerStarted","Data":"5d0f67e3ff8a0ef6675cf5f5430fb59705724a0f89045c91c1f2ea79a798a482"} Jan 23 16:56:35 crc kubenswrapper[4606]: I0123 16:56:35.786724 4606 generic.go:334] "Generic (PLEG): container finished" podID="8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac" containerID="64209d73b38128d55e8ba89e338d5eaf9878fdbe31ac3ed4fd713164d9c2ad58" exitCode=0 Jan 23 16:56:35 crc kubenswrapper[4606]: I0123 16:56:35.786773 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fzn5r" event={"ID":"8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac","Type":"ContainerDied","Data":"64209d73b38128d55e8ba89e338d5eaf9878fdbe31ac3ed4fd713164d9c2ad58"} Jan 23 16:56:35 crc kubenswrapper[4606]: I0123 16:56:35.789517 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qm92l_b09d16f2-87a5-4163-96f6-2680cb605ce5/registry-server/0.log" Jan 23 16:56:35 crc kubenswrapper[4606]: I0123 16:56:35.790431 4606 generic.go:334] "Generic (PLEG): container finished" podID="b09d16f2-87a5-4163-96f6-2680cb605ce5" containerID="54c51cd5395d8f3216fb883edf18205b952190759b456e3269bbce7b39827dd3" exitCode=1 Jan 23 16:56:35 crc kubenswrapper[4606]: I0123 16:56:35.790459 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qm92l" event={"ID":"b09d16f2-87a5-4163-96f6-2680cb605ce5","Type":"ContainerDied","Data":"54c51cd5395d8f3216fb883edf18205b952190759b456e3269bbce7b39827dd3"} Jan 23 16:56:35 crc kubenswrapper[4606]: I0123 16:56:35.910885 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qm92l_b09d16f2-87a5-4163-96f6-2680cb605ce5/registry-server/0.log" Jan 23 16:56:35 crc kubenswrapper[4606]: I0123 16:56:35.917464 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qm92l" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.011384 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b09d16f2-87a5-4163-96f6-2680cb605ce5-utilities\") pod \"b09d16f2-87a5-4163-96f6-2680cb605ce5\" (UID: \"b09d16f2-87a5-4163-96f6-2680cb605ce5\") " Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.011422 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b09d16f2-87a5-4163-96f6-2680cb605ce5-catalog-content\") pod \"b09d16f2-87a5-4163-96f6-2680cb605ce5\" (UID: \"b09d16f2-87a5-4163-96f6-2680cb605ce5\") " Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.011495 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8tbf\" (UniqueName: \"kubernetes.io/projected/b09d16f2-87a5-4163-96f6-2680cb605ce5-kube-api-access-v8tbf\") pod \"b09d16f2-87a5-4163-96f6-2680cb605ce5\" (UID: \"b09d16f2-87a5-4163-96f6-2680cb605ce5\") " Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.013054 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b09d16f2-87a5-4163-96f6-2680cb605ce5-utilities" (OuterVolumeSpecName: "utilities") pod "b09d16f2-87a5-4163-96f6-2680cb605ce5" (UID: "b09d16f2-87a5-4163-96f6-2680cb605ce5"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.019747 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b09d16f2-87a5-4163-96f6-2680cb605ce5-kube-api-access-v8tbf" (OuterVolumeSpecName: "kube-api-access-v8tbf") pod "b09d16f2-87a5-4163-96f6-2680cb605ce5" (UID: "b09d16f2-87a5-4163-96f6-2680cb605ce5"). InnerVolumeSpecName "kube-api-access-v8tbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.030810 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sv2zs" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.032301 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kcsj7" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.062865 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fzn5r" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.122431 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fcc3539-bb51-4397-9b20-975529f7d4ed-catalog-content\") pod \"2fcc3539-bb51-4397-9b20-975529f7d4ed\" (UID: \"2fcc3539-bb51-4397-9b20-975529f7d4ed\") " Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.122523 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac940a5b-420a-40f5-a0a2-fe76fb5f8400-catalog-content\") pod \"ac940a5b-420a-40f5-a0a2-fe76fb5f8400\" (UID: \"ac940a5b-420a-40f5-a0a2-fe76fb5f8400\") " Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.138986 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac940a5b-420a-40f5-a0a2-fe76fb5f8400-kube-api-access-wxbtb" (OuterVolumeSpecName: "kube-api-access-wxbtb") pod "ac940a5b-420a-40f5-a0a2-fe76fb5f8400" (UID: "ac940a5b-420a-40f5-a0a2-fe76fb5f8400"). InnerVolumeSpecName "kube-api-access-wxbtb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.185812 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxbtb\" (UniqueName: \"kubernetes.io/projected/ac940a5b-420a-40f5-a0a2-fe76fb5f8400-kube-api-access-wxbtb\") pod \"ac940a5b-420a-40f5-a0a2-fe76fb5f8400\" (UID: \"ac940a5b-420a-40f5-a0a2-fe76fb5f8400\") " Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.185900 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq678\" (UniqueName: \"kubernetes.io/projected/2fcc3539-bb51-4397-9b20-975529f7d4ed-kube-api-access-xq678\") pod \"2fcc3539-bb51-4397-9b20-975529f7d4ed\" (UID: \"2fcc3539-bb51-4397-9b20-975529f7d4ed\") " Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.185934 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac940a5b-420a-40f5-a0a2-fe76fb5f8400-utilities\") pod \"ac940a5b-420a-40f5-a0a2-fe76fb5f8400\" (UID: \"ac940a5b-420a-40f5-a0a2-fe76fb5f8400\") " Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.185951 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fcc3539-bb51-4397-9b20-975529f7d4ed-utilities\") pod \"2fcc3539-bb51-4397-9b20-975529f7d4ed\" (UID: \"2fcc3539-bb51-4397-9b20-975529f7d4ed\") " Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.185979 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jl4c\" (UniqueName: \"kubernetes.io/projected/8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac-kube-api-access-6jl4c\") pod \"8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac\" (UID: \"8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac\") " Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.186315 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b09d16f2-87a5-4163-96f6-2680cb605ce5-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.186329 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxbtb\" (UniqueName: \"kubernetes.io/projected/ac940a5b-420a-40f5-a0a2-fe76fb5f8400-kube-api-access-wxbtb\") on node \"crc\" DevicePath \"\"" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.186346 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8tbf\" (UniqueName: \"kubernetes.io/projected/b09d16f2-87a5-4163-96f6-2680cb605ce5-kube-api-access-v8tbf\") on node \"crc\" DevicePath \"\"" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.187209 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac940a5b-420a-40f5-a0a2-fe76fb5f8400-utilities" (OuterVolumeSpecName: "utilities") pod "ac940a5b-420a-40f5-a0a2-fe76fb5f8400" (UID: "ac940a5b-420a-40f5-a0a2-fe76fb5f8400"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.187453 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fcc3539-bb51-4397-9b20-975529f7d4ed-utilities" (OuterVolumeSpecName: "utilities") pod "2fcc3539-bb51-4397-9b20-975529f7d4ed" (UID: "2fcc3539-bb51-4397-9b20-975529f7d4ed"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.192903 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fcc3539-bb51-4397-9b20-975529f7d4ed-kube-api-access-xq678" (OuterVolumeSpecName: "kube-api-access-xq678") pod "2fcc3539-bb51-4397-9b20-975529f7d4ed" (UID: "2fcc3539-bb51-4397-9b20-975529f7d4ed"). InnerVolumeSpecName "kube-api-access-xq678". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.235636 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac-kube-api-access-6jl4c" (OuterVolumeSpecName: "kube-api-access-6jl4c") pod "8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac" (UID: "8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac"). InnerVolumeSpecName "kube-api-access-6jl4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.251627 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac940a5b-420a-40f5-a0a2-fe76fb5f8400-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac940a5b-420a-40f5-a0a2-fe76fb5f8400" (UID: "ac940a5b-420a-40f5-a0a2-fe76fb5f8400"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.266819 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b09d16f2-87a5-4163-96f6-2680cb605ce5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b09d16f2-87a5-4163-96f6-2680cb605ce5" (UID: "b09d16f2-87a5-4163-96f6-2680cb605ce5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.289258 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac-marketplace-trusted-ca\") pod \"8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac\" (UID: \"8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac\") " Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.289680 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac" (UID: "8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.289688 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac-marketplace-operator-metrics\") pod \"8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac\" (UID: \"8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac\") " Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.289993 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b09d16f2-87a5-4163-96f6-2680cb605ce5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.290013 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac940a5b-420a-40f5-a0a2-fe76fb5f8400-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.290022 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xq678\" (UniqueName: \"kubernetes.io/projected/2fcc3539-bb51-4397-9b20-975529f7d4ed-kube-api-access-xq678\") on node \"crc\" DevicePath \"\"" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.290035 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac940a5b-420a-40f5-a0a2-fe76fb5f8400-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.290043 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fcc3539-bb51-4397-9b20-975529f7d4ed-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.290052 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jl4c\" (UniqueName: \"kubernetes.io/projected/8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac-kube-api-access-6jl4c\") on node \"crc\" DevicePath \"\"" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.290060 4606 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.302165 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac" (UID: "8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.323666 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ks7wv" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.366378 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fcc3539-bb51-4397-9b20-975529f7d4ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2fcc3539-bb51-4397-9b20-975529f7d4ed" (UID: "2fcc3539-bb51-4397-9b20-975529f7d4ed"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.375852 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="416c8a19-3575-4434-813b-184508a997b9" path="/var/lib/kubelet/pods/416c8a19-3575-4434-813b-184508a997b9/volumes" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.390512 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83216fef-4300-422a-93d0-e43bbfd0126c-catalog-content\") pod \"83216fef-4300-422a-93d0-e43bbfd0126c\" (UID: \"83216fef-4300-422a-93d0-e43bbfd0126c\") " Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.390630 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwxbh\" (UniqueName: \"kubernetes.io/projected/83216fef-4300-422a-93d0-e43bbfd0126c-kube-api-access-wwxbh\") pod \"83216fef-4300-422a-93d0-e43bbfd0126c\" (UID: \"83216fef-4300-422a-93d0-e43bbfd0126c\") " Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.390715 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83216fef-4300-422a-93d0-e43bbfd0126c-utilities\") pod \"83216fef-4300-422a-93d0-e43bbfd0126c\" (UID: \"83216fef-4300-422a-93d0-e43bbfd0126c\") " Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.390958 4606 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.390977 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fcc3539-bb51-4397-9b20-975529f7d4ed-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.391764 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83216fef-4300-422a-93d0-e43bbfd0126c-utilities" (OuterVolumeSpecName: "utilities") pod "83216fef-4300-422a-93d0-e43bbfd0126c" (UID: "83216fef-4300-422a-93d0-e43bbfd0126c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.396347 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83216fef-4300-422a-93d0-e43bbfd0126c-kube-api-access-wwxbh" (OuterVolumeSpecName: "kube-api-access-wwxbh") pod "83216fef-4300-422a-93d0-e43bbfd0126c" (UID: "83216fef-4300-422a-93d0-e43bbfd0126c"). InnerVolumeSpecName "kube-api-access-wwxbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.456259 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83216fef-4300-422a-93d0-e43bbfd0126c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83216fef-4300-422a-93d0-e43bbfd0126c" (UID: "83216fef-4300-422a-93d0-e43bbfd0126c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.494258 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83216fef-4300-422a-93d0-e43bbfd0126c-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.494298 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83216fef-4300-422a-93d0-e43bbfd0126c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.494308 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwxbh\" (UniqueName: \"kubernetes.io/projected/83216fef-4300-422a-93d0-e43bbfd0126c-kube-api-access-wwxbh\") on node \"crc\" DevicePath \"\"" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.615267 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qm92l"] Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.721044 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gvbpj" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.796514 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9fd4\" (UniqueName: \"kubernetes.io/projected/395d94ec-ce71-4b6a-b503-53501da7cf17-kube-api-access-h9fd4\") pod \"395d94ec-ce71-4b6a-b503-53501da7cf17\" (UID: \"395d94ec-ce71-4b6a-b503-53501da7cf17\") " Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.796563 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/395d94ec-ce71-4b6a-b503-53501da7cf17-utilities\") pod \"395d94ec-ce71-4b6a-b503-53501da7cf17\" (UID: \"395d94ec-ce71-4b6a-b503-53501da7cf17\") " Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.796643 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/395d94ec-ce71-4b6a-b503-53501da7cf17-catalog-content\") pod \"395d94ec-ce71-4b6a-b503-53501da7cf17\" (UID: \"395d94ec-ce71-4b6a-b503-53501da7cf17\") " Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.799603 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/395d94ec-ce71-4b6a-b503-53501da7cf17-utilities" (OuterVolumeSpecName: "utilities") pod "395d94ec-ce71-4b6a-b503-53501da7cf17" (UID: "395d94ec-ce71-4b6a-b503-53501da7cf17"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.801003 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/395d94ec-ce71-4b6a-b503-53501da7cf17-kube-api-access-h9fd4" (OuterVolumeSpecName: "kube-api-access-h9fd4") pod "395d94ec-ce71-4b6a-b503-53501da7cf17" (UID: "395d94ec-ce71-4b6a-b503-53501da7cf17"). InnerVolumeSpecName "kube-api-access-h9fd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.802214 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qm92l_b09d16f2-87a5-4163-96f6-2680cb605ce5/registry-server/0.log" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.802973 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qm92l" event={"ID":"b09d16f2-87a5-4163-96f6-2680cb605ce5","Type":"ContainerDied","Data":"33d390dc9c90342c6cf70648c3705193905faa0604105e0a6b241681e43de0c7"} Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.803030 4606 scope.go:117] "RemoveContainer" containerID="54c51cd5395d8f3216fb883edf18205b952190759b456e3269bbce7b39827dd3" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.803148 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qm92l" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.812571 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sv2zs" event={"ID":"2fcc3539-bb51-4397-9b20-975529f7d4ed","Type":"ContainerDied","Data":"4268118a968bd7b305fdc00d5a9ad3ccfc4882b4f1f61813215fbbae2084d15b"} Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.813432 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sv2zs" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.826440 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ks7wv" event={"ID":"83216fef-4300-422a-93d0-e43bbfd0126c","Type":"ContainerDied","Data":"32410be3a66b1dd531f31cdc42216d7272097bdaa136dbeeea273cd2febc4101"} Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.826560 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ks7wv" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.831797 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qm92l"] Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.836177 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gvbpj" event={"ID":"395d94ec-ce71-4b6a-b503-53501da7cf17","Type":"ContainerDied","Data":"45514ea185d4974cd826e316b02a49166f03396b48730c67efd3cacdb63f765d"} Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.836349 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gvbpj" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.841617 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qm92l"] Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.845904 4606 scope.go:117] "RemoveContainer" containerID="3ef130053f4cde549fe1752dcb95dcbcc3760f53c4ba7e735e85f79f07d5b8b8" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.849133 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kcsj7" event={"ID":"ac940a5b-420a-40f5-a0a2-fe76fb5f8400","Type":"ContainerDied","Data":"7543e243e56850490ceceaa057f65eae81dac07607b95d0a4fd47b675cff4a4e"} Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.849384 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kcsj7" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.855220 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cdqb6" event={"ID":"9110ffb1-da7c-4a1b-95b4-28f757be8ace","Type":"ContainerStarted","Data":"9433b6beddea9f6dcb24c32a229cdaba021f722863a78a3283372f8711f0910e"} Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.856367 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-cdqb6" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.860155 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sv2zs"] Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.868389 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wsc4g" event={"ID":"f9a9cfbb-677a-458b-bb08-8220e122ccac","Type":"ContainerStarted","Data":"99d8b2e61692591fd4ed50f2dcb3dbb418165e800dd7c363915a2af9c24e09e2"} Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.868466 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wsc4g" podUID="f9a9cfbb-677a-458b-bb08-8220e122ccac" containerName="registry-server" containerID="cri-o://99d8b2e61692591fd4ed50f2dcb3dbb418165e800dd7c363915a2af9c24e09e2" gracePeriod=30 Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.877570 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fzn5r" event={"ID":"8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac","Type":"ContainerDied","Data":"49b21411833f170fa6c3694e598bbece8a002db80795d76e006fa6f7a34af1a4"} Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.877872 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fzn5r" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.881870 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/395d94ec-ce71-4b6a-b503-53501da7cf17-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "395d94ec-ce71-4b6a-b503-53501da7cf17" (UID: "395d94ec-ce71-4b6a-b503-53501da7cf17"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.883114 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-cdqb6" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.899546 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/395d94ec-ce71-4b6a-b503-53501da7cf17-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.899578 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9fd4\" (UniqueName: \"kubernetes.io/projected/395d94ec-ce71-4b6a-b503-53501da7cf17-kube-api-access-h9fd4\") on node \"crc\" DevicePath \"\"" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.899616 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/395d94ec-ce71-4b6a-b503-53501da7cf17-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.908885 4606 scope.go:117] "RemoveContainer" containerID="e69f4e3924e1a7c1a0840570e50e70f71e4acdb85ce4d63199962e8868ba47d4" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.920149 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sv2zs"] Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.934024 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kcsj7"] Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.936683 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kcsj7"] Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.938435 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-cdqb6" podStartSLOduration=2.938418752 podStartE2EDuration="2.938418752s" podCreationTimestamp="2026-01-23 16:56:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:56:36.909349589 +0000 UTC m=+230.909541995" watchObservedRunningTime="2026-01-23 16:56:36.938418752 +0000 UTC m=+230.938611158" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.944016 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ks7wv"] Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.946872 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ks7wv"] Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.955667 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wsc4g" podStartSLOduration=4.737767432 podStartE2EDuration="1m17.9556474s" podCreationTimestamp="2026-01-23 16:55:19 +0000 UTC" firstStartedPulling="2026-01-23 16:55:22.54622326 +0000 UTC m=+156.546415666" lastFinishedPulling="2026-01-23 16:56:35.764103228 +0000 UTC m=+229.764295634" observedRunningTime="2026-01-23 16:56:36.952933653 +0000 UTC m=+230.953126059" watchObservedRunningTime="2026-01-23 16:56:36.9556474 +0000 UTC m=+230.955839816" Jan 23 16:56:36 crc kubenswrapper[4606]: I0123 16:56:36.974922 4606 scope.go:117] "RemoveContainer" containerID="a2cdf41df445e1eac6b04bd7080fe69c7bbdc9059988b6aa6b1827e80d0d6152" Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 
16:56:37.014725 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fzn5r"] Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.017238 4606 scope.go:117] "RemoveContainer" containerID="0ba871fec8bf8e088fbeb71d3d0010df8ab8bd4f96bf5ccf8cbfeb96695f6768" Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.020328 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fzn5r"] Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.064250 4606 scope.go:117] "RemoveContainer" containerID="1c2110b3c415004a9042c687050f15d8e2df78627f73fda53c1f914d3278b790" Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.078379 4606 scope.go:117] "RemoveContainer" containerID="453fef7fd3d6de28261d5337427b616ef2c32927d46c2626c7aa2a03bf4c6817" Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.099241 4606 scope.go:117] "RemoveContainer" containerID="081cb0f890396a917f4c73201fc59e07c065582ece1c64a53700a225205ad720" Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.119374 4606 scope.go:117] "RemoveContainer" containerID="194b171b094850031560ac0c9c8b85fe8fc2db7f470f86ac942c8bac3702c8be" Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.165897 4606 scope.go:117] "RemoveContainer" containerID="b1aa2f44cf516ee866c3a0f0aad58c4cbae06f72a6bc092ac17de82f4f382ba2" Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.181116 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gvbpj"] Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.187233 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gvbpj"] Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.192763 4606 scope.go:117] "RemoveContainer" containerID="58edb5f5e3b0aca600fe129481ceaf6d0321c9e9155334ad4950f25a89611a46" Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.215674 4606 scope.go:117] "RemoveContainer" containerID="d6f6fe87a024f91493889ad34075f4a124631c4a7499800a15d3f3221b21a304" Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.250795 4606 scope.go:117] "RemoveContainer" containerID="8f99252f600ebd373690208d3d8db5098974fc8dc2407a68be585c75b8abfec0" Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.251390 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wsc4g_f9a9cfbb-677a-458b-bb08-8220e122ccac/registry-server/0.log" Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.252209 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wsc4g" Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.264149 4606 scope.go:117] "RemoveContainer" containerID="2f77163a46cb2488386b5ddac49aeeb0cb06815effcceff65489c905658e5a58" Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.286897 4606 scope.go:117] "RemoveContainer" containerID="32163c37fa06b8dec0e21e808158dc33442d2b767306c3a96e63183dcc71547a" Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.303868 4606 scope.go:117] "RemoveContainer" containerID="64209d73b38128d55e8ba89e338d5eaf9878fdbe31ac3ed4fd713164d9c2ad58" Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.304506 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9a9cfbb-677a-458b-bb08-8220e122ccac-catalog-content\") pod \"f9a9cfbb-677a-458b-bb08-8220e122ccac\" (UID: \"f9a9cfbb-677a-458b-bb08-8220e122ccac\") " Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.304638 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9a9cfbb-677a-458b-bb08-8220e122ccac-utilities\") pod \"f9a9cfbb-677a-458b-bb08-8220e122ccac\" (UID: \"f9a9cfbb-677a-458b-bb08-8220e122ccac\") " Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.304956 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrhrc\" (UniqueName: \"kubernetes.io/projected/f9a9cfbb-677a-458b-bb08-8220e122ccac-kube-api-access-xrhrc\") pod \"f9a9cfbb-677a-458b-bb08-8220e122ccac\" (UID: \"f9a9cfbb-677a-458b-bb08-8220e122ccac\") " Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.305898 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9a9cfbb-677a-458b-bb08-8220e122ccac-utilities" (OuterVolumeSpecName: "utilities") pod "f9a9cfbb-677a-458b-bb08-8220e122ccac" (UID: "f9a9cfbb-677a-458b-bb08-8220e122ccac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.306129 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9a9cfbb-677a-458b-bb08-8220e122ccac-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.308041 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9a9cfbb-677a-458b-bb08-8220e122ccac-kube-api-access-xrhrc" (OuterVolumeSpecName: "kube-api-access-xrhrc") pod "f9a9cfbb-677a-458b-bb08-8220e122ccac" (UID: "f9a9cfbb-677a-458b-bb08-8220e122ccac"). InnerVolumeSpecName "kube-api-access-xrhrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.362106 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9a9cfbb-677a-458b-bb08-8220e122ccac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9a9cfbb-677a-458b-bb08-8220e122ccac" (UID: "f9a9cfbb-677a-458b-bb08-8220e122ccac"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.407914 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9a9cfbb-677a-458b-bb08-8220e122ccac-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.407958 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrhrc\" (UniqueName: \"kubernetes.io/projected/f9a9cfbb-677a-458b-bb08-8220e122ccac-kube-api-access-xrhrc\") on node \"crc\" DevicePath \"\"" Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.904161 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wsc4g_f9a9cfbb-677a-458b-bb08-8220e122ccac/registry-server/0.log" Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.905216 4606 generic.go:334] "Generic (PLEG): container finished" podID="f9a9cfbb-677a-458b-bb08-8220e122ccac" containerID="99d8b2e61692591fd4ed50f2dcb3dbb418165e800dd7c363915a2af9c24e09e2" exitCode=1 Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.905873 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wsc4g" Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.909696 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wsc4g" event={"ID":"f9a9cfbb-677a-458b-bb08-8220e122ccac","Type":"ContainerDied","Data":"99d8b2e61692591fd4ed50f2dcb3dbb418165e800dd7c363915a2af9c24e09e2"} Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.909736 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wsc4g" event={"ID":"f9a9cfbb-677a-458b-bb08-8220e122ccac","Type":"ContainerDied","Data":"72db072a074645acefe4bbfd843c5d7dee49a2f60cb19351106ab16295cc7230"} Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.909758 4606 scope.go:117] "RemoveContainer" containerID="99d8b2e61692591fd4ed50f2dcb3dbb418165e800dd7c363915a2af9c24e09e2" Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.944157 4606 scope.go:117] "RemoveContainer" containerID="df420ebd8370c1db8c306ae3c12c6fed703675621faecced948691a63bec32fb" Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.945932 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wsc4g"] Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.948779 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wsc4g"] Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.959943 4606 scope.go:117] "RemoveContainer" containerID="4238e2c3b81be263ff24c1d0c04bad780226290519b761f94b58f7e00fb23105" Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.972132 4606 scope.go:117] "RemoveContainer" containerID="99d8b2e61692591fd4ed50f2dcb3dbb418165e800dd7c363915a2af9c24e09e2" Jan 23 16:56:37 crc kubenswrapper[4606]: E0123 16:56:37.972530 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99d8b2e61692591fd4ed50f2dcb3dbb418165e800dd7c363915a2af9c24e09e2\": container with ID starting with 99d8b2e61692591fd4ed50f2dcb3dbb418165e800dd7c363915a2af9c24e09e2 not found: ID does not exist" containerID="99d8b2e61692591fd4ed50f2dcb3dbb418165e800dd7c363915a2af9c24e09e2" Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.972562 4606 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99d8b2e61692591fd4ed50f2dcb3dbb418165e800dd7c363915a2af9c24e09e2"} err="failed to get container status \"99d8b2e61692591fd4ed50f2dcb3dbb418165e800dd7c363915a2af9c24e09e2\": rpc error: code = NotFound desc = could not find container \"99d8b2e61692591fd4ed50f2dcb3dbb418165e800dd7c363915a2af9c24e09e2\": container with ID starting with 99d8b2e61692591fd4ed50f2dcb3dbb418165e800dd7c363915a2af9c24e09e2 not found: ID does not exist" Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.972598 4606 scope.go:117] "RemoveContainer" containerID="df420ebd8370c1db8c306ae3c12c6fed703675621faecced948691a63bec32fb" Jan 23 16:56:37 crc kubenswrapper[4606]: E0123 16:56:37.972976 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df420ebd8370c1db8c306ae3c12c6fed703675621faecced948691a63bec32fb\": container with ID starting with df420ebd8370c1db8c306ae3c12c6fed703675621faecced948691a63bec32fb not found: ID does not exist" containerID="df420ebd8370c1db8c306ae3c12c6fed703675621faecced948691a63bec32fb" Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.972995 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df420ebd8370c1db8c306ae3c12c6fed703675621faecced948691a63bec32fb"} err="failed to get container status \"df420ebd8370c1db8c306ae3c12c6fed703675621faecced948691a63bec32fb\": rpc error: code = NotFound desc = could not find container \"df420ebd8370c1db8c306ae3c12c6fed703675621faecced948691a63bec32fb\": container with ID starting with df420ebd8370c1db8c306ae3c12c6fed703675621faecced948691a63bec32fb not found: ID does not exist" Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.973009 4606 scope.go:117] "RemoveContainer" containerID="4238e2c3b81be263ff24c1d0c04bad780226290519b761f94b58f7e00fb23105" Jan 23 16:56:37 crc kubenswrapper[4606]: E0123 16:56:37.973189 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4238e2c3b81be263ff24c1d0c04bad780226290519b761f94b58f7e00fb23105\": container with ID starting with 4238e2c3b81be263ff24c1d0c04bad780226290519b761f94b58f7e00fb23105 not found: ID does not exist" containerID="4238e2c3b81be263ff24c1d0c04bad780226290519b761f94b58f7e00fb23105" Jan 23 16:56:37 crc kubenswrapper[4606]: I0123 16:56:37.973207 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4238e2c3b81be263ff24c1d0c04bad780226290519b761f94b58f7e00fb23105"} err="failed to get container status \"4238e2c3b81be263ff24c1d0c04bad780226290519b761f94b58f7e00fb23105\": rpc error: code = NotFound desc = could not find container \"4238e2c3b81be263ff24c1d0c04bad780226290519b761f94b58f7e00fb23105\": container with ID starting with 4238e2c3b81be263ff24c1d0c04bad780226290519b761f94b58f7e00fb23105 not found: ID does not exist" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.021315 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fkgdl"] Jan 23 16:56:38 crc kubenswrapper[4606]: E0123 16:56:38.021547 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac940a5b-420a-40f5-a0a2-fe76fb5f8400" containerName="registry-server" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.021562 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac940a5b-420a-40f5-a0a2-fe76fb5f8400" 
containerName="registry-server" Jan 23 16:56:38 crc kubenswrapper[4606]: E0123 16:56:38.021600 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fcc3539-bb51-4397-9b20-975529f7d4ed" containerName="extract-content" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.021607 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fcc3539-bb51-4397-9b20-975529f7d4ed" containerName="extract-content" Jan 23 16:56:38 crc kubenswrapper[4606]: E0123 16:56:38.021613 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b09d16f2-87a5-4163-96f6-2680cb605ce5" containerName="extract-content" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.021620 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="b09d16f2-87a5-4163-96f6-2680cb605ce5" containerName="extract-content" Jan 23 16:56:38 crc kubenswrapper[4606]: E0123 16:56:38.021629 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b09d16f2-87a5-4163-96f6-2680cb605ce5" containerName="registry-server" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.021634 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="b09d16f2-87a5-4163-96f6-2680cb605ce5" containerName="registry-server" Jan 23 16:56:38 crc kubenswrapper[4606]: E0123 16:56:38.021643 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="395d94ec-ce71-4b6a-b503-53501da7cf17" containerName="registry-server" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.021649 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="395d94ec-ce71-4b6a-b503-53501da7cf17" containerName="registry-server" Jan 23 16:56:38 crc kubenswrapper[4606]: E0123 16:56:38.021658 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fcc3539-bb51-4397-9b20-975529f7d4ed" containerName="extract-utilities" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.021664 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fcc3539-bb51-4397-9b20-975529f7d4ed" containerName="extract-utilities" Jan 23 16:56:38 crc kubenswrapper[4606]: E0123 16:56:38.021675 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9a9cfbb-677a-458b-bb08-8220e122ccac" containerName="extract-content" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.021683 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9a9cfbb-677a-458b-bb08-8220e122ccac" containerName="extract-content" Jan 23 16:56:38 crc kubenswrapper[4606]: E0123 16:56:38.021693 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b09d16f2-87a5-4163-96f6-2680cb605ce5" containerName="extract-utilities" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.021701 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="b09d16f2-87a5-4163-96f6-2680cb605ce5" containerName="extract-utilities" Jan 23 16:56:38 crc kubenswrapper[4606]: E0123 16:56:38.021709 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83216fef-4300-422a-93d0-e43bbfd0126c" containerName="extract-content" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.021716 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="83216fef-4300-422a-93d0-e43bbfd0126c" containerName="extract-content" Jan 23 16:56:38 crc kubenswrapper[4606]: E0123 16:56:38.021725 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac940a5b-420a-40f5-a0a2-fe76fb5f8400" containerName="extract-content" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.021732 4606 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ac940a5b-420a-40f5-a0a2-fe76fb5f8400" containerName="extract-content" Jan 23 16:56:38 crc kubenswrapper[4606]: E0123 16:56:38.021741 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83216fef-4300-422a-93d0-e43bbfd0126c" containerName="registry-server" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.021750 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="83216fef-4300-422a-93d0-e43bbfd0126c" containerName="registry-server" Jan 23 16:56:38 crc kubenswrapper[4606]: E0123 16:56:38.021757 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9a9cfbb-677a-458b-bb08-8220e122ccac" containerName="registry-server" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.021763 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9a9cfbb-677a-458b-bb08-8220e122ccac" containerName="registry-server" Jan 23 16:56:38 crc kubenswrapper[4606]: E0123 16:56:38.021769 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="395d94ec-ce71-4b6a-b503-53501da7cf17" containerName="extract-utilities" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.021775 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="395d94ec-ce71-4b6a-b503-53501da7cf17" containerName="extract-utilities" Jan 23 16:56:38 crc kubenswrapper[4606]: E0123 16:56:38.021785 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac" containerName="marketplace-operator" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.021792 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac" containerName="marketplace-operator" Jan 23 16:56:38 crc kubenswrapper[4606]: E0123 16:56:38.021804 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="395d94ec-ce71-4b6a-b503-53501da7cf17" containerName="extract-content" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.021810 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="395d94ec-ce71-4b6a-b503-53501da7cf17" containerName="extract-content" Jan 23 16:56:38 crc kubenswrapper[4606]: E0123 16:56:38.021818 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83216fef-4300-422a-93d0-e43bbfd0126c" containerName="extract-utilities" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.021827 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="83216fef-4300-422a-93d0-e43bbfd0126c" containerName="extract-utilities" Jan 23 16:56:38 crc kubenswrapper[4606]: E0123 16:56:38.021836 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac940a5b-420a-40f5-a0a2-fe76fb5f8400" containerName="extract-utilities" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.021844 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac940a5b-420a-40f5-a0a2-fe76fb5f8400" containerName="extract-utilities" Jan 23 16:56:38 crc kubenswrapper[4606]: E0123 16:56:38.021854 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fcc3539-bb51-4397-9b20-975529f7d4ed" containerName="registry-server" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.021860 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fcc3539-bb51-4397-9b20-975529f7d4ed" containerName="registry-server" Jan 23 16:56:38 crc kubenswrapper[4606]: E0123 16:56:38.021870 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9a9cfbb-677a-458b-bb08-8220e122ccac" containerName="extract-utilities" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.021877 4606 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f9a9cfbb-677a-458b-bb08-8220e122ccac" containerName="extract-utilities" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.021982 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="b09d16f2-87a5-4163-96f6-2680cb605ce5" containerName="registry-server" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.021993 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9a9cfbb-677a-458b-bb08-8220e122ccac" containerName="registry-server" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.022003 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="395d94ec-ce71-4b6a-b503-53501da7cf17" containerName="registry-server" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.022012 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="83216fef-4300-422a-93d0-e43bbfd0126c" containerName="registry-server" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.022029 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac940a5b-420a-40f5-a0a2-fe76fb5f8400" containerName="registry-server" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.022040 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fcc3539-bb51-4397-9b20-975529f7d4ed" containerName="registry-server" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.022049 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac" containerName="marketplace-operator" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.022862 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fkgdl" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.032025 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.035718 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fkgdl"] Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.120368 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06d9e047-39b3-4c22-ac66-dfe3e5d215b3-utilities\") pod \"certified-operators-fkgdl\" (UID: \"06d9e047-39b3-4c22-ac66-dfe3e5d215b3\") " pod="openshift-marketplace/certified-operators-fkgdl" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.120408 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06d9e047-39b3-4c22-ac66-dfe3e5d215b3-catalog-content\") pod \"certified-operators-fkgdl\" (UID: \"06d9e047-39b3-4c22-ac66-dfe3e5d215b3\") " pod="openshift-marketplace/certified-operators-fkgdl" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.120481 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz4jd\" (UniqueName: \"kubernetes.io/projected/06d9e047-39b3-4c22-ac66-dfe3e5d215b3-kube-api-access-vz4jd\") pod \"certified-operators-fkgdl\" (UID: \"06d9e047-39b3-4c22-ac66-dfe3e5d215b3\") " pod="openshift-marketplace/certified-operators-fkgdl" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.221837 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz4jd\" (UniqueName: 
\"kubernetes.io/projected/06d9e047-39b3-4c22-ac66-dfe3e5d215b3-kube-api-access-vz4jd\") pod \"certified-operators-fkgdl\" (UID: \"06d9e047-39b3-4c22-ac66-dfe3e5d215b3\") " pod="openshift-marketplace/certified-operators-fkgdl" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.221921 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06d9e047-39b3-4c22-ac66-dfe3e5d215b3-utilities\") pod \"certified-operators-fkgdl\" (UID: \"06d9e047-39b3-4c22-ac66-dfe3e5d215b3\") " pod="openshift-marketplace/certified-operators-fkgdl" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.221965 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06d9e047-39b3-4c22-ac66-dfe3e5d215b3-catalog-content\") pod \"certified-operators-fkgdl\" (UID: \"06d9e047-39b3-4c22-ac66-dfe3e5d215b3\") " pod="openshift-marketplace/certified-operators-fkgdl" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.222922 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06d9e047-39b3-4c22-ac66-dfe3e5d215b3-catalog-content\") pod \"certified-operators-fkgdl\" (UID: \"06d9e047-39b3-4c22-ac66-dfe3e5d215b3\") " pod="openshift-marketplace/certified-operators-fkgdl" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.223478 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06d9e047-39b3-4c22-ac66-dfe3e5d215b3-utilities\") pod \"certified-operators-fkgdl\" (UID: \"06d9e047-39b3-4c22-ac66-dfe3e5d215b3\") " pod="openshift-marketplace/certified-operators-fkgdl" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.240363 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz4jd\" (UniqueName: \"kubernetes.io/projected/06d9e047-39b3-4c22-ac66-dfe3e5d215b3-kube-api-access-vz4jd\") pod \"certified-operators-fkgdl\" (UID: \"06d9e047-39b3-4c22-ac66-dfe3e5d215b3\") " pod="openshift-marketplace/certified-operators-fkgdl" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.339911 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fkgdl" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.371514 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fcc3539-bb51-4397-9b20-975529f7d4ed" path="/var/lib/kubelet/pods/2fcc3539-bb51-4397-9b20-975529f7d4ed/volumes" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.372258 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="395d94ec-ce71-4b6a-b503-53501da7cf17" path="/var/lib/kubelet/pods/395d94ec-ce71-4b6a-b503-53501da7cf17/volumes" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.372970 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83216fef-4300-422a-93d0-e43bbfd0126c" path="/var/lib/kubelet/pods/83216fef-4300-422a-93d0-e43bbfd0126c/volumes" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.374325 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac" path="/var/lib/kubelet/pods/8f5f53ec-46cf-4c08-a47c-44ebc6afe8ac/volumes" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.374939 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac940a5b-420a-40f5-a0a2-fe76fb5f8400" path="/var/lib/kubelet/pods/ac940a5b-420a-40f5-a0a2-fe76fb5f8400/volumes" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.376224 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b09d16f2-87a5-4163-96f6-2680cb605ce5" path="/var/lib/kubelet/pods/b09d16f2-87a5-4163-96f6-2680cb605ce5/volumes" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.376908 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9a9cfbb-677a-458b-bb08-8220e122ccac" path="/var/lib/kubelet/pods/f9a9cfbb-677a-458b-bb08-8220e122ccac/volumes" Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.736401 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fkgdl"] Jan 23 16:56:38 crc kubenswrapper[4606]: I0123 16:56:38.910981 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fkgdl" event={"ID":"06d9e047-39b3-4c22-ac66-dfe3e5d215b3","Type":"ContainerStarted","Data":"9a63daf9f9d7fdafd1bd48c7704a578ba9548d77414d34d451a51e585eeeba23"} Jan 23 16:56:39 crc kubenswrapper[4606]: I0123 16:56:39.025308 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nmtg9"] Jan 23 16:56:39 crc kubenswrapper[4606]: I0123 16:56:39.026285 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nmtg9" Jan 23 16:56:39 crc kubenswrapper[4606]: I0123 16:56:39.029176 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 23 16:56:39 crc kubenswrapper[4606]: I0123 16:56:39.040665 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nmtg9"] Jan 23 16:56:39 crc kubenswrapper[4606]: I0123 16:56:39.138767 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfwnl\" (UniqueName: \"kubernetes.io/projected/bcf0df0a-2bd4-4927-b81a-056a26b821b2-kube-api-access-vfwnl\") pod \"redhat-marketplace-nmtg9\" (UID: \"bcf0df0a-2bd4-4927-b81a-056a26b821b2\") " pod="openshift-marketplace/redhat-marketplace-nmtg9" Jan 23 16:56:39 crc kubenswrapper[4606]: I0123 16:56:39.139150 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcf0df0a-2bd4-4927-b81a-056a26b821b2-utilities\") pod \"redhat-marketplace-nmtg9\" (UID: \"bcf0df0a-2bd4-4927-b81a-056a26b821b2\") " pod="openshift-marketplace/redhat-marketplace-nmtg9" Jan 23 16:56:39 crc kubenswrapper[4606]: I0123 16:56:39.139287 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcf0df0a-2bd4-4927-b81a-056a26b821b2-catalog-content\") pod \"redhat-marketplace-nmtg9\" (UID: \"bcf0df0a-2bd4-4927-b81a-056a26b821b2\") " pod="openshift-marketplace/redhat-marketplace-nmtg9" Jan 23 16:56:39 crc kubenswrapper[4606]: I0123 16:56:39.240236 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcf0df0a-2bd4-4927-b81a-056a26b821b2-utilities\") pod \"redhat-marketplace-nmtg9\" (UID: \"bcf0df0a-2bd4-4927-b81a-056a26b821b2\") " pod="openshift-marketplace/redhat-marketplace-nmtg9" Jan 23 16:56:39 crc kubenswrapper[4606]: I0123 16:56:39.240511 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcf0df0a-2bd4-4927-b81a-056a26b821b2-catalog-content\") pod \"redhat-marketplace-nmtg9\" (UID: \"bcf0df0a-2bd4-4927-b81a-056a26b821b2\") " pod="openshift-marketplace/redhat-marketplace-nmtg9" Jan 23 16:56:39 crc kubenswrapper[4606]: I0123 16:56:39.240640 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfwnl\" (UniqueName: \"kubernetes.io/projected/bcf0df0a-2bd4-4927-b81a-056a26b821b2-kube-api-access-vfwnl\") pod \"redhat-marketplace-nmtg9\" (UID: \"bcf0df0a-2bd4-4927-b81a-056a26b821b2\") " pod="openshift-marketplace/redhat-marketplace-nmtg9" Jan 23 16:56:39 crc kubenswrapper[4606]: I0123 16:56:39.241005 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcf0df0a-2bd4-4927-b81a-056a26b821b2-catalog-content\") pod \"redhat-marketplace-nmtg9\" (UID: \"bcf0df0a-2bd4-4927-b81a-056a26b821b2\") " pod="openshift-marketplace/redhat-marketplace-nmtg9" Jan 23 16:56:39 crc kubenswrapper[4606]: I0123 16:56:39.241319 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcf0df0a-2bd4-4927-b81a-056a26b821b2-utilities\") pod \"redhat-marketplace-nmtg9\" (UID: 
\"bcf0df0a-2bd4-4927-b81a-056a26b821b2\") " pod="openshift-marketplace/redhat-marketplace-nmtg9" Jan 23 16:56:39 crc kubenswrapper[4606]: I0123 16:56:39.265940 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfwnl\" (UniqueName: \"kubernetes.io/projected/bcf0df0a-2bd4-4927-b81a-056a26b821b2-kube-api-access-vfwnl\") pod \"redhat-marketplace-nmtg9\" (UID: \"bcf0df0a-2bd4-4927-b81a-056a26b821b2\") " pod="openshift-marketplace/redhat-marketplace-nmtg9" Jan 23 16:56:39 crc kubenswrapper[4606]: I0123 16:56:39.358010 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nmtg9" Jan 23 16:56:39 crc kubenswrapper[4606]: I0123 16:56:39.763128 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nmtg9"] Jan 23 16:56:39 crc kubenswrapper[4606]: I0123 16:56:39.922486 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmtg9" event={"ID":"bcf0df0a-2bd4-4927-b81a-056a26b821b2","Type":"ContainerStarted","Data":"0de3bdc1536ef1a1b62d4406f29b85147115599e1ce3ac026e4dd643ba33b54d"} Jan 23 16:56:40 crc kubenswrapper[4606]: I0123 16:56:40.422212 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-65d6z"] Jan 23 16:56:40 crc kubenswrapper[4606]: I0123 16:56:40.430690 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-65d6z" Jan 23 16:56:40 crc kubenswrapper[4606]: I0123 16:56:40.432777 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 23 16:56:40 crc kubenswrapper[4606]: I0123 16:56:40.433417 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-65d6z"] Jan 23 16:56:40 crc kubenswrapper[4606]: I0123 16:56:40.455826 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45ldz\" (UniqueName: \"kubernetes.io/projected/98f34be0-cadc-459c-ae73-eec312f1b1bd-kube-api-access-45ldz\") pod \"community-operators-65d6z\" (UID: \"98f34be0-cadc-459c-ae73-eec312f1b1bd\") " pod="openshift-marketplace/community-operators-65d6z" Jan 23 16:56:40 crc kubenswrapper[4606]: I0123 16:56:40.455863 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98f34be0-cadc-459c-ae73-eec312f1b1bd-catalog-content\") pod \"community-operators-65d6z\" (UID: \"98f34be0-cadc-459c-ae73-eec312f1b1bd\") " pod="openshift-marketplace/community-operators-65d6z" Jan 23 16:56:40 crc kubenswrapper[4606]: I0123 16:56:40.455924 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98f34be0-cadc-459c-ae73-eec312f1b1bd-utilities\") pod \"community-operators-65d6z\" (UID: \"98f34be0-cadc-459c-ae73-eec312f1b1bd\") " pod="openshift-marketplace/community-operators-65d6z" Jan 23 16:56:40 crc kubenswrapper[4606]: I0123 16:56:40.556878 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45ldz\" (UniqueName: \"kubernetes.io/projected/98f34be0-cadc-459c-ae73-eec312f1b1bd-kube-api-access-45ldz\") pod \"community-operators-65d6z\" (UID: \"98f34be0-cadc-459c-ae73-eec312f1b1bd\") " 
pod="openshift-marketplace/community-operators-65d6z" Jan 23 16:56:40 crc kubenswrapper[4606]: I0123 16:56:40.556927 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98f34be0-cadc-459c-ae73-eec312f1b1bd-catalog-content\") pod \"community-operators-65d6z\" (UID: \"98f34be0-cadc-459c-ae73-eec312f1b1bd\") " pod="openshift-marketplace/community-operators-65d6z" Jan 23 16:56:40 crc kubenswrapper[4606]: I0123 16:56:40.556981 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98f34be0-cadc-459c-ae73-eec312f1b1bd-utilities\") pod \"community-operators-65d6z\" (UID: \"98f34be0-cadc-459c-ae73-eec312f1b1bd\") " pod="openshift-marketplace/community-operators-65d6z" Jan 23 16:56:40 crc kubenswrapper[4606]: I0123 16:56:40.557384 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98f34be0-cadc-459c-ae73-eec312f1b1bd-utilities\") pod \"community-operators-65d6z\" (UID: \"98f34be0-cadc-459c-ae73-eec312f1b1bd\") " pod="openshift-marketplace/community-operators-65d6z" Jan 23 16:56:40 crc kubenswrapper[4606]: I0123 16:56:40.557890 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98f34be0-cadc-459c-ae73-eec312f1b1bd-catalog-content\") pod \"community-operators-65d6z\" (UID: \"98f34be0-cadc-459c-ae73-eec312f1b1bd\") " pod="openshift-marketplace/community-operators-65d6z" Jan 23 16:56:40 crc kubenswrapper[4606]: I0123 16:56:40.574568 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45ldz\" (UniqueName: \"kubernetes.io/projected/98f34be0-cadc-459c-ae73-eec312f1b1bd-kube-api-access-45ldz\") pod \"community-operators-65d6z\" (UID: \"98f34be0-cadc-459c-ae73-eec312f1b1bd\") " pod="openshift-marketplace/community-operators-65d6z" Jan 23 16:56:40 crc kubenswrapper[4606]: I0123 16:56:40.635750 4606 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-whw2p container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 16:56:40 crc kubenswrapper[4606]: I0123 16:56:40.635810 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-whw2p" podUID="9e9e2c8c-5d03-48b0-ab9c-110eeef212b5" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 16:56:40 crc kubenswrapper[4606]: I0123 16:56:40.755998 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-65d6z" Jan 23 16:56:41 crc kubenswrapper[4606]: I0123 16:56:41.175892 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-65d6z"] Jan 23 16:56:41 crc kubenswrapper[4606]: I0123 16:56:41.419990 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w6dmk"] Jan 23 16:56:41 crc kubenswrapper[4606]: I0123 16:56:41.421427 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w6dmk" Jan 23 16:56:41 crc kubenswrapper[4606]: I0123 16:56:41.424504 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 23 16:56:41 crc kubenswrapper[4606]: I0123 16:56:41.435210 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w6dmk"] Jan 23 16:56:41 crc kubenswrapper[4606]: I0123 16:56:41.466891 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b0ff184-9aa5-4873-8c4c-eec703f8f541-catalog-content\") pod \"redhat-operators-w6dmk\" (UID: \"0b0ff184-9aa5-4873-8c4c-eec703f8f541\") " pod="openshift-marketplace/redhat-operators-w6dmk" Jan 23 16:56:41 crc kubenswrapper[4606]: I0123 16:56:41.466941 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b0ff184-9aa5-4873-8c4c-eec703f8f541-utilities\") pod \"redhat-operators-w6dmk\" (UID: \"0b0ff184-9aa5-4873-8c4c-eec703f8f541\") " pod="openshift-marketplace/redhat-operators-w6dmk" Jan 23 16:56:41 crc kubenswrapper[4606]: I0123 16:56:41.466967 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvsqf\" (UniqueName: \"kubernetes.io/projected/0b0ff184-9aa5-4873-8c4c-eec703f8f541-kube-api-access-jvsqf\") pod \"redhat-operators-w6dmk\" (UID: \"0b0ff184-9aa5-4873-8c4c-eec703f8f541\") " pod="openshift-marketplace/redhat-operators-w6dmk" Jan 23 16:56:41 crc kubenswrapper[4606]: I0123 16:56:41.568877 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b0ff184-9aa5-4873-8c4c-eec703f8f541-catalog-content\") pod \"redhat-operators-w6dmk\" (UID: \"0b0ff184-9aa5-4873-8c4c-eec703f8f541\") " pod="openshift-marketplace/redhat-operators-w6dmk" Jan 23 16:56:41 crc kubenswrapper[4606]: I0123 16:56:41.568936 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b0ff184-9aa5-4873-8c4c-eec703f8f541-utilities\") pod \"redhat-operators-w6dmk\" (UID: \"0b0ff184-9aa5-4873-8c4c-eec703f8f541\") " pod="openshift-marketplace/redhat-operators-w6dmk" Jan 23 16:56:41 crc kubenswrapper[4606]: I0123 16:56:41.568959 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvsqf\" (UniqueName: \"kubernetes.io/projected/0b0ff184-9aa5-4873-8c4c-eec703f8f541-kube-api-access-jvsqf\") pod \"redhat-operators-w6dmk\" (UID: \"0b0ff184-9aa5-4873-8c4c-eec703f8f541\") " pod="openshift-marketplace/redhat-operators-w6dmk" Jan 23 16:56:41 crc kubenswrapper[4606]: I0123 16:56:41.569565 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b0ff184-9aa5-4873-8c4c-eec703f8f541-catalog-content\") pod \"redhat-operators-w6dmk\" (UID: \"0b0ff184-9aa5-4873-8c4c-eec703f8f541\") " pod="openshift-marketplace/redhat-operators-w6dmk" Jan 23 16:56:41 crc kubenswrapper[4606]: I0123 16:56:41.569615 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b0ff184-9aa5-4873-8c4c-eec703f8f541-utilities\") pod \"redhat-operators-w6dmk\" (UID: \"0b0ff184-9aa5-4873-8c4c-eec703f8f541\") " 
pod="openshift-marketplace/redhat-operators-w6dmk" Jan 23 16:56:41 crc kubenswrapper[4606]: I0123 16:56:41.595537 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvsqf\" (UniqueName: \"kubernetes.io/projected/0b0ff184-9aa5-4873-8c4c-eec703f8f541-kube-api-access-jvsqf\") pod \"redhat-operators-w6dmk\" (UID: \"0b0ff184-9aa5-4873-8c4c-eec703f8f541\") " pod="openshift-marketplace/redhat-operators-w6dmk" Jan 23 16:56:41 crc kubenswrapper[4606]: I0123 16:56:41.737102 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w6dmk" Jan 23 16:56:41 crc kubenswrapper[4606]: I0123 16:56:41.934817 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-65d6z" event={"ID":"98f34be0-cadc-459c-ae73-eec312f1b1bd","Type":"ContainerStarted","Data":"d96e5f4601d8588f6adeaffd9d3b71f84ce87664fb0fc00befa29174a28b0664"} Jan 23 16:56:42 crc kubenswrapper[4606]: I0123 16:56:42.182602 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w6dmk"] Jan 23 16:56:42 crc kubenswrapper[4606]: I0123 16:56:42.941799 4606 generic.go:334] "Generic (PLEG): container finished" podID="98f34be0-cadc-459c-ae73-eec312f1b1bd" containerID="a63c383536b68e50475aebfbb402a6a22d70fff3c3c8f30d99459686dc03f7cd" exitCode=0 Jan 23 16:56:42 crc kubenswrapper[4606]: I0123 16:56:42.941904 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-65d6z" event={"ID":"98f34be0-cadc-459c-ae73-eec312f1b1bd","Type":"ContainerDied","Data":"a63c383536b68e50475aebfbb402a6a22d70fff3c3c8f30d99459686dc03f7cd"} Jan 23 16:56:42 crc kubenswrapper[4606]: I0123 16:56:42.946916 4606 generic.go:334] "Generic (PLEG): container finished" podID="06d9e047-39b3-4c22-ac66-dfe3e5d215b3" containerID="ba7e66cb7730d0f4d5ed45427fc6e7f2268e69f0a13ffc3b37a07892d29784db" exitCode=0 Jan 23 16:56:42 crc kubenswrapper[4606]: I0123 16:56:42.947015 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fkgdl" event={"ID":"06d9e047-39b3-4c22-ac66-dfe3e5d215b3","Type":"ContainerDied","Data":"ba7e66cb7730d0f4d5ed45427fc6e7f2268e69f0a13ffc3b37a07892d29784db"} Jan 23 16:56:42 crc kubenswrapper[4606]: I0123 16:56:42.953349 4606 generic.go:334] "Generic (PLEG): container finished" podID="bcf0df0a-2bd4-4927-b81a-056a26b821b2" containerID="b6f1712732ed8065d58a606ce6a9bb2641c9c3b118fd082ba57019d1b6759040" exitCode=0 Jan 23 16:56:42 crc kubenswrapper[4606]: I0123 16:56:42.953441 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmtg9" event={"ID":"bcf0df0a-2bd4-4927-b81a-056a26b821b2","Type":"ContainerDied","Data":"b6f1712732ed8065d58a606ce6a9bb2641c9c3b118fd082ba57019d1b6759040"} Jan 23 16:56:42 crc kubenswrapper[4606]: I0123 16:56:42.957986 4606 generic.go:334] "Generic (PLEG): container finished" podID="0b0ff184-9aa5-4873-8c4c-eec703f8f541" containerID="7be871a2e7e3071558c1e659497c13eb04a2ea86428c9ba6391a7200cf98bf77" exitCode=0 Jan 23 16:56:42 crc kubenswrapper[4606]: I0123 16:56:42.958095 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w6dmk" event={"ID":"0b0ff184-9aa5-4873-8c4c-eec703f8f541","Type":"ContainerDied","Data":"7be871a2e7e3071558c1e659497c13eb04a2ea86428c9ba6391a7200cf98bf77"} Jan 23 16:56:42 crc kubenswrapper[4606]: I0123 16:56:42.958147 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-w6dmk" event={"ID":"0b0ff184-9aa5-4873-8c4c-eec703f8f541","Type":"ContainerStarted","Data":"369fa9a9555b38133801b5837ffad0fb65d02c051401362f54216c4efc52bbf1"} Jan 23 16:56:44 crc kubenswrapper[4606]: I0123 16:56:44.985063 4606 generic.go:334] "Generic (PLEG): container finished" podID="06d9e047-39b3-4c22-ac66-dfe3e5d215b3" containerID="3e59b61f1c3515a3c0d98e1ed3594b41bb5fe82c05796a596d66b78fa81ef4ed" exitCode=0 Jan 23 16:56:44 crc kubenswrapper[4606]: I0123 16:56:44.985377 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fkgdl" event={"ID":"06d9e047-39b3-4c22-ac66-dfe3e5d215b3","Type":"ContainerDied","Data":"3e59b61f1c3515a3c0d98e1ed3594b41bb5fe82c05796a596d66b78fa81ef4ed"} Jan 23 16:56:44 crc kubenswrapper[4606]: I0123 16:56:44.989364 4606 generic.go:334] "Generic (PLEG): container finished" podID="bcf0df0a-2bd4-4927-b81a-056a26b821b2" containerID="6e2ae8347cb215bc61c05c02de96c44df7b48daaf7a1884470c6822527a4d2cf" exitCode=0 Jan 23 16:56:44 crc kubenswrapper[4606]: I0123 16:56:44.989412 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmtg9" event={"ID":"bcf0df0a-2bd4-4927-b81a-056a26b821b2","Type":"ContainerDied","Data":"6e2ae8347cb215bc61c05c02de96c44df7b48daaf7a1884470c6822527a4d2cf"} Jan 23 16:56:44 crc kubenswrapper[4606]: I0123 16:56:44.994741 4606 generic.go:334] "Generic (PLEG): container finished" podID="0b0ff184-9aa5-4873-8c4c-eec703f8f541" containerID="054291bc2cea16c479726eedf2f5914099a9930309b3a799b065beae32fe210b" exitCode=0 Jan 23 16:56:44 crc kubenswrapper[4606]: I0123 16:56:44.994848 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w6dmk" event={"ID":"0b0ff184-9aa5-4873-8c4c-eec703f8f541","Type":"ContainerDied","Data":"054291bc2cea16c479726eedf2f5914099a9930309b3a799b065beae32fe210b"} Jan 23 16:56:44 crc kubenswrapper[4606]: I0123 16:56:44.996395 4606 generic.go:334] "Generic (PLEG): container finished" podID="98f34be0-cadc-459c-ae73-eec312f1b1bd" containerID="f8ff91863f920dc0f982d164ee62e08b2ed60f958fc2e56b7a11b3a24c7e4577" exitCode=0 Jan 23 16:56:44 crc kubenswrapper[4606]: I0123 16:56:44.996423 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-65d6z" event={"ID":"98f34be0-cadc-459c-ae73-eec312f1b1bd","Type":"ContainerDied","Data":"f8ff91863f920dc0f982d164ee62e08b2ed60f958fc2e56b7a11b3a24c7e4577"} Jan 23 16:56:46 crc kubenswrapper[4606]: I0123 16:56:46.004460 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w6dmk" event={"ID":"0b0ff184-9aa5-4873-8c4c-eec703f8f541","Type":"ContainerStarted","Data":"edf1a53d0d1d7bec3dc3482454ebc6c0043f868d45f068927063edcc7d848ce4"} Jan 23 16:56:46 crc kubenswrapper[4606]: I0123 16:56:46.011992 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-65d6z" event={"ID":"98f34be0-cadc-459c-ae73-eec312f1b1bd","Type":"ContainerStarted","Data":"4a0e684c66abae7c112b3c05fb660155032d2fbf39f2fb0f3f91791f108216f9"} Jan 23 16:56:46 crc kubenswrapper[4606]: I0123 16:56:46.017068 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fkgdl" event={"ID":"06d9e047-39b3-4c22-ac66-dfe3e5d215b3","Type":"ContainerStarted","Data":"c0f966bc68d0cf50b37743dbe643ba4bef7316935e387f4c572556b735401c55"} Jan 23 16:56:46 crc kubenswrapper[4606]: I0123 16:56:46.020416 4606 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmtg9" event={"ID":"bcf0df0a-2bd4-4927-b81a-056a26b821b2","Type":"ContainerStarted","Data":"5d634811940e2e601ce546189a7ebc1fb43b3f0ee7c3bf816e4dec0aae89af80"} Jan 23 16:56:46 crc kubenswrapper[4606]: I0123 16:56:46.028441 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-w6dmk" podStartSLOduration=2.446208868 podStartE2EDuration="5.028422419s" podCreationTimestamp="2026-01-23 16:56:41 +0000 UTC" firstStartedPulling="2026-01-23 16:56:42.968056027 +0000 UTC m=+236.968248433" lastFinishedPulling="2026-01-23 16:56:45.550269578 +0000 UTC m=+239.550461984" observedRunningTime="2026-01-23 16:56:46.02563658 +0000 UTC m=+240.025829016" watchObservedRunningTime="2026-01-23 16:56:46.028422419 +0000 UTC m=+240.028614825" Jan 23 16:56:46 crc kubenswrapper[4606]: I0123 16:56:46.069780 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fkgdl" podStartSLOduration=5.6171743979999995 podStartE2EDuration="8.069757139s" podCreationTimestamp="2026-01-23 16:56:38 +0000 UTC" firstStartedPulling="2026-01-23 16:56:42.950570571 +0000 UTC m=+236.950762977" lastFinishedPulling="2026-01-23 16:56:45.403153302 +0000 UTC m=+239.403345718" observedRunningTime="2026-01-23 16:56:46.048267931 +0000 UTC m=+240.048460337" watchObservedRunningTime="2026-01-23 16:56:46.069757139 +0000 UTC m=+240.069949555" Jan 23 16:56:46 crc kubenswrapper[4606]: I0123 16:56:46.091865 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-65d6z" podStartSLOduration=3.574032397 podStartE2EDuration="6.091836574s" podCreationTimestamp="2026-01-23 16:56:40 +0000 UTC" firstStartedPulling="2026-01-23 16:56:42.943207363 +0000 UTC m=+236.943399769" lastFinishedPulling="2026-01-23 16:56:45.46101154 +0000 UTC m=+239.461203946" observedRunningTime="2026-01-23 16:56:46.070410538 +0000 UTC m=+240.070602944" watchObservedRunningTime="2026-01-23 16:56:46.091836574 +0000 UTC m=+240.092028980" Jan 23 16:56:46 crc kubenswrapper[4606]: I0123 16:56:46.093697 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nmtg9" podStartSLOduration=4.556276243 podStartE2EDuration="7.093685556s" podCreationTimestamp="2026-01-23 16:56:39 +0000 UTC" firstStartedPulling="2026-01-23 16:56:42.95441815 +0000 UTC m=+236.954610556" lastFinishedPulling="2026-01-23 16:56:45.491827463 +0000 UTC m=+239.492019869" observedRunningTime="2026-01-23 16:56:46.092553914 +0000 UTC m=+240.092746330" watchObservedRunningTime="2026-01-23 16:56:46.093685556 +0000 UTC m=+240.093877962" Jan 23 16:56:48 crc kubenswrapper[4606]: I0123 16:56:48.340203 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fkgdl" Jan 23 16:56:48 crc kubenswrapper[4606]: I0123 16:56:48.340531 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fkgdl" Jan 23 16:56:48 crc kubenswrapper[4606]: I0123 16:56:48.414215 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fkgdl" Jan 23 16:56:48 crc kubenswrapper[4606]: I0123 16:56:48.945133 4606 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 23 16:56:48 crc 
kubenswrapper[4606]: I0123 16:56:48.945899 4606 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 23 16:56:48 crc kubenswrapper[4606]: I0123 16:56:48.946023 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 23 16:56:48 crc kubenswrapper[4606]: I0123 16:56:48.946206 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32" gracePeriod=15
Jan 23 16:56:48 crc kubenswrapper[4606]: I0123 16:56:48.946253 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135" gracePeriod=15
Jan 23 16:56:48 crc kubenswrapper[4606]: I0123 16:56:48.946257 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc" gracePeriod=15
Jan 23 16:56:48 crc kubenswrapper[4606]: I0123 16:56:48.946221 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39" gracePeriod=15
Jan 23 16:56:48 crc kubenswrapper[4606]: I0123 16:56:48.946307 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48" gracePeriod=15
Jan 23 16:56:48 crc kubenswrapper[4606]: I0123 16:56:48.947175 4606 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 23 16:56:48 crc kubenswrapper[4606]: E0123 16:56:48.947395 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Jan 23 16:56:48 crc kubenswrapper[4606]: I0123 16:56:48.948176 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Jan 23 16:56:48 crc kubenswrapper[4606]: E0123 16:56:48.948200 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 23 16:56:48 crc kubenswrapper[4606]: I0123 16:56:48.948208 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 23 16:56:48 crc kubenswrapper[4606]: E0123 16:56:48.948219 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 23 16:56:48 crc kubenswrapper[4606]: I0123 16:56:48.948226 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 23 16:56:48 crc kubenswrapper[4606]: E0123 16:56:48.948235 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 23 16:56:48 crc kubenswrapper[4606]: I0123 16:56:48.948245 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 23 16:56:48 crc kubenswrapper[4606]: E0123 16:56:48.948254 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 23 16:56:48 crc kubenswrapper[4606]: I0123 16:56:48.948261 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 23 16:56:48 crc kubenswrapper[4606]: E0123 16:56:48.948270 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 23 16:56:48 crc kubenswrapper[4606]: I0123 16:56:48.948276 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 23 16:56:48 crc kubenswrapper[4606]: E0123 16:56:48.948290 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 23 16:56:48 crc kubenswrapper[4606]: I0123 16:56:48.948296 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 23 16:56:48 crc kubenswrapper[4606]: I0123 16:56:48.948414 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 23 16:56:48 crc kubenswrapper[4606]: I0123 16:56:48.948427 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 23 16:56:48 crc kubenswrapper[4606]: I0123 16:56:48.948438 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 23 16:56:48 crc kubenswrapper[4606]: I0123 16:56:48.948446 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 23 16:56:48 crc kubenswrapper[4606]: I0123 16:56:48.948456 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 23 16:56:48 crc kubenswrapper[4606]: I0123 16:56:48.948466 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 23 16:56:48 crc kubenswrapper[4606]: I0123 16:56:48.994757 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 23 16:56:49 crc kubenswrapper[4606]: I0123 16:56:49.065244 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 23 16:56:49 crc kubenswrapper[4606]: I0123 16:56:49.065514 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 16:56:49 crc kubenswrapper[4606]: I0123 16:56:49.065637 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 16:56:49 crc kubenswrapper[4606]: I0123 16:56:49.065736 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 23 16:56:49 crc kubenswrapper[4606]: I0123 16:56:49.065848 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 16:56:49 crc kubenswrapper[4606]: I0123 16:56:49.065974 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 23 16:56:49 crc kubenswrapper[4606]: I0123 16:56:49.066102 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 23 16:56:49 crc kubenswrapper[4606]: I0123 16:56:49.066207 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 23 16:56:49 crc kubenswrapper[4606]: I0123 16:56:49.167842 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 23 16:56:49 crc kubenswrapper[4606]: I0123 16:56:49.167965 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 23 16:56:49 crc kubenswrapper[4606]: I0123 16:56:49.168231 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 23 16:56:49 crc kubenswrapper[4606]: I0123 16:56:49.168253 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 16:56:49 crc kubenswrapper[4606]: I0123 16:56:49.168320 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 23 16:56:49 crc kubenswrapper[4606]: I0123 16:56:49.168322 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 16:56:49 crc kubenswrapper[4606]: I0123 16:56:49.168351 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 16:56:49 crc kubenswrapper[4606]: I0123 16:56:49.168383 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 23 16:56:49 crc kubenswrapper[4606]: I0123 16:56:49.168403 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 23 16:56:49 crc kubenswrapper[4606]: I0123 16:56:49.168410 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 16:56:49 crc kubenswrapper[4606]: I0123 16:56:49.168432 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 16:56:49 crc kubenswrapper[4606]: I0123 16:56:49.168450 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:56:49 crc kubenswrapper[4606]: I0123 16:56:49.168490 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 16:56:49 crc kubenswrapper[4606]: I0123 16:56:49.168510 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 16:56:49 crc kubenswrapper[4606]: I0123 16:56:49.168537 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 16:56:49 crc kubenswrapper[4606]: I0123 16:56:49.168609 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 16:56:49 crc kubenswrapper[4606]: I0123 16:56:49.313795 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 16:56:49 crc kubenswrapper[4606]: E0123 16:56:49.338668 4606 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.22:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d6a9342aec04c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 16:56:49.337958476 +0000 UTC m=+243.338150882,LastTimestamp:2026-01-23 16:56:49.337958476 +0000 UTC m=+243.338150882,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 16:56:49 crc kubenswrapper[4606]: I0123 16:56:49.359017 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nmtg9" Jan 23 16:56:49 crc kubenswrapper[4606]: I0123 16:56:49.359082 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nmtg9" Jan 23 16:56:49 crc kubenswrapper[4606]: I0123 16:56:49.402233 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nmtg9" Jan 23 16:56:49 crc kubenswrapper[4606]: I0123 16:56:49.402831 4606 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:49 crc kubenswrapper[4606]: I0123 16:56:49.403191 4606 status_manager.go:851] "Failed to get status for pod" podUID="bcf0df0a-2bd4-4927-b81a-056a26b821b2" pod="openshift-marketplace/redhat-marketplace-nmtg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nmtg9\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:49 crc kubenswrapper[4606]: I0123 16:56:49.403512 4606 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:50 crc kubenswrapper[4606]: I0123 16:56:50.046308 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 23 16:56:50 crc kubenswrapper[4606]: I0123 16:56:50.047923 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 
16:56:50 crc kubenswrapper[4606]: I0123 16:56:50.048723 4606 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39" exitCode=0 Jan 23 16:56:50 crc kubenswrapper[4606]: I0123 16:56:50.048823 4606 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48" exitCode=0 Jan 23 16:56:50 crc kubenswrapper[4606]: I0123 16:56:50.048911 4606 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc" exitCode=0 Jan 23 16:56:50 crc kubenswrapper[4606]: I0123 16:56:50.048993 4606 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135" exitCode=2 Jan 23 16:56:50 crc kubenswrapper[4606]: I0123 16:56:50.049114 4606 scope.go:117] "RemoveContainer" containerID="5182305963d85fa1e7837a8208de7a988b226b973c49c8e7066a6c5a3d143020" Jan 23 16:56:50 crc kubenswrapper[4606]: I0123 16:56:50.059625 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"5c873bff329e1cdf7d87169495a73aa81e942ab41b8cc20439afac55c24e2790"} Jan 23 16:56:50 crc kubenswrapper[4606]: I0123 16:56:50.066231 4606 generic.go:334] "Generic (PLEG): container finished" podID="778ea821-b161-4045-b9bc-78224119964a" containerID="25b93b2ac515b31a68af9110421850c0d1df8c20d1e0ecb12c36bf52a7d2a5d1" exitCode=0 Jan 23 16:56:50 crc kubenswrapper[4606]: I0123 16:56:50.067108 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"778ea821-b161-4045-b9bc-78224119964a","Type":"ContainerDied","Data":"25b93b2ac515b31a68af9110421850c0d1df8c20d1e0ecb12c36bf52a7d2a5d1"} Jan 23 16:56:50 crc kubenswrapper[4606]: I0123 16:56:50.068090 4606 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:50 crc kubenswrapper[4606]: I0123 16:56:50.068545 4606 status_manager.go:851] "Failed to get status for pod" podUID="778ea821-b161-4045-b9bc-78224119964a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:50 crc kubenswrapper[4606]: I0123 16:56:50.068868 4606 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:50 crc kubenswrapper[4606]: I0123 16:56:50.071959 4606 status_manager.go:851] "Failed to get status for pod" podUID="bcf0df0a-2bd4-4927-b81a-056a26b821b2" pod="openshift-marketplace/redhat-marketplace-nmtg9" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nmtg9\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:50 crc kubenswrapper[4606]: E0123 16:56:50.104296 4606 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.22:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d6a9342aec04c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 16:56:49.337958476 +0000 UTC m=+243.338150882,LastTimestamp:2026-01-23 16:56:49.337958476 +0000 UTC m=+243.338150882,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 16:56:50 crc kubenswrapper[4606]: I0123 16:56:50.124193 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nmtg9" Jan 23 16:56:50 crc kubenswrapper[4606]: I0123 16:56:50.125061 4606 status_manager.go:851] "Failed to get status for pod" podUID="778ea821-b161-4045-b9bc-78224119964a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:50 crc kubenswrapper[4606]: I0123 16:56:50.125448 4606 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:50 crc kubenswrapper[4606]: I0123 16:56:50.125733 4606 status_manager.go:851] "Failed to get status for pod" podUID="bcf0df0a-2bd4-4927-b81a-056a26b821b2" pod="openshift-marketplace/redhat-marketplace-nmtg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nmtg9\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:50 crc kubenswrapper[4606]: I0123 16:56:50.126001 4606 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:50 crc kubenswrapper[4606]: I0123 16:56:50.756286 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-65d6z" Jan 23 16:56:50 crc kubenswrapper[4606]: I0123 16:56:50.758067 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-65d6z" Jan 23 16:56:50 crc 
kubenswrapper[4606]: I0123 16:56:50.797855 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-65d6z" Jan 23 16:56:50 crc kubenswrapper[4606]: I0123 16:56:50.798480 4606 status_manager.go:851] "Failed to get status for pod" podUID="98f34be0-cadc-459c-ae73-eec312f1b1bd" pod="openshift-marketplace/community-operators-65d6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-65d6z\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:50 crc kubenswrapper[4606]: I0123 16:56:50.798825 4606 status_manager.go:851] "Failed to get status for pod" podUID="778ea821-b161-4045-b9bc-78224119964a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:50 crc kubenswrapper[4606]: I0123 16:56:50.799691 4606 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:50 crc kubenswrapper[4606]: I0123 16:56:50.799953 4606 status_manager.go:851] "Failed to get status for pod" podUID="bcf0df0a-2bd4-4927-b81a-056a26b821b2" pod="openshift-marketplace/redhat-marketplace-nmtg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nmtg9\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:51 crc kubenswrapper[4606]: I0123 16:56:51.079540 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"7153b0988faa7c224bf58d1892496d348a42e6abe480b5a8d50ccd576d8b080e"} Jan 23 16:56:51 crc kubenswrapper[4606]: I0123 16:56:51.299766 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-65d6z" Jan 23 16:56:51 crc kubenswrapper[4606]: I0123 16:56:51.300489 4606 status_manager.go:851] "Failed to get status for pod" podUID="bcf0df0a-2bd4-4927-b81a-056a26b821b2" pod="openshift-marketplace/redhat-marketplace-nmtg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nmtg9\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:51 crc kubenswrapper[4606]: I0123 16:56:51.301013 4606 status_manager.go:851] "Failed to get status for pod" podUID="98f34be0-cadc-459c-ae73-eec312f1b1bd" pod="openshift-marketplace/community-operators-65d6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-65d6z\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:51 crc kubenswrapper[4606]: I0123 16:56:51.301726 4606 status_manager.go:851] "Failed to get status for pod" podUID="778ea821-b161-4045-b9bc-78224119964a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:51 crc kubenswrapper[4606]: I0123 16:56:51.302040 4606 status_manager.go:851] 
"Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:51 crc kubenswrapper[4606]: I0123 16:56:51.604810 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 16:56:51 crc kubenswrapper[4606]: I0123 16:56:51.605695 4606 status_manager.go:851] "Failed to get status for pod" podUID="bcf0df0a-2bd4-4927-b81a-056a26b821b2" pod="openshift-marketplace/redhat-marketplace-nmtg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nmtg9\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:51 crc kubenswrapper[4606]: I0123 16:56:51.606301 4606 status_manager.go:851] "Failed to get status for pod" podUID="98f34be0-cadc-459c-ae73-eec312f1b1bd" pod="openshift-marketplace/community-operators-65d6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-65d6z\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:51 crc kubenswrapper[4606]: I0123 16:56:51.607309 4606 status_manager.go:851] "Failed to get status for pod" podUID="778ea821-b161-4045-b9bc-78224119964a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:51 crc kubenswrapper[4606]: I0123 16:56:51.608229 4606 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:51 crc kubenswrapper[4606]: I0123 16:56:51.700011 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/778ea821-b161-4045-b9bc-78224119964a-kubelet-dir\") pod \"778ea821-b161-4045-b9bc-78224119964a\" (UID: \"778ea821-b161-4045-b9bc-78224119964a\") " Jan 23 16:56:51 crc kubenswrapper[4606]: I0123 16:56:51.700100 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/778ea821-b161-4045-b9bc-78224119964a-var-lock\") pod \"778ea821-b161-4045-b9bc-78224119964a\" (UID: \"778ea821-b161-4045-b9bc-78224119964a\") " Jan 23 16:56:51 crc kubenswrapper[4606]: I0123 16:56:51.700134 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/778ea821-b161-4045-b9bc-78224119964a-kube-api-access\") pod \"778ea821-b161-4045-b9bc-78224119964a\" (UID: \"778ea821-b161-4045-b9bc-78224119964a\") " Jan 23 16:56:51 crc kubenswrapper[4606]: I0123 16:56:51.700197 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/778ea821-b161-4045-b9bc-78224119964a-var-lock" (OuterVolumeSpecName: "var-lock") pod "778ea821-b161-4045-b9bc-78224119964a" (UID: "778ea821-b161-4045-b9bc-78224119964a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:56:51 crc kubenswrapper[4606]: I0123 16:56:51.700245 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/778ea821-b161-4045-b9bc-78224119964a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "778ea821-b161-4045-b9bc-78224119964a" (UID: "778ea821-b161-4045-b9bc-78224119964a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:56:51 crc kubenswrapper[4606]: I0123 16:56:51.700420 4606 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/778ea821-b161-4045-b9bc-78224119964a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 16:56:51 crc kubenswrapper[4606]: I0123 16:56:51.700437 4606 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/778ea821-b161-4045-b9bc-78224119964a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 23 16:56:51 crc kubenswrapper[4606]: I0123 16:56:51.709180 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/778ea821-b161-4045-b9bc-78224119964a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "778ea821-b161-4045-b9bc-78224119964a" (UID: "778ea821-b161-4045-b9bc-78224119964a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:56:51 crc kubenswrapper[4606]: I0123 16:56:51.737665 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w6dmk" Jan 23 16:56:51 crc kubenswrapper[4606]: I0123 16:56:51.737796 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-w6dmk" Jan 23 16:56:51 crc kubenswrapper[4606]: I0123 16:56:51.801831 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/778ea821-b161-4045-b9bc-78224119964a-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.027112 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.028177 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.028711 4606 status_manager.go:851] "Failed to get status for pod" podUID="778ea821-b161-4045-b9bc-78224119964a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.028937 4606 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.029130 4606 status_manager.go:851] "Failed to get status for pod" podUID="bcf0df0a-2bd4-4927-b81a-056a26b821b2" pod="openshift-marketplace/redhat-marketplace-nmtg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nmtg9\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.029322 4606 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.029501 4606 status_manager.go:851] "Failed to get status for pod" podUID="98f34be0-cadc-459c-ae73-eec312f1b1bd" pod="openshift-marketplace/community-operators-65d6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-65d6z\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.086190 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.086181 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"778ea821-b161-4045-b9bc-78224119964a","Type":"ContainerDied","Data":"c2de7cfc822634a22646302036afff9f2a25b00a51919b284c52676f46b1e942"} Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.086620 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2de7cfc822634a22646302036afff9f2a25b00a51919b284c52676f46b1e942" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.091260 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.092130 4606 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32" exitCode=0 Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.092251 4606 scope.go:117] "RemoveContainer" containerID="d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.092481 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.097655 4606 status_manager.go:851] "Failed to get status for pod" podUID="778ea821-b161-4045-b9bc-78224119964a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.098641 4606 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.099140 4606 status_manager.go:851] "Failed to get status for pod" podUID="bcf0df0a-2bd4-4927-b81a-056a26b821b2" pod="openshift-marketplace/redhat-marketplace-nmtg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nmtg9\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.099306 4606 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.099919 4606 status_manager.go:851] "Failed to get status for pod" podUID="98f34be0-cadc-459c-ae73-eec312f1b1bd" pod="openshift-marketplace/community-operators-65d6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-65d6z\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.100778 4606 status_manager.go:851] "Failed to get 
status for pod" podUID="778ea821-b161-4045-b9bc-78224119964a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.100928 4606 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.101117 4606 status_manager.go:851] "Failed to get status for pod" podUID="bcf0df0a-2bd4-4927-b81a-056a26b821b2" pod="openshift-marketplace/redhat-marketplace-nmtg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nmtg9\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.101344 4606 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.101643 4606 status_manager.go:851] "Failed to get status for pod" podUID="98f34be0-cadc-459c-ae73-eec312f1b1bd" pod="openshift-marketplace/community-operators-65d6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-65d6z\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.106903 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.106951 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.107021 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.107892 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.107926 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.107943 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.108664 4606 scope.go:117] "RemoveContainer" containerID="2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.130049 4606 scope.go:117] "RemoveContainer" containerID="4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.144830 4606 scope.go:117] "RemoveContainer" containerID="6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.162801 4606 scope.go:117] "RemoveContainer" containerID="6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.180404 4606 scope.go:117] "RemoveContainer" containerID="303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.209580 4606 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.209661 4606 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.209671 4606 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.217712 4606 scope.go:117] "RemoveContainer" containerID="d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39" Jan 23 16:56:52 crc kubenswrapper[4606]: E0123 16:56:52.218257 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\": container with ID starting with d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39 not found: ID does not exist" containerID="d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.218303 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39"} err="failed to get container status \"d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\": rpc error: code = NotFound 
desc = could not find container \"d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39\": container with ID starting with d506a4b4eeb66a7d916d667980e2add04adb6463c1b10e8331d8895172081a39 not found: ID does not exist" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.218328 4606 scope.go:117] "RemoveContainer" containerID="2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48" Jan 23 16:56:52 crc kubenswrapper[4606]: E0123 16:56:52.219138 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\": container with ID starting with 2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48 not found: ID does not exist" containerID="2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.219173 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48"} err="failed to get container status \"2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\": rpc error: code = NotFound desc = could not find container \"2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48\": container with ID starting with 2d655a882aece9c3e05da26032dcf9437a0c4f306262b144e0d38674be4c1f48 not found: ID does not exist" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.219197 4606 scope.go:117] "RemoveContainer" containerID="4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc" Jan 23 16:56:52 crc kubenswrapper[4606]: E0123 16:56:52.219507 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\": container with ID starting with 4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc not found: ID does not exist" containerID="4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.219540 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc"} err="failed to get container status \"4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\": rpc error: code = NotFound desc = could not find container \"4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc\": container with ID starting with 4c5a269d46c563b22a0bd9b2163d87b15fb233076de435819d70c0f1f07a66cc not found: ID does not exist" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.219560 4606 scope.go:117] "RemoveContainer" containerID="6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135" Jan 23 16:56:52 crc kubenswrapper[4606]: E0123 16:56:52.219844 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\": container with ID starting with 6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135 not found: ID does not exist" containerID="6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.219863 4606 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135"} err="failed to get container status \"6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\": rpc error: code = NotFound desc = could not find container \"6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135\": container with ID starting with 6df3d043fe2fc8545fe39f3720e5d7d7ca3958299841737ca84de0ffd3597135 not found: ID does not exist" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.219877 4606 scope.go:117] "RemoveContainer" containerID="6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32" Jan 23 16:56:52 crc kubenswrapper[4606]: E0123 16:56:52.220273 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\": container with ID starting with 6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32 not found: ID does not exist" containerID="6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.220304 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32"} err="failed to get container status \"6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\": rpc error: code = NotFound desc = could not find container \"6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32\": container with ID starting with 6fc0f65461dbd186bef1d8f3f0f73e1b8f07a78ad3c53dedaa62c55f01f9ed32 not found: ID does not exist" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.220317 4606 scope.go:117] "RemoveContainer" containerID="303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa" Jan 23 16:56:52 crc kubenswrapper[4606]: E0123 16:56:52.220610 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\": container with ID starting with 303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa not found: ID does not exist" containerID="303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.220640 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa"} err="failed to get container status \"303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\": rpc error: code = NotFound desc = could not find container \"303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa\": container with ID starting with 303d74877f302be70092cf70b76bd6057267e967ad20e05ea0ed4b943429bdfa not found: ID does not exist" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.379087 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 23 16:56:52 crc kubenswrapper[4606]: E0123 16:56:52.398796 4606 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.22:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" volumeName="registry-storage" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.407662 4606 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.408828 4606 status_manager.go:851] "Failed to get status for pod" podUID="bcf0df0a-2bd4-4927-b81a-056a26b821b2" pod="openshift-marketplace/redhat-marketplace-nmtg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nmtg9\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.409344 4606 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.409575 4606 status_manager.go:851] "Failed to get status for pod" podUID="98f34be0-cadc-459c-ae73-eec312f1b1bd" pod="openshift-marketplace/community-operators-65d6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-65d6z\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.409811 4606 status_manager.go:851] "Failed to get status for pod" podUID="778ea821-b161-4045-b9bc-78224119964a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:52 crc kubenswrapper[4606]: I0123 16:56:52.778501 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w6dmk" podUID="0b0ff184-9aa5-4873-8c4c-eec703f8f541" containerName="registry-server" probeResult="failure" output=< Jan 23 16:56:52 crc kubenswrapper[4606]: timeout: failed to connect service ":50051" within 1s Jan 23 16:56:52 crc kubenswrapper[4606]: > Jan 23 16:56:56 crc kubenswrapper[4606]: I0123 16:56:56.368675 4606 status_manager.go:851] "Failed to get status for pod" podUID="98f34be0-cadc-459c-ae73-eec312f1b1bd" pod="openshift-marketplace/community-operators-65d6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-65d6z\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:56 crc kubenswrapper[4606]: I0123 16:56:56.369756 4606 status_manager.go:851] "Failed to get status for pod" podUID="778ea821-b161-4045-b9bc-78224119964a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:56 crc kubenswrapper[4606]: I0123 16:56:56.370741 4606 status_manager.go:851] "Failed 
to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:56 crc kubenswrapper[4606]: I0123 16:56:56.371180 4606 status_manager.go:851] "Failed to get status for pod" podUID="bcf0df0a-2bd4-4927-b81a-056a26b821b2" pod="openshift-marketplace/redhat-marketplace-nmtg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nmtg9\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:58 crc kubenswrapper[4606]: I0123 16:56:58.389893 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fkgdl" Jan 23 16:56:58 crc kubenswrapper[4606]: I0123 16:56:58.390436 4606 status_manager.go:851] "Failed to get status for pod" podUID="bcf0df0a-2bd4-4927-b81a-056a26b821b2" pod="openshift-marketplace/redhat-marketplace-nmtg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nmtg9\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:58 crc kubenswrapper[4606]: I0123 16:56:58.390849 4606 status_manager.go:851] "Failed to get status for pod" podUID="06d9e047-39b3-4c22-ac66-dfe3e5d215b3" pod="openshift-marketplace/certified-operators-fkgdl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-fkgdl\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:58 crc kubenswrapper[4606]: I0123 16:56:58.392110 4606 status_manager.go:851] "Failed to get status for pod" podUID="98f34be0-cadc-459c-ae73-eec312f1b1bd" pod="openshift-marketplace/community-operators-65d6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-65d6z\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:58 crc kubenswrapper[4606]: I0123 16:56:58.392402 4606 status_manager.go:851] "Failed to get status for pod" podUID="778ea821-b161-4045-b9bc-78224119964a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:58 crc kubenswrapper[4606]: I0123 16:56:58.392625 4606 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:58 crc kubenswrapper[4606]: E0123 16:56:58.772380 4606 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:58 crc kubenswrapper[4606]: E0123 16:56:58.773088 4606 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:58 crc kubenswrapper[4606]: E0123 16:56:58.773616 4606 
controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:58 crc kubenswrapper[4606]: E0123 16:56:58.773988 4606 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:58 crc kubenswrapper[4606]: E0123 16:56:58.774671 4606 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:56:58 crc kubenswrapper[4606]: I0123 16:56:58.774745 4606 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 23 16:56:58 crc kubenswrapper[4606]: E0123 16:56:58.775072 4606 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" interval="200ms" Jan 23 16:56:58 crc kubenswrapper[4606]: E0123 16:56:58.976006 4606 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" interval="400ms" Jan 23 16:56:59 crc kubenswrapper[4606]: E0123 16:56:59.376700 4606 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" interval="800ms" Jan 23 16:57:00 crc kubenswrapper[4606]: E0123 16:57:00.105544 4606 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.22:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d6a9342aec04c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 16:56:49.337958476 +0000 UTC m=+243.338150882,LastTimestamp:2026-01-23 16:56:49.337958476 +0000 UTC m=+243.338150882,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 16:57:00 crc kubenswrapper[4606]: E0123 16:57:00.177762 4606 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" interval="1.6s" Jan 23 
Jan 23 16:57:00 crc kubenswrapper[4606]: I0123 16:57:00.557770 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" podUID="dc495cd6-f31a-4b33-9c14-12b37c25f47c" containerName="oauth-openshift" containerID="cri-o://b563215aac8efe042e7223b8440896d50b098dd71a92f7eb661f8f93cfc24cc9" gracePeriod=15
Jan 23 16:57:01 crc kubenswrapper[4606]: E0123 16:57:01.779555 4606 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" interval="3.2s"
Jan 23 16:57:01 crc kubenswrapper[4606]: I0123 16:57:01.782510 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w6dmk"
Jan 23 16:57:01 crc kubenswrapper[4606]: I0123 16:57:01.784994 4606 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.22:6443: connect: connection refused"
Jan 23 16:57:01 crc kubenswrapper[4606]: I0123 16:57:01.785580 4606 status_manager.go:851] "Failed to get status for pod" podUID="0b0ff184-9aa5-4873-8c4c-eec703f8f541" pod="openshift-marketplace/redhat-operators-w6dmk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-w6dmk\": dial tcp 38.102.83.22:6443: connect: connection refused"
Jan 23 16:57:01 crc kubenswrapper[4606]: I0123 16:57:01.786018 4606 status_manager.go:851] "Failed to get status for pod" podUID="bcf0df0a-2bd4-4927-b81a-056a26b821b2" pod="openshift-marketplace/redhat-marketplace-nmtg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nmtg9\": dial tcp 38.102.83.22:6443: connect: connection refused"
Jan 23 16:57:01 crc kubenswrapper[4606]: I0123 16:57:01.786317 4606 status_manager.go:851] "Failed to get status for pod" podUID="06d9e047-39b3-4c22-ac66-dfe3e5d215b3" pod="openshift-marketplace/certified-operators-fkgdl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-fkgdl\": dial tcp 38.102.83.22:6443: connect: connection refused"
Jan 23 16:57:01 crc kubenswrapper[4606]: I0123 16:57:01.786693 4606 status_manager.go:851] "Failed to get status for pod" podUID="98f34be0-cadc-459c-ae73-eec312f1b1bd" pod="openshift-marketplace/community-operators-65d6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-65d6z\": dial tcp 38.102.83.22:6443: connect: connection refused"
Jan 23 16:57:01 crc kubenswrapper[4606]: I0123 16:57:01.786944 4606 status_manager.go:851] "Failed to get status for pod" podUID="778ea821-b161-4045-b9bc-78224119964a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused"
Jan 23 16:57:01 crc kubenswrapper[4606]: I0123 16:57:01.818701 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-w6dmk"
Jan 23 16:57:01 crc kubenswrapper[4606]: I0123 16:57:01.819262 4606 status_manager.go:851]
"Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:01 crc kubenswrapper[4606]: I0123 16:57:01.819765 4606 status_manager.go:851] "Failed to get status for pod" podUID="0b0ff184-9aa5-4873-8c4c-eec703f8f541" pod="openshift-marketplace/redhat-operators-w6dmk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-w6dmk\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:01 crc kubenswrapper[4606]: I0123 16:57:01.820352 4606 status_manager.go:851] "Failed to get status for pod" podUID="bcf0df0a-2bd4-4927-b81a-056a26b821b2" pod="openshift-marketplace/redhat-marketplace-nmtg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nmtg9\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:01 crc kubenswrapper[4606]: I0123 16:57:01.820681 4606 status_manager.go:851] "Failed to get status for pod" podUID="06d9e047-39b3-4c22-ac66-dfe3e5d215b3" pod="openshift-marketplace/certified-operators-fkgdl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-fkgdl\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:01 crc kubenswrapper[4606]: I0123 16:57:01.820950 4606 status_manager.go:851] "Failed to get status for pod" podUID="98f34be0-cadc-459c-ae73-eec312f1b1bd" pod="openshift-marketplace/community-operators-65d6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-65d6z\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:01 crc kubenswrapper[4606]: I0123 16:57:01.821250 4606 status_manager.go:851] "Failed to get status for pod" podUID="778ea821-b161-4045-b9bc-78224119964a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.092995 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.093945 4606 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.094366 4606 status_manager.go:851] "Failed to get status for pod" podUID="0b0ff184-9aa5-4873-8c4c-eec703f8f541" pod="openshift-marketplace/redhat-operators-w6dmk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-w6dmk\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.094686 4606 status_manager.go:851] "Failed to get status for pod" podUID="bcf0df0a-2bd4-4927-b81a-056a26b821b2" pod="openshift-marketplace/redhat-marketplace-nmtg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nmtg9\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.094933 4606 status_manager.go:851] "Failed to get status for pod" podUID="dc495cd6-f31a-4b33-9c14-12b37c25f47c" pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-vfv9l\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.095199 4606 status_manager.go:851] "Failed to get status for pod" podUID="06d9e047-39b3-4c22-ac66-dfe3e5d215b3" pod="openshift-marketplace/certified-operators-fkgdl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-fkgdl\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.095553 4606 status_manager.go:851] "Failed to get status for pod" podUID="98f34be0-cadc-459c-ae73-eec312f1b1bd" pod="openshift-marketplace/community-operators-65d6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-65d6z\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.095980 4606 status_manager.go:851] "Failed to get status for pod" podUID="778ea821-b161-4045-b9bc-78224119964a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.139918 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dc495cd6-f31a-4b33-9c14-12b37c25f47c-audit-dir\") pod \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.139986 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-session\") pod \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\" 
(UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.140040 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-user-template-error\") pod \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.140041 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc495cd6-f31a-4b33-9c14-12b37c25f47c-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "dc495cd6-f31a-4b33-9c14-12b37c25f47c" (UID: "dc495cd6-f31a-4b33-9c14-12b37c25f47c"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.140062 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxjr8\" (UniqueName: \"kubernetes.io/projected/dc495cd6-f31a-4b33-9c14-12b37c25f47c-kube-api-access-qxjr8\") pod \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.140145 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-user-idp-0-file-data\") pod \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.140185 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-cliconfig\") pod \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.140225 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-user-template-provider-selection\") pod \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.140255 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-serving-cert\") pod \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.140271 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-router-certs\") pod \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.140288 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-trusted-ca-bundle\") pod \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\" (UID: 
\"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.140316 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dc495cd6-f31a-4b33-9c14-12b37c25f47c-audit-policies\") pod \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.140331 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-user-template-login\") pod \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.140362 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-service-ca\") pod \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.140384 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-ocp-branding-template\") pod \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\" (UID: \"dc495cd6-f31a-4b33-9c14-12b37c25f47c\") " Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.140674 4606 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dc495cd6-f31a-4b33-9c14-12b37c25f47c-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.141086 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "dc495cd6-f31a-4b33-9c14-12b37c25f47c" (UID: "dc495cd6-f31a-4b33-9c14-12b37c25f47c"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.142309 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc495cd6-f31a-4b33-9c14-12b37c25f47c-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "dc495cd6-f31a-4b33-9c14-12b37c25f47c" (UID: "dc495cd6-f31a-4b33-9c14-12b37c25f47c"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.145748 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "dc495cd6-f31a-4b33-9c14-12b37c25f47c" (UID: "dc495cd6-f31a-4b33-9c14-12b37c25f47c"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.148047 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "dc495cd6-f31a-4b33-9c14-12b37c25f47c" (UID: "dc495cd6-f31a-4b33-9c14-12b37c25f47c"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.149628 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "dc495cd6-f31a-4b33-9c14-12b37c25f47c" (UID: "dc495cd6-f31a-4b33-9c14-12b37c25f47c"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.150663 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "dc495cd6-f31a-4b33-9c14-12b37c25f47c" (UID: "dc495cd6-f31a-4b33-9c14-12b37c25f47c"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.150923 4606 generic.go:334] "Generic (PLEG): container finished" podID="dc495cd6-f31a-4b33-9c14-12b37c25f47c" containerID="b563215aac8efe042e7223b8440896d50b098dd71a92f7eb661f8f93cfc24cc9" exitCode=0 Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.150996 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" event={"ID":"dc495cd6-f31a-4b33-9c14-12b37c25f47c","Type":"ContainerDied","Data":"b563215aac8efe042e7223b8440896d50b098dd71a92f7eb661f8f93cfc24cc9"} Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.151034 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" event={"ID":"dc495cd6-f31a-4b33-9c14-12b37c25f47c","Type":"ContainerDied","Data":"ae87e16d03dd885dffb0b39fe6e2846bc4a46a12c3fa7969379f27d3f6d8ed8b"} Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.151061 4606 scope.go:117] "RemoveContainer" containerID="b563215aac8efe042e7223b8440896d50b098dd71a92f7eb661f8f93cfc24cc9" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.151190 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.152905 4606 status_manager.go:851] "Failed to get status for pod" podUID="dc495cd6-f31a-4b33-9c14-12b37c25f47c" pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-vfv9l\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.153281 4606 status_manager.go:851] "Failed to get status for pod" podUID="06d9e047-39b3-4c22-ac66-dfe3e5d215b3" pod="openshift-marketplace/certified-operators-fkgdl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-fkgdl\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.153720 4606 status_manager.go:851] "Failed to get status for pod" podUID="98f34be0-cadc-459c-ae73-eec312f1b1bd" pod="openshift-marketplace/community-operators-65d6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-65d6z\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.154394 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc495cd6-f31a-4b33-9c14-12b37c25f47c-kube-api-access-qxjr8" (OuterVolumeSpecName: "kube-api-access-qxjr8") pod "dc495cd6-f31a-4b33-9c14-12b37c25f47c" (UID: "dc495cd6-f31a-4b33-9c14-12b37c25f47c"). InnerVolumeSpecName "kube-api-access-qxjr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.154548 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "dc495cd6-f31a-4b33-9c14-12b37c25f47c" (UID: "dc495cd6-f31a-4b33-9c14-12b37c25f47c"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.154554 4606 status_manager.go:851] "Failed to get status for pod" podUID="778ea821-b161-4045-b9bc-78224119964a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.155066 4606 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.155334 4606 status_manager.go:851] "Failed to get status for pod" podUID="0b0ff184-9aa5-4873-8c4c-eec703f8f541" pod="openshift-marketplace/redhat-operators-w6dmk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-w6dmk\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.155499 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "dc495cd6-f31a-4b33-9c14-12b37c25f47c" (UID: "dc495cd6-f31a-4b33-9c14-12b37c25f47c"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.155581 4606 status_manager.go:851] "Failed to get status for pod" podUID="bcf0df0a-2bd4-4927-b81a-056a26b821b2" pod="openshift-marketplace/redhat-marketplace-nmtg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nmtg9\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.159844 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "dc495cd6-f31a-4b33-9c14-12b37c25f47c" (UID: "dc495cd6-f31a-4b33-9c14-12b37c25f47c"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.160146 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "dc495cd6-f31a-4b33-9c14-12b37c25f47c" (UID: "dc495cd6-f31a-4b33-9c14-12b37c25f47c"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.160693 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.160841 4606 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96" exitCode=1 Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.160883 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96"} Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.161867 4606 scope.go:117] "RemoveContainer" containerID="a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.162704 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "dc495cd6-f31a-4b33-9c14-12b37c25f47c" (UID: "dc495cd6-f31a-4b33-9c14-12b37c25f47c"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.168125 4606 status_manager.go:851] "Failed to get status for pod" podUID="dc495cd6-f31a-4b33-9c14-12b37c25f47c" pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-vfv9l\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.168644 4606 status_manager.go:851] "Failed to get status for pod" podUID="06d9e047-39b3-4c22-ac66-dfe3e5d215b3" pod="openshift-marketplace/certified-operators-fkgdl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-fkgdl\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.169429 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "dc495cd6-f31a-4b33-9c14-12b37c25f47c" (UID: "dc495cd6-f31a-4b33-9c14-12b37c25f47c"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.169532 4606 status_manager.go:851] "Failed to get status for pod" podUID="98f34be0-cadc-459c-ae73-eec312f1b1bd" pod="openshift-marketplace/community-operators-65d6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-65d6z\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.174745 4606 status_manager.go:851] "Failed to get status for pod" podUID="778ea821-b161-4045-b9bc-78224119964a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.175204 4606 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.175416 4606 status_manager.go:851] "Failed to get status for pod" podUID="0b0ff184-9aa5-4873-8c4c-eec703f8f541" pod="openshift-marketplace/redhat-operators-w6dmk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-w6dmk\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.175617 4606 status_manager.go:851] "Failed to get status for pod" podUID="bcf0df0a-2bd4-4927-b81a-056a26b821b2" pod="openshift-marketplace/redhat-marketplace-nmtg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nmtg9\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.175787 4606 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.231416 4606 scope.go:117] "RemoveContainer" containerID="b563215aac8efe042e7223b8440896d50b098dd71a92f7eb661f8f93cfc24cc9" Jan 23 16:57:02 crc kubenswrapper[4606]: E0123 16:57:02.231951 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b563215aac8efe042e7223b8440896d50b098dd71a92f7eb661f8f93cfc24cc9\": container with ID starting with b563215aac8efe042e7223b8440896d50b098dd71a92f7eb661f8f93cfc24cc9 not found: ID does not exist" containerID="b563215aac8efe042e7223b8440896d50b098dd71a92f7eb661f8f93cfc24cc9" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.231988 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b563215aac8efe042e7223b8440896d50b098dd71a92f7eb661f8f93cfc24cc9"} err="failed to get container status \"b563215aac8efe042e7223b8440896d50b098dd71a92f7eb661f8f93cfc24cc9\": rpc error: code = NotFound desc = could not find container 
\"b563215aac8efe042e7223b8440896d50b098dd71a92f7eb661f8f93cfc24cc9\": container with ID starting with b563215aac8efe042e7223b8440896d50b098dd71a92f7eb661f8f93cfc24cc9 not found: ID does not exist" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.242954 4606 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.243856 4606 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.243899 4606 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.243985 4606 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.244065 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxjr8\" (UniqueName: \"kubernetes.io/projected/dc495cd6-f31a-4b33-9c14-12b37c25f47c-kube-api-access-qxjr8\") on node \"crc\" DevicePath \"\"" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.244084 4606 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.244096 4606 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.244158 4606 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.244176 4606 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.244188 4606 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.244249 4606 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 16:57:02 crc kubenswrapper[4606]: 
I0123 16:57:02.244266 4606 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dc495cd6-f31a-4b33-9c14-12b37c25f47c-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.244277 4606 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dc495cd6-f31a-4b33-9c14-12b37c25f47c-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.308126 4606 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.455536 4606 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.456354 4606 status_manager.go:851] "Failed to get status for pod" podUID="dc495cd6-f31a-4b33-9c14-12b37c25f47c" pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-vfv9l\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.456842 4606 status_manager.go:851] "Failed to get status for pod" podUID="06d9e047-39b3-4c22-ac66-dfe3e5d215b3" pod="openshift-marketplace/certified-operators-fkgdl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-fkgdl\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.457211 4606 status_manager.go:851] "Failed to get status for pod" podUID="98f34be0-cadc-459c-ae73-eec312f1b1bd" pod="openshift-marketplace/community-operators-65d6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-65d6z\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.457486 4606 status_manager.go:851] "Failed to get status for pod" podUID="778ea821-b161-4045-b9bc-78224119964a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.457756 4606 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.458023 4606 status_manager.go:851] "Failed to get status for pod" podUID="0b0ff184-9aa5-4873-8c4c-eec703f8f541" pod="openshift-marketplace/redhat-operators-w6dmk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-w6dmk\": dial tcp 38.102.83.22:6443: connect: connection refused" 
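Almost every entry in this stretch is the same status_manager.go:851 "Failed to get status for pod" failure, repeated once per tracked pod each time the sync loop wakes while api-int.crc.testing:6443 refuses connections. When triaging a flood like this it helps to tally entries by message and pod. A small Go helper for lines in this journal format; the regular expressions, the counting logic, and the triage.go name are assumptions of this sketch, not anything shipped with the kubelet:

```go
// Quick triage helper for journal-style kubelet lines like the
// ones above: counts structured entries by quoted message plus
// the pod="..." field, reading the log on stdin.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	// First quoted message after the "pid file.go:line]" prefix.
	msgRe = regexp.MustCompile(`\d+ \w+\.go:\d+\] "([^"]+)"`)
	podRe = regexp.MustCompile(` pod="([^"]+)"`)
)

func main() {
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // entries can be long (Event dumps)
	for sc.Scan() {
		line := sc.Text()
		m := msgRe.FindStringSubmatch(line)
		if m == nil {
			continue // skips unstructured messages
		}
		key := m[1]
		if p := podRe.FindStringSubmatch(line); p != nil {
			key += " " + p[1]
		}
		counts[key]++
	}
	for k, n := range counts {
		fmt.Printf("%6d %s\n", n, k)
	}
}
```

Fed with something like `journalctl -u kubelet | go run triage.go`, it would print one count per (message, pod) pair, which makes the dominant failure obvious at a glance.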
Jan 23 16:57:02 crc kubenswrapper[4606]: I0123 16:57:02.458271 4606 status_manager.go:851] "Failed to get status for pod" podUID="bcf0df0a-2bd4-4927-b81a-056a26b821b2" pod="openshift-marketplace/redhat-marketplace-nmtg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nmtg9\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:03 crc kubenswrapper[4606]: I0123 16:57:03.169793 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 23 16:57:03 crc kubenswrapper[4606]: I0123 16:57:03.170819 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a402b82855acb2e82d07fb32eb7ed90ca465305d6fbd7e170f327f68b803be5f"} Jan 23 16:57:03 crc kubenswrapper[4606]: I0123 16:57:03.171672 4606 status_manager.go:851] "Failed to get status for pod" podUID="bcf0df0a-2bd4-4927-b81a-056a26b821b2" pod="openshift-marketplace/redhat-marketplace-nmtg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nmtg9\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:03 crc kubenswrapper[4606]: I0123 16:57:03.171991 4606 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:03 crc kubenswrapper[4606]: I0123 16:57:03.172301 4606 status_manager.go:851] "Failed to get status for pod" podUID="dc495cd6-f31a-4b33-9c14-12b37c25f47c" pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-vfv9l\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:03 crc kubenswrapper[4606]: I0123 16:57:03.172637 4606 status_manager.go:851] "Failed to get status for pod" podUID="06d9e047-39b3-4c22-ac66-dfe3e5d215b3" pod="openshift-marketplace/certified-operators-fkgdl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-fkgdl\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:03 crc kubenswrapper[4606]: I0123 16:57:03.172943 4606 status_manager.go:851] "Failed to get status for pod" podUID="98f34be0-cadc-459c-ae73-eec312f1b1bd" pod="openshift-marketplace/community-operators-65d6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-65d6z\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:03 crc kubenswrapper[4606]: I0123 16:57:03.173260 4606 status_manager.go:851] "Failed to get status for pod" podUID="778ea821-b161-4045-b9bc-78224119964a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:03 crc kubenswrapper[4606]: I0123 16:57:03.173544 4606 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:03 crc kubenswrapper[4606]: I0123 16:57:03.173887 4606 status_manager.go:851] "Failed to get status for pod" podUID="0b0ff184-9aa5-4873-8c4c-eec703f8f541" pod="openshift-marketplace/redhat-operators-w6dmk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-w6dmk\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:03 crc kubenswrapper[4606]: I0123 16:57:03.365532 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:57:03 crc kubenswrapper[4606]: I0123 16:57:03.366123 4606 status_manager.go:851] "Failed to get status for pod" podUID="98f34be0-cadc-459c-ae73-eec312f1b1bd" pod="openshift-marketplace/community-operators-65d6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-65d6z\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:03 crc kubenswrapper[4606]: I0123 16:57:03.366365 4606 status_manager.go:851] "Failed to get status for pod" podUID="778ea821-b161-4045-b9bc-78224119964a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:03 crc kubenswrapper[4606]: I0123 16:57:03.366675 4606 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:03 crc kubenswrapper[4606]: I0123 16:57:03.366934 4606 status_manager.go:851] "Failed to get status for pod" podUID="0b0ff184-9aa5-4873-8c4c-eec703f8f541" pod="openshift-marketplace/redhat-operators-w6dmk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-w6dmk\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:03 crc kubenswrapper[4606]: I0123 16:57:03.367207 4606 status_manager.go:851] "Failed to get status for pod" podUID="bcf0df0a-2bd4-4927-b81a-056a26b821b2" pod="openshift-marketplace/redhat-marketplace-nmtg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nmtg9\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:03 crc kubenswrapper[4606]: I0123 16:57:03.367646 4606 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:03 crc kubenswrapper[4606]: I0123 16:57:03.367952 4606 status_manager.go:851] "Failed to get status for pod" podUID="dc495cd6-f31a-4b33-9c14-12b37c25f47c" pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-vfv9l\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:03 crc kubenswrapper[4606]: I0123 16:57:03.368361 4606 status_manager.go:851] "Failed to get status for pod" podUID="06d9e047-39b3-4c22-ac66-dfe3e5d215b3" pod="openshift-marketplace/certified-operators-fkgdl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-fkgdl\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:03 crc kubenswrapper[4606]: I0123 16:57:03.378128 4606 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5662d39c-4ee4-428e-8af8-9696d3a45d7d" Jan 23 16:57:03 crc kubenswrapper[4606]: I0123 16:57:03.378154 4606 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5662d39c-4ee4-428e-8af8-9696d3a45d7d" Jan 23 16:57:03 crc kubenswrapper[4606]: E0123 16:57:03.378485 4606 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:57:03 crc kubenswrapper[4606]: I0123 16:57:03.379684 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:57:03 crc kubenswrapper[4606]: W0123 16:57:03.399725 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-93024f89a991bffc0125d3240c0e11ca82b66e5b2a429a06573c28afea274082 WatchSource:0}: Error finding container 93024f89a991bffc0125d3240c0e11ca82b66e5b2a429a06573c28afea274082: Status 404 returned error can't find the container with id 93024f89a991bffc0125d3240c0e11ca82b66e5b2a429a06573c28afea274082 Jan 23 16:57:03 crc kubenswrapper[4606]: I0123 16:57:03.517348 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 16:57:03 crc kubenswrapper[4606]: I0123 16:57:03.517767 4606 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 23 16:57:03 crc kubenswrapper[4606]: I0123 16:57:03.518151 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 23 16:57:04 crc kubenswrapper[4606]: I0123 16:57:04.182179 4606 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="348d773789798c8de4fc6e781861faebcff5dc0944871b933edb7c9f97208d4d" exitCode=0 Jan 23 16:57:04 crc kubenswrapper[4606]: I0123 16:57:04.182291 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"348d773789798c8de4fc6e781861faebcff5dc0944871b933edb7c9f97208d4d"} Jan 23 16:57:04 crc kubenswrapper[4606]: I0123 16:57:04.182739 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"93024f89a991bffc0125d3240c0e11ca82b66e5b2a429a06573c28afea274082"} Jan 23 16:57:04 crc kubenswrapper[4606]: I0123 16:57:04.182996 4606 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5662d39c-4ee4-428e-8af8-9696d3a45d7d" Jan 23 16:57:04 crc kubenswrapper[4606]: I0123 16:57:04.183016 4606 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5662d39c-4ee4-428e-8af8-9696d3a45d7d" Jan 23 16:57:04 crc kubenswrapper[4606]: E0123 16:57:04.183414 4606 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:57:04 crc kubenswrapper[4606]: I0123 16:57:04.183430 4606 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:04 crc kubenswrapper[4606]: I0123 16:57:04.183690 4606 status_manager.go:851] "Failed to get status for pod" podUID="dc495cd6-f31a-4b33-9c14-12b37c25f47c" pod="openshift-authentication/oauth-openshift-558db77b4-vfv9l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-vfv9l\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:04 crc kubenswrapper[4606]: I0123 16:57:04.183916 4606 status_manager.go:851] "Failed to get status for pod" podUID="06d9e047-39b3-4c22-ac66-dfe3e5d215b3" pod="openshift-marketplace/certified-operators-fkgdl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-fkgdl\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:04 crc kubenswrapper[4606]: I0123 16:57:04.184164 4606 status_manager.go:851] "Failed to get status for pod" podUID="98f34be0-cadc-459c-ae73-eec312f1b1bd" pod="openshift-marketplace/community-operators-65d6z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-65d6z\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:04 crc kubenswrapper[4606]: I0123 16:57:04.184478 4606 status_manager.go:851] "Failed to get status for pod" podUID="778ea821-b161-4045-b9bc-78224119964a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:04 crc kubenswrapper[4606]: I0123 16:57:04.184859 4606 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:04 crc kubenswrapper[4606]: I0123 16:57:04.185518 4606 status_manager.go:851] "Failed to get status for pod" podUID="0b0ff184-9aa5-4873-8c4c-eec703f8f541" pod="openshift-marketplace/redhat-operators-w6dmk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-w6dmk\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:04 crc kubenswrapper[4606]: I0123 16:57:04.185875 4606 status_manager.go:851] "Failed to get status for pod" podUID="bcf0df0a-2bd4-4927-b81a-056a26b821b2" pod="openshift-marketplace/redhat-marketplace-nmtg9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nmtg9\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 23 16:57:05 crc kubenswrapper[4606]: I0123 16:57:05.192452 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5d4c8bc6129db9ead141b26134434b4a4bd407eb83f3b58c138c3ae63d3808ad"} Jan 23 16:57:05 crc kubenswrapper[4606]: I0123 16:57:05.192499 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"da5eed4ac5f2c82001366544a6fa51a4ad51595640b356a149f7c9aa7f13ef5c"} Jan 23 16:57:05 crc kubenswrapper[4606]: I0123 16:57:05.192511 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"95393480884a4c0d90a96319a7cd75a03c73dffce5130a815846ef7d72247db4"} Jan 23 16:57:06 crc kubenswrapper[4606]: I0123 16:57:06.199735 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b6d5e32ab39e2316794fa381a4262b1b83fca52824248300fd042cbe1ac9e657"} Jan 23 16:57:06 crc kubenswrapper[4606]: I0123 16:57:06.200036 4606 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5662d39c-4ee4-428e-8af8-9696d3a45d7d" Jan 23 16:57:06 crc kubenswrapper[4606]: I0123 16:57:06.200066 4606 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5662d39c-4ee4-428e-8af8-9696d3a45d7d" Jan 23 16:57:06 crc kubenswrapper[4606]: I0123 16:57:06.200133 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:57:06 crc kubenswrapper[4606]: I0123 16:57:06.200156 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"69501d6062eaa0afe5ebfabc4849e082217f0235c3fc4e498429068e887b5bf2"} Jan 23 16:57:08 crc kubenswrapper[4606]: I0123 16:57:08.380458 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:57:08 crc kubenswrapper[4606]: I0123 16:57:08.381779 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:57:08 crc kubenswrapper[4606]: 
I0123 16:57:08.389336 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:57:08 crc kubenswrapper[4606]: I0123 16:57:08.774210 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 16:57:11 crc kubenswrapper[4606]: I0123 16:57:11.209456 4606 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:57:11 crc kubenswrapper[4606]: I0123 16:57:11.238620 4606 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5662d39c-4ee4-428e-8af8-9696d3a45d7d" Jan 23 16:57:11 crc kubenswrapper[4606]: I0123 16:57:11.238650 4606 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5662d39c-4ee4-428e-8af8-9696d3a45d7d" Jan 23 16:57:11 crc kubenswrapper[4606]: I0123 16:57:11.257196 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 16:57:11 crc kubenswrapper[4606]: I0123 16:57:11.385133 4606 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="3f1e6fcc-0867-43fc-967a-91a0470e4139" Jan 23 16:57:12 crc kubenswrapper[4606]: I0123 16:57:12.242228 4606 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5662d39c-4ee4-428e-8af8-9696d3a45d7d" Jan 23 16:57:12 crc kubenswrapper[4606]: I0123 16:57:12.242263 4606 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5662d39c-4ee4-428e-8af8-9696d3a45d7d" Jan 23 16:57:12 crc kubenswrapper[4606]: I0123 16:57:12.245887 4606 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="3f1e6fcc-0867-43fc-967a-91a0470e4139" Jan 23 16:57:13 crc kubenswrapper[4606]: I0123 16:57:13.520547 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 16:57:13 crc kubenswrapper[4606]: I0123 16:57:13.526526 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 16:57:20 crc kubenswrapper[4606]: I0123 16:57:20.657695 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 23 16:57:20 crc kubenswrapper[4606]: I0123 16:57:20.816656 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 16:57:21 crc kubenswrapper[4606]: I0123 16:57:21.034322 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 23 16:57:21 crc kubenswrapper[4606]: I0123 16:57:21.041476 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 23 16:57:21 crc kubenswrapper[4606]: I0123 16:57:21.153554 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 23 16:57:21 crc kubenswrapper[4606]: I0123 16:57:21.191721 4606 reflector.go:368] Caches 
Jan 23 16:57:21 crc kubenswrapper[4606]: I0123 16:57:21.191721 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 23 16:57:21 crc kubenswrapper[4606]: I0123 16:57:21.369483 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 23 16:57:21 crc kubenswrapper[4606]: I0123 16:57:21.605666 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 23 16:57:21 crc kubenswrapper[4606]: I0123 16:57:21.657763 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 23 16:57:21 crc kubenswrapper[4606]: I0123 16:57:21.859307 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 23 16:57:21 crc kubenswrapper[4606]: I0123 16:57:21.892403 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 23 16:57:22 crc kubenswrapper[4606]: I0123 16:57:22.160812 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 23 16:57:22 crc kubenswrapper[4606]: I0123 16:57:22.287020 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 23 16:57:22 crc kubenswrapper[4606]: I0123 16:57:22.413841 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 23 16:57:22 crc kubenswrapper[4606]: I0123 16:57:22.485177 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 23 16:57:22 crc kubenswrapper[4606]: I0123 16:57:22.573774 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 23 16:57:22 crc kubenswrapper[4606]: I0123 16:57:22.914762 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 23 16:57:23 crc kubenswrapper[4606]: I0123 16:57:23.135748 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 23 16:57:23 crc kubenswrapper[4606]: I0123 16:57:23.222412 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 23 16:57:23 crc kubenswrapper[4606]: I0123 16:57:23.232864 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 23 16:57:23 crc kubenswrapper[4606]: I0123 16:57:23.237727 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 23 16:57:23 crc kubenswrapper[4606]: I0123 16:57:23.396735 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 23 16:57:23 crc kubenswrapper[4606]: I0123 16:57:23.494947 4606 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 23 16:57:23 crc kubenswrapper[4606]: I0123 16:57:23.505773 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
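[editor's note] The long run of reflector.go "Caches populated" entries around this point records client-go reflectors finishing their initial list/watch; the object-"namespace"/"name" source strings show the kubelet tracking each Secret and ConfigMap it must mount individually, while entries citing informers/factory.go come from ordinary shared informers. A minimal sketch of the same cache-sync mechanism using a namespaced SharedInformerFactory, assuming standard client-go; the namespace and resync period below are arbitrary choices, not values from the log:

```go
// Sketch: start an informer for Secrets in one namespace and block until its
// cache is populated, the moment a reflector would log "Caches populated".
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactoryWithOptions(
		cs, 10*time.Minute, informers.WithNamespace("openshift-machine-api"))
	secrets := factory.Core().V1().Secrets()

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	factory.Start(ctx.Done())
	// WaitForCacheSync returns once the initial List completes, which is the
	// analogue of the reflector.go "Caches populated" messages in this log.
	factory.WaitForCacheSync(ctx.Done())

	list, err := secrets.Lister().Secrets("openshift-machine-api").List(labels.Everything())
	if err != nil {
		panic(err)
	}
	fmt.Println("cached secrets:", len(list))
}
```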
object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 23 16:57:24 crc kubenswrapper[4606]: I0123 16:57:24.006189 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 23 16:57:24 crc kubenswrapper[4606]: I0123 16:57:24.047839 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 23 16:57:24 crc kubenswrapper[4606]: I0123 16:57:24.179449 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 23 16:57:24 crc kubenswrapper[4606]: I0123 16:57:24.287540 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 23 16:57:24 crc kubenswrapper[4606]: I0123 16:57:24.486632 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 23 16:57:24 crc kubenswrapper[4606]: I0123 16:57:24.678389 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 23 16:57:24 crc kubenswrapper[4606]: I0123 16:57:24.688352 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 23 16:57:24 crc kubenswrapper[4606]: I0123 16:57:24.721495 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 23 16:57:24 crc kubenswrapper[4606]: I0123 16:57:24.798824 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 23 16:57:24 crc kubenswrapper[4606]: I0123 16:57:24.862841 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 23 16:57:24 crc kubenswrapper[4606]: I0123 16:57:24.895502 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 23 16:57:24 crc kubenswrapper[4606]: I0123 16:57:24.911391 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 23 16:57:24 crc kubenswrapper[4606]: I0123 16:57:24.959026 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 23 16:57:25 crc kubenswrapper[4606]: I0123 16:57:25.011302 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 23 16:57:25 crc kubenswrapper[4606]: I0123 16:57:25.061840 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 23 16:57:25 crc kubenswrapper[4606]: I0123 16:57:25.099158 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 23 16:57:25 crc kubenswrapper[4606]: I0123 16:57:25.109554 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 23 16:57:25 crc kubenswrapper[4606]: I0123 16:57:25.159982 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 23 16:57:25 crc kubenswrapper[4606]: I0123 16:57:25.184120 4606 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 23 16:57:25 crc kubenswrapper[4606]: I0123 16:57:25.293201 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 23 16:57:25 crc kubenswrapper[4606]: I0123 16:57:25.350844 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 23 16:57:25 crc kubenswrapper[4606]: I0123 16:57:25.390812 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 23 16:57:25 crc kubenswrapper[4606]: I0123 16:57:25.661285 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 23 16:57:25 crc kubenswrapper[4606]: I0123 16:57:25.815513 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 16:57:25 crc kubenswrapper[4606]: I0123 16:57:25.866430 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 23 16:57:25 crc kubenswrapper[4606]: I0123 16:57:25.867620 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 23 16:57:25 crc kubenswrapper[4606]: I0123 16:57:25.955658 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.038305 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.131853 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.135626 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.239251 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.244141 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.295097 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.392982 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.518493 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.585423 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.623242 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.626172 4606 reflector.go:368] Caches populated 
for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.626309 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=38.626275957 podStartE2EDuration="38.626275957s" podCreationTimestamp="2026-01-23 16:56:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:57:11.220820273 +0000 UTC m=+265.221012679" watchObservedRunningTime="2026-01-23 16:57:26.626275957 +0000 UTC m=+280.626468363"
Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.630368 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vfv9l","openshift-kube-apiserver/kube-apiserver-crc"]
Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.630429 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-55c5844bb-5kpnm","openshift-kube-apiserver/kube-apiserver-crc"]
Jan 23 16:57:26 crc kubenswrapper[4606]: E0123 16:57:26.630648 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="778ea821-b161-4045-b9bc-78224119964a" containerName="installer"
Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.630663 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="778ea821-b161-4045-b9bc-78224119964a" containerName="installer"
Jan 23 16:57:26 crc kubenswrapper[4606]: E0123 16:57:26.630677 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc495cd6-f31a-4b33-9c14-12b37c25f47c" containerName="oauth-openshift"
Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.630684 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc495cd6-f31a-4b33-9c14-12b37c25f47c" containerName="oauth-openshift"
Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.630891 4606 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5662d39c-4ee4-428e-8af8-9696d3a45d7d"
Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.630908 4606 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5662d39c-4ee4-428e-8af8-9696d3a45d7d"
Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.630912 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="778ea821-b161-4045-b9bc-78224119964a" containerName="installer"
Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.630948 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc495cd6-f31a-4b33-9c14-12b37c25f47c" containerName="oauth-openshift"
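[editor's note] The pod_startup_latency_tracker entry above is internally consistent: with both image-pull timestamps at the zero value (0001-01-01, i.e. nothing was pulled), podStartSLOduration=38.626275957 is exactly watchObservedRunningTime (16:57:26.626275957) minus podCreationTimestamp (16:56:48). The same arithmetic holds for the kube-apiserver-crc entry at 16:57:26.696357 below (15.696334714s against a 16:57:11 creation). A quick check in Go, reusing the log's own timestamp format; this is a verification sketch, not kubelet code:

```go
// Verify: podStartSLOduration == watchObservedRunningTime - podCreationTimestamp
// when no image pulls contributed to startup.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching Go's default time.Time.String() output used in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2026-01-23 16:56:48 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2026-01-23 16:57:26.626275957 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 38.626275957, matching podStartSLOduration in the entry above.
	fmt.Println(observed.Sub(created).Seconds())
}
```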
Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.631495 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm"
Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.633443 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.635303 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.635346 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.635554 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.636255 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.636367 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.636483 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.638388 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.638836 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.638890 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.640017 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.640560 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.642836 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.655172 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.658047 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.664161 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.674609 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.685498 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
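[editor's note] The reconciler_common.go and operation_generator.go entries that follow trace the kubelet's volume manager bringing the new oauth-openshift-55c5844bb-5kpnm pod from desired to actual state: VerifyControllerAttachedVolume starts for each declared volume, then MountVolume starts, then MountVolume.SetUp succeeds, after which the sandbox can be created. The sketch below is an invented, simplified reconcile loop meant only to show the shape of that flow; it is not the kubelet's implementation, and every name in it is hypothetical:

```go
// Illustrative desired-state vs. actual-state loop, loosely mirroring the
// volume messages below. All types and names here are made up for clarity.
package main

import "fmt"

type volume struct{ name, pod string }

// reconcile mounts every desired volume that is not yet in the actual state.
func reconcile(desired []volume, mounted map[string]bool) {
	for _, v := range desired {
		if mounted[v.name] {
			continue
		}
		// Corresponds to "operationExecutor.VerifyControllerAttachedVolume started".
		fmt.Printf("verify attach %q for pod %q\n", v.name, v.pod)
		// Corresponds to "MountVolume started" / "MountVolume.SetUp succeeded".
		fmt.Printf("mount %q\n", v.name)
		mounted[v.name] = true
	}
}

func main() {
	desired := []volume{
		{"audit-dir", "oauth-openshift-55c5844bb-5kpnm"},
		{"v4-0-config-system-cliconfig", "oauth-openshift-55c5844bb-5kpnm"},
	}
	reconcile(desired, map[string]bool{})
}
```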
Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.696357 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=15.696334714 podStartE2EDuration="15.696334714s" podCreationTimestamp="2026-01-23 16:57:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:57:26.660326704 +0000 UTC m=+280.660519130" watchObservedRunningTime="2026-01-23 16:57:26.696334714 +0000 UTC m=+280.696527120"
Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.732274 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ea9f8867-f115-44ee-a1ef-6a471c9a4439-audit-dir\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm"
Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.732320 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea9f8867-f115-44ee-a1ef-6a471c9a4439-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm"
Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.732408 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ea9f8867-f115-44ee-a1ef-6a471c9a4439-v4-0-config-system-cliconfig\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm"
Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.732462 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ea9f8867-f115-44ee-a1ef-6a471c9a4439-v4-0-config-user-template-error\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm"
Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.732493 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ea9f8867-f115-44ee-a1ef-6a471c9a4439-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm"
Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.732531 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ea9f8867-f115-44ee-a1ef-6a471c9a4439-v4-0-config-system-router-certs\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm"
Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.732554 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName:
\"kubernetes.io/configmap/ea9f8867-f115-44ee-a1ef-6a471c9a4439-v4-0-config-system-service-ca\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.732569 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ea9f8867-f115-44ee-a1ef-6a471c9a4439-audit-policies\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.732605 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rcb5\" (UniqueName: \"kubernetes.io/projected/ea9f8867-f115-44ee-a1ef-6a471c9a4439-kube-api-access-8rcb5\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.732621 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ea9f8867-f115-44ee-a1ef-6a471c9a4439-v4-0-config-system-session\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.732740 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ea9f8867-f115-44ee-a1ef-6a471c9a4439-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.732790 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ea9f8867-f115-44ee-a1ef-6a471c9a4439-v4-0-config-user-template-login\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.732824 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ea9f8867-f115-44ee-a1ef-6a471c9a4439-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.732872 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea9f8867-f115-44ee-a1ef-6a471c9a4439-v4-0-config-system-serving-cert\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.767893 4606 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.829347 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.833922 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ea9f8867-f115-44ee-a1ef-6a471c9a4439-v4-0-config-system-service-ca\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.833960 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ea9f8867-f115-44ee-a1ef-6a471c9a4439-audit-policies\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.833980 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rcb5\" (UniqueName: \"kubernetes.io/projected/ea9f8867-f115-44ee-a1ef-6a471c9a4439-kube-api-access-8rcb5\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.833999 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ea9f8867-f115-44ee-a1ef-6a471c9a4439-v4-0-config-system-session\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.834021 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ea9f8867-f115-44ee-a1ef-6a471c9a4439-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.834043 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ea9f8867-f115-44ee-a1ef-6a471c9a4439-v4-0-config-user-template-login\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.834059 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ea9f8867-f115-44ee-a1ef-6a471c9a4439-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.834081 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ea9f8867-f115-44ee-a1ef-6a471c9a4439-v4-0-config-system-serving-cert\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.834103 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ea9f8867-f115-44ee-a1ef-6a471c9a4439-audit-dir\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.834120 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea9f8867-f115-44ee-a1ef-6a471c9a4439-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.834137 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ea9f8867-f115-44ee-a1ef-6a471c9a4439-v4-0-config-system-cliconfig\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.834154 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ea9f8867-f115-44ee-a1ef-6a471c9a4439-v4-0-config-user-template-error\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.834181 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ea9f8867-f115-44ee-a1ef-6a471c9a4439-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.834214 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ea9f8867-f115-44ee-a1ef-6a471c9a4439-v4-0-config-system-router-certs\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.834655 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ea9f8867-f115-44ee-a1ef-6a471c9a4439-v4-0-config-system-service-ca\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.834813 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/ea9f8867-f115-44ee-a1ef-6a471c9a4439-audit-dir\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.835543 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ea9f8867-f115-44ee-a1ef-6a471c9a4439-v4-0-config-system-cliconfig\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.835624 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea9f8867-f115-44ee-a1ef-6a471c9a4439-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.836341 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ea9f8867-f115-44ee-a1ef-6a471c9a4439-audit-policies\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.839317 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ea9f8867-f115-44ee-a1ef-6a471c9a4439-v4-0-config-user-template-error\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.839336 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ea9f8867-f115-44ee-a1ef-6a471c9a4439-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.840472 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ea9f8867-f115-44ee-a1ef-6a471c9a4439-v4-0-config-system-session\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.840526 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea9f8867-f115-44ee-a1ef-6a471c9a4439-v4-0-config-system-serving-cert\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.841682 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ea9f8867-f115-44ee-a1ef-6a471c9a4439-v4-0-config-user-template-login\") pod \"oauth-openshift-55c5844bb-5kpnm\" 
(UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.842781 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ea9f8867-f115-44ee-a1ef-6a471c9a4439-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.842836 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ea9f8867-f115-44ee-a1ef-6a471c9a4439-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.850118 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ea9f8867-f115-44ee-a1ef-6a471c9a4439-v4-0-config-system-router-certs\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.854248 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rcb5\" (UniqueName: \"kubernetes.io/projected/ea9f8867-f115-44ee-a1ef-6a471c9a4439-kube-api-access-8rcb5\") pod \"oauth-openshift-55c5844bb-5kpnm\" (UID: \"ea9f8867-f115-44ee-a1ef-6a471c9a4439\") " pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.902196 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.963729 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 23 16:57:26 crc kubenswrapper[4606]: I0123 16:57:26.968929 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm" Jan 23 16:57:27 crc kubenswrapper[4606]: I0123 16:57:27.103485 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 23 16:57:27 crc kubenswrapper[4606]: I0123 16:57:27.284278 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 23 16:57:27 crc kubenswrapper[4606]: I0123 16:57:27.297197 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 23 16:57:27 crc kubenswrapper[4606]: I0123 16:57:27.314538 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 23 16:57:27 crc kubenswrapper[4606]: I0123 16:57:27.516206 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 23 16:57:27 crc kubenswrapper[4606]: I0123 16:57:27.573762 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 23 16:57:27 crc kubenswrapper[4606]: I0123 16:57:27.593717 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 23 16:57:27 crc kubenswrapper[4606]: I0123 16:57:27.679456 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 23 16:57:27 crc kubenswrapper[4606]: I0123 16:57:27.680374 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 23 16:57:27 crc kubenswrapper[4606]: I0123 16:57:27.684356 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 23 16:57:27 crc kubenswrapper[4606]: I0123 16:57:27.750936 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 23 16:57:27 crc kubenswrapper[4606]: I0123 16:57:27.758351 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 23 16:57:27 crc kubenswrapper[4606]: I0123 16:57:27.833428 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 23 16:57:27 crc kubenswrapper[4606]: I0123 16:57:27.852743 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 23 16:57:27 crc kubenswrapper[4606]: I0123 16:57:27.870769 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 23 16:57:27 crc kubenswrapper[4606]: I0123 16:57:27.888481 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 23 16:57:27 crc kubenswrapper[4606]: I0123 16:57:27.953713 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 23 16:57:27 crc kubenswrapper[4606]: I0123 16:57:27.963820 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 23 16:57:27 crc kubenswrapper[4606]: I0123 16:57:27.991345 4606 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"serving-cert" Jan 23 16:57:28 crc kubenswrapper[4606]: I0123 16:57:28.031395 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 23 16:57:28 crc kubenswrapper[4606]: I0123 16:57:28.033459 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 23 16:57:28 crc kubenswrapper[4606]: I0123 16:57:28.033753 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 23 16:57:28 crc kubenswrapper[4606]: I0123 16:57:28.079798 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 23 16:57:28 crc kubenswrapper[4606]: I0123 16:57:28.091963 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 23 16:57:28 crc kubenswrapper[4606]: I0123 16:57:28.218756 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 23 16:57:28 crc kubenswrapper[4606]: I0123 16:57:28.222270 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 23 16:57:28 crc kubenswrapper[4606]: I0123 16:57:28.277030 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 23 16:57:28 crc kubenswrapper[4606]: I0123 16:57:28.372013 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 23 16:57:28 crc kubenswrapper[4606]: I0123 16:57:28.380289 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc495cd6-f31a-4b33-9c14-12b37c25f47c" path="/var/lib/kubelet/pods/dc495cd6-f31a-4b33-9c14-12b37c25f47c/volumes" Jan 23 16:57:28 crc kubenswrapper[4606]: I0123 16:57:28.401772 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 23 16:57:28 crc kubenswrapper[4606]: I0123 16:57:28.506872 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 23 16:57:28 crc kubenswrapper[4606]: I0123 16:57:28.560465 4606 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 23 16:57:28 crc kubenswrapper[4606]: I0123 16:57:28.617633 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 23 16:57:28 crc kubenswrapper[4606]: I0123 16:57:28.931027 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 16:57:29 crc kubenswrapper[4606]: I0123 16:57:29.007095 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 23 16:57:29 crc kubenswrapper[4606]: I0123 16:57:29.021915 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 23 16:57:29 crc kubenswrapper[4606]: I0123 16:57:29.065168 4606 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 23 16:57:29 crc kubenswrapper[4606]: I0123 16:57:29.125035 4606 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 23 16:57:29 crc kubenswrapper[4606]: I0123 16:57:29.148235 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 23 16:57:29 crc kubenswrapper[4606]: I0123 16:57:29.201637 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 23 16:57:29 crc kubenswrapper[4606]: I0123 16:57:29.218984 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 23 16:57:29 crc kubenswrapper[4606]: I0123 16:57:29.245421 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 23 16:57:29 crc kubenswrapper[4606]: I0123 16:57:29.248320 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 23 16:57:29 crc kubenswrapper[4606]: I0123 16:57:29.280340 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 23 16:57:29 crc kubenswrapper[4606]: I0123 16:57:29.327059 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 23 16:57:29 crc kubenswrapper[4606]: I0123 16:57:29.356623 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 23 16:57:29 crc kubenswrapper[4606]: I0123 16:57:29.374283 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 23 16:57:29 crc kubenswrapper[4606]: I0123 16:57:29.389891 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 16:57:29 crc kubenswrapper[4606]: I0123 16:57:29.394871 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 23 16:57:29 crc kubenswrapper[4606]: I0123 16:57:29.441380 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 23 16:57:29 crc kubenswrapper[4606]: I0123 16:57:29.450520 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 23 16:57:29 crc kubenswrapper[4606]: I0123 16:57:29.639254 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 23 16:57:29 crc kubenswrapper[4606]: I0123 16:57:29.662429 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 23 16:57:29 crc kubenswrapper[4606]: I0123 16:57:29.717550 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 23 16:57:29 crc kubenswrapper[4606]: I0123 16:57:29.790323 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 23 16:57:29 crc kubenswrapper[4606]: I0123 16:57:29.859443 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 23 16:57:29 crc 
kubenswrapper[4606]: I0123 16:57:29.869647 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 23 16:57:29 crc kubenswrapper[4606]: I0123 16:57:29.880481 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 23 16:57:29 crc kubenswrapper[4606]: I0123 16:57:29.880971 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 23 16:57:29 crc kubenswrapper[4606]: I0123 16:57:29.991677 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 23 16:57:29 crc kubenswrapper[4606]: I0123 16:57:29.995877 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 23 16:57:30 crc kubenswrapper[4606]: I0123 16:57:30.023129 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 23 16:57:30 crc kubenswrapper[4606]: I0123 16:57:30.178172 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 23 16:57:30 crc kubenswrapper[4606]: I0123 16:57:30.206923 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 23 16:57:30 crc kubenswrapper[4606]: I0123 16:57:30.252000 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 16:57:30 crc kubenswrapper[4606]: I0123 16:57:30.263386 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 23 16:57:30 crc kubenswrapper[4606]: I0123 16:57:30.263417 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 16:57:30 crc kubenswrapper[4606]: I0123 16:57:30.328773 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 23 16:57:30 crc kubenswrapper[4606]: I0123 16:57:30.360265 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 23 16:57:30 crc kubenswrapper[4606]: I0123 16:57:30.568598 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 23 16:57:30 crc kubenswrapper[4606]: I0123 16:57:30.624174 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 23 16:57:30 crc kubenswrapper[4606]: I0123 16:57:30.747442 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 23 16:57:30 crc kubenswrapper[4606]: I0123 16:57:30.785134 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 23 16:57:30 crc kubenswrapper[4606]: I0123 16:57:30.801309 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 23 16:57:30 crc kubenswrapper[4606]: I0123 16:57:30.805487 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 23 16:57:30 crc kubenswrapper[4606]: I0123 16:57:30.837811 4606 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 23 16:57:30 crc kubenswrapper[4606]: I0123 16:57:30.859488 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 23 16:57:30 crc kubenswrapper[4606]: I0123 16:57:30.910264 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 23 16:57:30 crc kubenswrapper[4606]: I0123 16:57:30.937894 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 23 16:57:30 crc kubenswrapper[4606]: I0123 16:57:30.963000 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 23 16:57:31 crc kubenswrapper[4606]: I0123 16:57:31.241874 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 23 16:57:31 crc kubenswrapper[4606]: I0123 16:57:31.279745 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 23 16:57:31 crc kubenswrapper[4606]: I0123 16:57:31.280285 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 23 16:57:31 crc kubenswrapper[4606]: I0123 16:57:31.290118 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 23 16:57:31 crc kubenswrapper[4606]: I0123 16:57:31.300296 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 23 16:57:31 crc kubenswrapper[4606]: I0123 16:57:31.301077 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 23 16:57:31 crc kubenswrapper[4606]: I0123 16:57:31.412257 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 23 16:57:31 crc kubenswrapper[4606]: I0123 16:57:31.414153 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 23 16:57:31 crc kubenswrapper[4606]: I0123 16:57:31.486488 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 23 16:57:31 crc kubenswrapper[4606]: I0123 16:57:31.535258 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 23 16:57:31 crc kubenswrapper[4606]: I0123 16:57:31.563736 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 23 16:57:31 crc kubenswrapper[4606]: I0123 16:57:31.610655 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 23 16:57:31 crc kubenswrapper[4606]: I0123 16:57:31.625366 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 23 16:57:31 crc kubenswrapper[4606]: I0123 16:57:31.634048 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 23 16:57:31 crc kubenswrapper[4606]: I0123 16:57:31.644582 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 23 16:57:31 crc kubenswrapper[4606]: I0123 16:57:31.686697 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 23 16:57:31 crc kubenswrapper[4606]: I0123 16:57:31.703157 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 23 16:57:31 crc kubenswrapper[4606]: I0123 16:57:31.822544 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 23 16:57:31 crc kubenswrapper[4606]: I0123 16:57:31.851833 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 23 16:57:32 crc kubenswrapper[4606]: I0123 16:57:32.080880 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 23 16:57:32 crc kubenswrapper[4606]: I0123 16:57:32.096256 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 23 16:57:32 crc kubenswrapper[4606]: I0123 16:57:32.116044 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 23 16:57:32 crc kubenswrapper[4606]: I0123 16:57:32.189927 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 23 16:57:32 crc kubenswrapper[4606]: I0123 16:57:32.243340 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 23 16:57:32 crc kubenswrapper[4606]: I0123 16:57:32.243777 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 23 16:57:32 crc kubenswrapper[4606]: I0123 16:57:32.262407 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 23 16:57:32 crc kubenswrapper[4606]: I0123 16:57:32.293411 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-55c5844bb-5kpnm"]
Jan 23 16:57:32 crc kubenswrapper[4606]: I0123 16:57:32.319236 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 23 16:57:32 crc kubenswrapper[4606]: I0123 16:57:32.346412 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 23 16:57:32 crc kubenswrapper[4606]: I0123 16:57:32.361242 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 23 16:57:32 crc kubenswrapper[4606]: I0123 16:57:32.462746 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 23 16:57:32 crc kubenswrapper[4606]: I0123 16:57:32.510493 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 23 16:57:32 crc kubenswrapper[4606]: I0123 16:57:32.542132 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 23 16:57:32 crc kubenswrapper[4606]: I0123 16:57:32.557048 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 23 16:57:32 crc kubenswrapper[4606]: I0123 16:57:32.606098 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-55c5844bb-5kpnm"]
Jan 23 16:57:32 crc kubenswrapper[4606]: I0123 16:57:32.632174 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 23 16:57:32 crc kubenswrapper[4606]: I0123 16:57:32.740892 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 23 16:57:32 crc kubenswrapper[4606]: I0123 16:57:32.816906 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 23 16:57:32 crc kubenswrapper[4606]: I0123 16:57:32.832089 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 23 16:57:32 crc kubenswrapper[4606]: I0123 16:57:32.884828 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 23 16:57:32 crc kubenswrapper[4606]: I0123 16:57:32.892558 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 23 16:57:32 crc kubenswrapper[4606]: I0123 16:57:32.932762 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 23 16:57:32 crc kubenswrapper[4606]: I0123 16:57:32.941193 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 23 16:57:33 crc kubenswrapper[4606]: I0123 16:57:33.122422 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 23 16:57:33 crc kubenswrapper[4606]: I0123 16:57:33.160344 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 23 16:57:33 crc kubenswrapper[4606]: I0123 16:57:33.183868 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 23 16:57:33 crc kubenswrapper[4606]: I0123 16:57:33.315443 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 23 16:57:33 crc kubenswrapper[4606]: I0123 16:57:33.358445 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm" event={"ID":"ea9f8867-f115-44ee-a1ef-6a471c9a4439","Type":"ContainerStarted","Data":"8a9851cf1f0bf7985fab802f0c8fefca94867fbe487318f16ae449f9ee190aa3"}
Jan 23 16:57:33 crc kubenswrapper[4606]: I0123 16:57:33.358725 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm" event={"ID":"ea9f8867-f115-44ee-a1ef-6a471c9a4439","Type":"ContainerStarted","Data":"105ba925b43ea1b3561eaec78e0b2d0d6d91a9e582ac24dbef7cc1f28a4cac8b"}
Jan 23 16:57:33 crc kubenswrapper[4606]: I0123 16:57:33.359080 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm"
Jan 23 16:57:33 crc kubenswrapper[4606]: I0123 16:57:33.391504 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm" podStartSLOduration=58.391486478 podStartE2EDuration="58.391486478s" podCreationTimestamp="2026-01-23 16:56:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:57:33.387014699 +0000 UTC m=+287.387207115" watchObservedRunningTime="2026-01-23 16:57:33.391486478 +0000 UTC m=+287.391678884"
Jan 23 16:57:33 crc kubenswrapper[4606]: I0123 16:57:33.400678 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 23 16:57:33 crc kubenswrapper[4606]: I0123 16:57:33.425648 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-55c5844bb-5kpnm"
Jan 23 16:57:33 crc kubenswrapper[4606]: I0123 16:57:33.537277 4606 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 23 16:57:33 crc kubenswrapper[4606]: I0123 16:57:33.537487 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://7153b0988faa7c224bf58d1892496d348a42e6abe480b5a8d50ccd576d8b080e" gracePeriod=5
Jan 23 16:57:33 crc kubenswrapper[4606]: I0123 16:57:33.647437 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 23 16:57:33 crc kubenswrapper[4606]: I0123 16:57:33.703663 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 23 16:57:33 crc kubenswrapper[4606]: I0123 16:57:33.710959 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 23 16:57:33 crc kubenswrapper[4606]: I0123 16:57:33.783900 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 23 16:57:33 crc kubenswrapper[4606]: I0123 16:57:33.924055 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 23 16:57:33 crc kubenswrapper[4606]: I0123 16:57:33.974223 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 23 16:57:34 crc kubenswrapper[4606]: I0123 16:57:34.040673 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 23 16:57:34 crc kubenswrapper[4606]: I0123 16:57:34.065615 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 23 16:57:34 crc kubenswrapper[4606]: I0123 16:57:34.132267 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 23 16:57:34 crc kubenswrapper[4606]: I0123 16:57:34.176613 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 23 16:57:34 crc kubenswrapper[4606]: I0123 16:57:34.186402 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 23 16:57:34 crc kubenswrapper[4606]: I0123 16:57:34.249618 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 23 16:57:34 crc kubenswrapper[4606]: I0123 16:57:34.379187 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 23 16:57:34 crc kubenswrapper[4606]: I0123 16:57:34.469124 4606 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 23 16:57:34 crc kubenswrapper[4606]: I0123 16:57:34.604107 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 23 16:57:34 crc kubenswrapper[4606]: I0123 16:57:34.654673 4606 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 23 16:57:34 crc kubenswrapper[4606]: I0123 16:57:34.691843 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 23 16:57:34 crc kubenswrapper[4606]: I0123 16:57:34.819554 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 23 16:57:34 crc kubenswrapper[4606]: I0123 16:57:34.990629 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 23 16:57:35 crc kubenswrapper[4606]: I0123 16:57:35.009897 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 23 16:57:35 crc kubenswrapper[4606]: I0123 16:57:35.018972 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 23 16:57:35 crc kubenswrapper[4606]: I0123 16:57:35.056287 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 23 16:57:35 crc kubenswrapper[4606]: I0123 16:57:35.205089 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 23 16:57:35 crc kubenswrapper[4606]: I0123 16:57:35.205450 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 23 16:57:35 crc kubenswrapper[4606]: I0123 16:57:35.253758 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 23 16:57:35 crc kubenswrapper[4606]: I0123 16:57:35.254485 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 23 16:57:35 crc kubenswrapper[4606]: I0123 16:57:35.371154 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 23 16:57:35 crc kubenswrapper[4606]: I0123 16:57:35.440247 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 23 16:57:35 crc kubenswrapper[4606]: I0123 16:57:35.650634 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 23 16:57:35 crc kubenswrapper[4606]: I0123 16:57:35.740323 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 23 16:57:35 crc kubenswrapper[4606]: I0123 16:57:35.875863 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 23 16:57:35 crc kubenswrapper[4606]: I0123 16:57:35.901474 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 23 16:57:36 crc kubenswrapper[4606]: I0123 16:57:36.272644 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 23 16:57:36 crc kubenswrapper[4606]: I0123 16:57:36.312326 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 23 16:57:36 crc kubenswrapper[4606]: I0123 16:57:36.431964 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 23 16:57:36 crc kubenswrapper[4606]: I0123 16:57:36.622684 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 23 16:57:37 crc kubenswrapper[4606]: I0123 16:57:37.042639 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 23 16:57:37 crc kubenswrapper[4606]: I0123 16:57:37.107832 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 23 16:57:37 crc kubenswrapper[4606]: I0123 16:57:37.222804 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 23 16:57:37 crc kubenswrapper[4606]: I0123 16:57:37.367540 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 23 16:57:38 crc kubenswrapper[4606]: I0123 16:57:38.010696 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 23 16:57:39 crc kubenswrapper[4606]: I0123 16:57:39.158134 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 23 16:57:39 crc kubenswrapper[4606]: I0123 16:57:39.158219 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 23 16:57:39 crc kubenswrapper[4606]: I0123 16:57:39.298042 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 23 16:57:39 crc kubenswrapper[4606]: I0123 16:57:39.298130 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 23 16:57:39 crc kubenswrapper[4606]: I0123 16:57:39.298204 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 23 16:57:39 crc kubenswrapper[4606]: I0123 16:57:39.298264 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 23 16:57:39 crc kubenswrapper[4606]: I0123 16:57:39.298285 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 23 16:57:39 crc kubenswrapper[4606]: I0123 16:57:39.298534 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 16:57:39 crc kubenswrapper[4606]: I0123 16:57:39.298550 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 16:57:39 crc kubenswrapper[4606]: I0123 16:57:39.298669 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 16:57:39 crc kubenswrapper[4606]: I0123 16:57:39.298817 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 16:57:39 crc kubenswrapper[4606]: I0123 16:57:39.308195 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 16:57:39 crc kubenswrapper[4606]: I0123 16:57:39.400274 4606 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 23 16:57:39 crc kubenswrapper[4606]: I0123 16:57:39.400350 4606 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 23 16:57:39 crc kubenswrapper[4606]: I0123 16:57:39.400366 4606 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Jan 23 16:57:39 crc kubenswrapper[4606]: I0123 16:57:39.400375 4606 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Jan 23 16:57:39 crc kubenswrapper[4606]: I0123 16:57:39.400384 4606 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Jan 23 16:57:39 crc kubenswrapper[4606]: I0123 16:57:39.405894 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 23 16:57:39 crc kubenswrapper[4606]: I0123 16:57:39.406299 4606 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="7153b0988faa7c224bf58d1892496d348a42e6abe480b5a8d50ccd576d8b080e" exitCode=137
Jan 23 16:57:39 crc kubenswrapper[4606]: I0123 16:57:39.406430 4606 scope.go:117] "RemoveContainer" containerID="7153b0988faa7c224bf58d1892496d348a42e6abe480b5a8d50ccd576d8b080e"
Jan 23 16:57:39 crc kubenswrapper[4606]: I0123 16:57:39.406488 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 23 16:57:39 crc kubenswrapper[4606]: I0123 16:57:39.423931 4606 scope.go:117] "RemoveContainer" containerID="7153b0988faa7c224bf58d1892496d348a42e6abe480b5a8d50ccd576d8b080e"
Jan 23 16:57:39 crc kubenswrapper[4606]: E0123 16:57:39.424443 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7153b0988faa7c224bf58d1892496d348a42e6abe480b5a8d50ccd576d8b080e\": container with ID starting with 7153b0988faa7c224bf58d1892496d348a42e6abe480b5a8d50ccd576d8b080e not found: ID does not exist" containerID="7153b0988faa7c224bf58d1892496d348a42e6abe480b5a8d50ccd576d8b080e"
Jan 23 16:57:39 crc kubenswrapper[4606]: I0123 16:57:39.424553 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7153b0988faa7c224bf58d1892496d348a42e6abe480b5a8d50ccd576d8b080e"} err="failed to get container status \"7153b0988faa7c224bf58d1892496d348a42e6abe480b5a8d50ccd576d8b080e\": rpc error: code = NotFound desc = could not find container \"7153b0988faa7c224bf58d1892496d348a42e6abe480b5a8d50ccd576d8b080e\": container with ID starting with 7153b0988faa7c224bf58d1892496d348a42e6abe480b5a8d50ccd576d8b080e not found: ID does not exist"
Jan 23 16:57:40 crc kubenswrapper[4606]: I0123 16:57:40.377429 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Jan 23 16:57:40 crc kubenswrapper[4606]: I0123 16:57:40.378078 4606 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Jan 23 16:57:40 crc kubenswrapper[4606]: I0123 16:57:40.395858 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 23 16:57:40 crc kubenswrapper[4606]: I0123 16:57:40.395927 4606 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="10bf5c01-7202-4093-a5e9-f78e29967ce1"
Jan 23 16:57:40 crc kubenswrapper[4606]: I0123 16:57:40.402765 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 23 16:57:40 crc kubenswrapper[4606]: I0123 16:57:40.402836 4606 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="10bf5c01-7202-4093-a5e9-f78e29967ce1"
Jan 23 16:57:46 crc kubenswrapper[4606]: I0123 16:57:46.216277 4606 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Jan 23 16:57:51 crc kubenswrapper[4606]: I0123 16:57:51.180339 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 23 16:57:56 crc kubenswrapper[4606]: I0123 16:57:56.450337 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ddvp8"]
Jan 23 16:57:56 crc kubenswrapper[4606]: I0123 16:57:56.451161 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-ddvp8" podUID="d53dfb30-7e32-472e-8602-16c28f0a2b03" containerName="controller-manager" containerID="cri-o://d89acd5dfdb34949647b599af008f01f3a8bb283148deaeba44875a16343158a" gracePeriod=30
Jan 23 16:57:56 crc kubenswrapper[4606]: I0123 16:57:56.547012 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rqqd5"]
Jan 23 16:57:56 crc kubenswrapper[4606]: I0123 16:57:56.547493 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rqqd5" podUID="2fa086b5-8528-4fa7-a649-1722d7e8cdf5" containerName="route-controller-manager" containerID="cri-o://9d22bf3677748a372ab87838b86c4f85552537fe3280a925b801571f373af5f0" gracePeriod=30
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:56.972100 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ddvp8"
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:56.980224 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rqqd5"
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.123286 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2fa086b5-8528-4fa7-a649-1722d7e8cdf5-client-ca\") pod \"2fa086b5-8528-4fa7-a649-1722d7e8cdf5\" (UID: \"2fa086b5-8528-4fa7-a649-1722d7e8cdf5\") "
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.123336 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d53dfb30-7e32-472e-8602-16c28f0a2b03-config\") pod \"d53dfb30-7e32-472e-8602-16c28f0a2b03\" (UID: \"d53dfb30-7e32-472e-8602-16c28f0a2b03\") "
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.123389 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrxbc\" (UniqueName: \"kubernetes.io/projected/2fa086b5-8528-4fa7-a649-1722d7e8cdf5-kube-api-access-rrxbc\") pod \"2fa086b5-8528-4fa7-a649-1722d7e8cdf5\" (UID: \"2fa086b5-8528-4fa7-a649-1722d7e8cdf5\") "
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.123465 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxkz5\" (UniqueName: \"kubernetes.io/projected/d53dfb30-7e32-472e-8602-16c28f0a2b03-kube-api-access-fxkz5\") pod \"d53dfb30-7e32-472e-8602-16c28f0a2b03\" (UID: \"d53dfb30-7e32-472e-8602-16c28f0a2b03\") "
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.123786 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d53dfb30-7e32-472e-8602-16c28f0a2b03-proxy-ca-bundles\") pod \"d53dfb30-7e32-472e-8602-16c28f0a2b03\" (UID: \"d53dfb30-7e32-472e-8602-16c28f0a2b03\") "
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.123825 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d53dfb30-7e32-472e-8602-16c28f0a2b03-serving-cert\") pod \"d53dfb30-7e32-472e-8602-16c28f0a2b03\" (UID: \"d53dfb30-7e32-472e-8602-16c28f0a2b03\") "
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.123885 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fa086b5-8528-4fa7-a649-1722d7e8cdf5-config\") pod \"2fa086b5-8528-4fa7-a649-1722d7e8cdf5\" (UID: \"2fa086b5-8528-4fa7-a649-1722d7e8cdf5\") "
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.123936 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d53dfb30-7e32-472e-8602-16c28f0a2b03-client-ca\") pod \"d53dfb30-7e32-472e-8602-16c28f0a2b03\" (UID: \"d53dfb30-7e32-472e-8602-16c28f0a2b03\") "
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.123963 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2fa086b5-8528-4fa7-a649-1722d7e8cdf5-serving-cert\") pod \"2fa086b5-8528-4fa7-a649-1722d7e8cdf5\" (UID: \"2fa086b5-8528-4fa7-a649-1722d7e8cdf5\") "
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.124325 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fa086b5-8528-4fa7-a649-1722d7e8cdf5-client-ca" (OuterVolumeSpecName: "client-ca") pod "2fa086b5-8528-4fa7-a649-1722d7e8cdf5" (UID: "2fa086b5-8528-4fa7-a649-1722d7e8cdf5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.124554 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d53dfb30-7e32-472e-8602-16c28f0a2b03-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d53dfb30-7e32-472e-8602-16c28f0a2b03" (UID: "d53dfb30-7e32-472e-8602-16c28f0a2b03"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.124772 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d53dfb30-7e32-472e-8602-16c28f0a2b03-client-ca" (OuterVolumeSpecName: "client-ca") pod "d53dfb30-7e32-472e-8602-16c28f0a2b03" (UID: "d53dfb30-7e32-472e-8602-16c28f0a2b03"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.124810 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fa086b5-8528-4fa7-a649-1722d7e8cdf5-config" (OuterVolumeSpecName: "config") pod "2fa086b5-8528-4fa7-a649-1722d7e8cdf5" (UID: "2fa086b5-8528-4fa7-a649-1722d7e8cdf5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.125092 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d53dfb30-7e32-472e-8602-16c28f0a2b03-config" (OuterVolumeSpecName: "config") pod "d53dfb30-7e32-472e-8602-16c28f0a2b03" (UID: "d53dfb30-7e32-472e-8602-16c28f0a2b03"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.129790 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fa086b5-8528-4fa7-a649-1722d7e8cdf5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2fa086b5-8528-4fa7-a649-1722d7e8cdf5" (UID: "2fa086b5-8528-4fa7-a649-1722d7e8cdf5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.130940 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d53dfb30-7e32-472e-8602-16c28f0a2b03-kube-api-access-fxkz5" (OuterVolumeSpecName: "kube-api-access-fxkz5") pod "d53dfb30-7e32-472e-8602-16c28f0a2b03" (UID: "d53dfb30-7e32-472e-8602-16c28f0a2b03"). InnerVolumeSpecName "kube-api-access-fxkz5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.131074 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d53dfb30-7e32-472e-8602-16c28f0a2b03-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d53dfb30-7e32-472e-8602-16c28f0a2b03" (UID: "d53dfb30-7e32-472e-8602-16c28f0a2b03"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.133765 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fa086b5-8528-4fa7-a649-1722d7e8cdf5-kube-api-access-rrxbc" (OuterVolumeSpecName: "kube-api-access-rrxbc") pod "2fa086b5-8528-4fa7-a649-1722d7e8cdf5" (UID: "2fa086b5-8528-4fa7-a649-1722d7e8cdf5"). InnerVolumeSpecName "kube-api-access-rrxbc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.224915 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxkz5\" (UniqueName: \"kubernetes.io/projected/d53dfb30-7e32-472e-8602-16c28f0a2b03-kube-api-access-fxkz5\") on node \"crc\" DevicePath \"\""
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.224945 4606 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d53dfb30-7e32-472e-8602-16c28f0a2b03-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.224954 4606 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d53dfb30-7e32-472e-8602-16c28f0a2b03-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.224964 4606 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fa086b5-8528-4fa7-a649-1722d7e8cdf5-config\") on node \"crc\" DevicePath \"\""
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.224972 4606 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d53dfb30-7e32-472e-8602-16c28f0a2b03-client-ca\") on node \"crc\" DevicePath \"\""
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.224980 4606 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2fa086b5-8528-4fa7-a649-1722d7e8cdf5-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.224987 4606 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2fa086b5-8528-4fa7-a649-1722d7e8cdf5-client-ca\") on node \"crc\" DevicePath \"\""
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.224996 4606 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d53dfb30-7e32-472e-8602-16c28f0a2b03-config\") on node \"crc\" DevicePath \"\""
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.225004 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrxbc\" (UniqueName: \"kubernetes.io/projected/2fa086b5-8528-4fa7-a649-1722d7e8cdf5-kube-api-access-rrxbc\") on node \"crc\" DevicePath \"\""
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.521572 4606 generic.go:334] "Generic (PLEG): container finished" podID="d53dfb30-7e32-472e-8602-16c28f0a2b03" containerID="d89acd5dfdb34949647b599af008f01f3a8bb283148deaeba44875a16343158a" exitCode=0
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.521619 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ddvp8" event={"ID":"d53dfb30-7e32-472e-8602-16c28f0a2b03","Type":"ContainerDied","Data":"d89acd5dfdb34949647b599af008f01f3a8bb283148deaeba44875a16343158a"}
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.521674 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ddvp8" event={"ID":"d53dfb30-7e32-472e-8602-16c28f0a2b03","Type":"ContainerDied","Data":"aa0daf2c6da81647c15ed6ea14dbef6fe3cb8102f053f1d2c482e1a5337764c3"}
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.521698 4606 scope.go:117] "RemoveContainer" containerID="d89acd5dfdb34949647b599af008f01f3a8bb283148deaeba44875a16343158a"
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.521719 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ddvp8"
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.524841 4606 generic.go:334] "Generic (PLEG): container finished" podID="2fa086b5-8528-4fa7-a649-1722d7e8cdf5" containerID="9d22bf3677748a372ab87838b86c4f85552537fe3280a925b801571f373af5f0" exitCode=0
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.524900 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rqqd5"
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.524900 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rqqd5" event={"ID":"2fa086b5-8528-4fa7-a649-1722d7e8cdf5","Type":"ContainerDied","Data":"9d22bf3677748a372ab87838b86c4f85552537fe3280a925b801571f373af5f0"}
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.525030 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rqqd5" event={"ID":"2fa086b5-8528-4fa7-a649-1722d7e8cdf5","Type":"ContainerDied","Data":"e8f2c8b8074361328ee980b72220b5ebf3b5cc8f8cb9e13894a8714edbb94209"}
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.548419 4606 scope.go:117] "RemoveContainer" containerID="d89acd5dfdb34949647b599af008f01f3a8bb283148deaeba44875a16343158a"
Jan 23 16:57:57 crc kubenswrapper[4606]: E0123 16:57:57.552751 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d89acd5dfdb34949647b599af008f01f3a8bb283148deaeba44875a16343158a\": container with ID starting with d89acd5dfdb34949647b599af008f01f3a8bb283148deaeba44875a16343158a not found: ID does not exist" containerID="d89acd5dfdb34949647b599af008f01f3a8bb283148deaeba44875a16343158a"
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.552800 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d89acd5dfdb34949647b599af008f01f3a8bb283148deaeba44875a16343158a"} err="failed to get container status \"d89acd5dfdb34949647b599af008f01f3a8bb283148deaeba44875a16343158a\": rpc error: code = NotFound desc = could not find container \"d89acd5dfdb34949647b599af008f01f3a8bb283148deaeba44875a16343158a\": container with ID starting with d89acd5dfdb34949647b599af008f01f3a8bb283148deaeba44875a16343158a not found: ID does not exist"
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.552825 4606 scope.go:117] "RemoveContainer" containerID="9d22bf3677748a372ab87838b86c4f85552537fe3280a925b801571f373af5f0"
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.570119 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rqqd5"]
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.573596 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rqqd5"]
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.581382 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ddvp8"]
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.588181 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ddvp8"]
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.602026 4606 scope.go:117] "RemoveContainer" containerID="9d22bf3677748a372ab87838b86c4f85552537fe3280a925b801571f373af5f0"
Jan 23 16:57:57 crc kubenswrapper[4606]: E0123 16:57:57.602530 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d22bf3677748a372ab87838b86c4f85552537fe3280a925b801571f373af5f0\": container with ID starting with 9d22bf3677748a372ab87838b86c4f85552537fe3280a925b801571f373af5f0 not found: ID does not exist" containerID="9d22bf3677748a372ab87838b86c4f85552537fe3280a925b801571f373af5f0"
Jan 23 16:57:57 crc kubenswrapper[4606]: I0123 16:57:57.602571 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d22bf3677748a372ab87838b86c4f85552537fe3280a925b801571f373af5f0"} err="failed to get container status \"9d22bf3677748a372ab87838b86c4f85552537fe3280a925b801571f373af5f0\": rpc error: code = NotFound desc = could not find container \"9d22bf3677748a372ab87838b86c4f85552537fe3280a925b801571f373af5f0\": container with ID starting with 9d22bf3677748a372ab87838b86c4f85552537fe3280a925b801571f373af5f0 not found: ID does not exist"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.321778 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-775fdc88b6-jx8w4"]
Jan 23 16:57:58 crc kubenswrapper[4606]: E0123 16:57:58.322525 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d53dfb30-7e32-472e-8602-16c28f0a2b03" containerName="controller-manager"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.322555 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="d53dfb30-7e32-472e-8602-16c28f0a2b03" containerName="controller-manager"
Jan 23 16:57:58 crc kubenswrapper[4606]: E0123 16:57:58.322575 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.322608 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 23 16:57:58 crc kubenswrapper[4606]: E0123 16:57:58.322626 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fa086b5-8528-4fa7-a649-1722d7e8cdf5" containerName="route-controller-manager"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.322638 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fa086b5-8528-4fa7-a649-1722d7e8cdf5" containerName="route-controller-manager"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.322858 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="d53dfb30-7e32-472e-8602-16c28f0a2b03" containerName="controller-manager"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.322889 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fa086b5-8528-4fa7-a649-1722d7e8cdf5" containerName="route-controller-manager"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.322906 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.323555 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-775fdc88b6-jx8w4"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.325177 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-64798f646d-zr8l4"]
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.325730 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.325930 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.326167 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.326387 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.326570 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.326774 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.327016 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-64798f646d-zr8l4"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.329836 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.330062 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.330252 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.330489 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.330616 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.330705 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.337898 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.338711 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89866917-fce2-444c-8cb8-995fac269c68-config\") pod \"route-controller-manager-775fdc88b6-jx8w4\" (UID: \"89866917-fce2-444c-8cb8-995fac269c68\") " pod="openshift-route-controller-manager/route-controller-manager-775fdc88b6-jx8w4"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.338785 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pcgx\" (UniqueName: \"kubernetes.io/projected/89866917-fce2-444c-8cb8-995fac269c68-kube-api-access-8pcgx\") pod \"route-controller-manager-775fdc88b6-jx8w4\" (UID: \"89866917-fce2-444c-8cb8-995fac269c68\") " pod="openshift-route-controller-manager/route-controller-manager-775fdc88b6-jx8w4"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.338820 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49w6q\" (UniqueName: \"kubernetes.io/projected/5fa74e1f-8662-49be-960e-9e41e648a3b7-kube-api-access-49w6q\") pod \"controller-manager-64798f646d-zr8l4\" (UID: \"5fa74e1f-8662-49be-960e-9e41e648a3b7\") " pod="openshift-controller-manager/controller-manager-64798f646d-zr8l4"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.338857 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5fa74e1f-8662-49be-960e-9e41e648a3b7-client-ca\") pod \"controller-manager-64798f646d-zr8l4\" (UID: \"5fa74e1f-8662-49be-960e-9e41e648a3b7\") " pod="openshift-controller-manager/controller-manager-64798f646d-zr8l4"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.338882 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5fa74e1f-8662-49be-960e-9e41e648a3b7-proxy-ca-bundles\") pod \"controller-manager-64798f646d-zr8l4\" (UID: \"5fa74e1f-8662-49be-960e-9e41e648a3b7\") " pod="openshift-controller-manager/controller-manager-64798f646d-zr8l4"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.338912 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fa74e1f-8662-49be-960e-9e41e648a3b7-config\") pod \"controller-manager-64798f646d-zr8l4\" (UID: \"5fa74e1f-8662-49be-960e-9e41e648a3b7\") " pod="openshift-controller-manager/controller-manager-64798f646d-zr8l4"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.338952 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89866917-fce2-444c-8cb8-995fac269c68-serving-cert\") pod \"route-controller-manager-775fdc88b6-jx8w4\" (UID: \"89866917-fce2-444c-8cb8-995fac269c68\") " pod="openshift-route-controller-manager/route-controller-manager-775fdc88b6-jx8w4"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.339000 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fa74e1f-8662-49be-960e-9e41e648a3b7-serving-cert\") pod \"controller-manager-64798f646d-zr8l4\" (UID: \"5fa74e1f-8662-49be-960e-9e41e648a3b7\") " pod="openshift-controller-manager/controller-manager-64798f646d-zr8l4"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.339030 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89866917-fce2-444c-8cb8-995fac269c68-client-ca\") pod \"route-controller-manager-775fdc88b6-jx8w4\" (UID: \"89866917-fce2-444c-8cb8-995fac269c68\") " pod="openshift-route-controller-manager/route-controller-manager-775fdc88b6-jx8w4"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.340902 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-64798f646d-zr8l4"]
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.361872 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-775fdc88b6-jx8w4"]
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.398741 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fa086b5-8528-4fa7-a649-1722d7e8cdf5" path="/var/lib/kubelet/pods/2fa086b5-8528-4fa7-a649-1722d7e8cdf5/volumes"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.408264 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d53dfb30-7e32-472e-8602-16c28f0a2b03" path="/var/lib/kubelet/pods/d53dfb30-7e32-472e-8602-16c28f0a2b03/volumes"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.439857 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89866917-fce2-444c-8cb8-995fac269c68-client-ca\") pod \"route-controller-manager-775fdc88b6-jx8w4\" (UID: \"89866917-fce2-444c-8cb8-995fac269c68\") " pod="openshift-route-controller-manager/route-controller-manager-775fdc88b6-jx8w4"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.439940 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89866917-fce2-444c-8cb8-995fac269c68-config\") pod \"route-controller-manager-775fdc88b6-jx8w4\" (UID: \"89866917-fce2-444c-8cb8-995fac269c68\") " pod="openshift-route-controller-manager/route-controller-manager-775fdc88b6-jx8w4"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.439979 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pcgx\" (UniqueName: \"kubernetes.io/projected/89866917-fce2-444c-8cb8-995fac269c68-kube-api-access-8pcgx\") pod \"route-controller-manager-775fdc88b6-jx8w4\" (UID: \"89866917-fce2-444c-8cb8-995fac269c68\") " pod="openshift-route-controller-manager/route-controller-manager-775fdc88b6-jx8w4"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.440298 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49w6q\" (UniqueName: \"kubernetes.io/projected/5fa74e1f-8662-49be-960e-9e41e648a3b7-kube-api-access-49w6q\") pod \"controller-manager-64798f646d-zr8l4\" (UID: \"5fa74e1f-8662-49be-960e-9e41e648a3b7\") " pod="openshift-controller-manager/controller-manager-64798f646d-zr8l4"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.440347 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5fa74e1f-8662-49be-960e-9e41e648a3b7-client-ca\") pod \"controller-manager-64798f646d-zr8l4\" (UID: \"5fa74e1f-8662-49be-960e-9e41e648a3b7\") " pod="openshift-controller-manager/controller-manager-64798f646d-zr8l4"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.440372 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5fa74e1f-8662-49be-960e-9e41e648a3b7-proxy-ca-bundles\") pod \"controller-manager-64798f646d-zr8l4\" (UID: \"5fa74e1f-8662-49be-960e-9e41e648a3b7\") " pod="openshift-controller-manager/controller-manager-64798f646d-zr8l4"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.440389 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fa74e1f-8662-49be-960e-9e41e648a3b7-config\") pod \"controller-manager-64798f646d-zr8l4\" (UID: \"5fa74e1f-8662-49be-960e-9e41e648a3b7\") " pod="openshift-controller-manager/controller-manager-64798f646d-zr8l4"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.440460 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89866917-fce2-444c-8cb8-995fac269c68-serving-cert\") pod \"route-controller-manager-775fdc88b6-jx8w4\" (UID: \"89866917-fce2-444c-8cb8-995fac269c68\") " pod="openshift-route-controller-manager/route-controller-manager-775fdc88b6-jx8w4"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.440529 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fa74e1f-8662-49be-960e-9e41e648a3b7-serving-cert\") pod \"controller-manager-64798f646d-zr8l4\" (UID: \"5fa74e1f-8662-49be-960e-9e41e648a3b7\") " pod="openshift-controller-manager/controller-manager-64798f646d-zr8l4"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.441310 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5fa74e1f-8662-49be-960e-9e41e648a3b7-client-ca\") pod \"controller-manager-64798f646d-zr8l4\" (UID: \"5fa74e1f-8662-49be-960e-9e41e648a3b7\") " pod="openshift-controller-manager/controller-manager-64798f646d-zr8l4"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.441892 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89866917-fce2-444c-8cb8-995fac269c68-config\") pod \"route-controller-manager-775fdc88b6-jx8w4\" (UID: \"89866917-fce2-444c-8cb8-995fac269c68\") " pod="openshift-route-controller-manager/route-controller-manager-775fdc88b6-jx8w4"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.442309 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fa74e1f-8662-49be-960e-9e41e648a3b7-config\") pod \"controller-manager-64798f646d-zr8l4\" (UID: \"5fa74e1f-8662-49be-960e-9e41e648a3b7\") " pod="openshift-controller-manager/controller-manager-64798f646d-zr8l4"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.442534 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5fa74e1f-8662-49be-960e-9e41e648a3b7-proxy-ca-bundles\") pod \"controller-manager-64798f646d-zr8l4\" (UID: \"5fa74e1f-8662-49be-960e-9e41e648a3b7\") " pod="openshift-controller-manager/controller-manager-64798f646d-zr8l4"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.451849 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89866917-fce2-444c-8cb8-995fac269c68-client-ca\") pod \"route-controller-manager-775fdc88b6-jx8w4\" (UID: \"89866917-fce2-444c-8cb8-995fac269c68\") " pod="openshift-route-controller-manager/route-controller-manager-775fdc88b6-jx8w4"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.453062 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89866917-fce2-444c-8cb8-995fac269c68-serving-cert\") pod \"route-controller-manager-775fdc88b6-jx8w4\" (UID: \"89866917-fce2-444c-8cb8-995fac269c68\") " pod="openshift-route-controller-manager/route-controller-manager-775fdc88b6-jx8w4"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.453172 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fa74e1f-8662-49be-960e-9e41e648a3b7-serving-cert\") pod \"controller-manager-64798f646d-zr8l4\" (UID: \"5fa74e1f-8662-49be-960e-9e41e648a3b7\") " pod="openshift-controller-manager/controller-manager-64798f646d-zr8l4"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.455527 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pcgx\" (UniqueName: \"kubernetes.io/projected/89866917-fce2-444c-8cb8-995fac269c68-kube-api-access-8pcgx\") pod \"route-controller-manager-775fdc88b6-jx8w4\" (UID: \"89866917-fce2-444c-8cb8-995fac269c68\") " pod="openshift-route-controller-manager/route-controller-manager-775fdc88b6-jx8w4"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.462141 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49w6q\" (UniqueName: \"kubernetes.io/projected/5fa74e1f-8662-49be-960e-9e41e648a3b7-kube-api-access-49w6q\") pod \"controller-manager-64798f646d-zr8l4\" (UID: \"5fa74e1f-8662-49be-960e-9e41e648a3b7\") " pod="openshift-controller-manager/controller-manager-64798f646d-zr8l4"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.662905 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-775fdc88b6-jx8w4"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.680327 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-64798f646d-zr8l4"
Jan 23 16:57:58 crc kubenswrapper[4606]: I0123 16:57:58.878251 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-775fdc88b6-jx8w4"]
Jan 23 16:57:58 crc kubenswrapper[4606]: W0123 16:57:58.884724 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89866917_fce2_444c_8cb8_995fac269c68.slice/crio-d42ac3577b05d2d71a94eac83751afcbd5070588a4690430ddb760d8c333100e WatchSource:0}: Error finding container d42ac3577b05d2d71a94eac83751afcbd5070588a4690430ddb760d8c333100e: Status 404 returned error can't find the container with id d42ac3577b05d2d71a94eac83751afcbd5070588a4690430ddb760d8c333100e
Jan 23 16:57:59 crc kubenswrapper[4606]: I0123 16:57:59.128488 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-64798f646d-zr8l4"]
Jan 23 16:57:59 crc kubenswrapper[4606]: I0123 16:57:59.536859 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64798f646d-zr8l4" event={"ID":"5fa74e1f-8662-49be-960e-9e41e648a3b7","Type":"ContainerStarted","Data":"a26bd011f5d503425a6ea5b8f0b50c1232710faa09a74a879fd161b38957bbb9"}
Jan 23 16:57:59 crc kubenswrapper[4606]: I0123 16:57:59.537650 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64798f646d-zr8l4" event={"ID":"5fa74e1f-8662-49be-960e-9e41e648a3b7","Type":"ContainerStarted","Data":"f74520e5f265d1757b59a2bd88c8fbf037a5796db70911cb33a599866641d580"}
Jan 23 16:57:59 crc kubenswrapper[4606]: I0123 16:57:59.539613 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-64798f646d-zr8l4"
Jan 23 16:57:59 crc kubenswrapper[4606]: I0123 16:57:59.541357 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-775fdc88b6-jx8w4" event={"ID":"89866917-fce2-444c-8cb8-995fac269c68","Type":"ContainerStarted","Data":"bf1edb7b686fe59940e5ae87af788d05b3c54f94bdf18560d65f42919e4bf290"}
Jan 23 16:57:59 crc kubenswrapper[4606]: I0123 16:57:59.541408 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-775fdc88b6-jx8w4" event={"ID":"89866917-fce2-444c-8cb8-995fac269c68","Type":"ContainerStarted","Data":"d42ac3577b05d2d71a94eac83751afcbd5070588a4690430ddb760d8c333100e"}
Jan 23 16:57:59 crc kubenswrapper[4606]: I0123 16:57:59.541797 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-775fdc88b6-jx8w4"
Jan 23 16:57:59 crc kubenswrapper[4606]: I0123 16:57:59.543480 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-64798f646d-zr8l4"
Jan 23 16:57:59 crc kubenswrapper[4606]: I0123 16:57:59.649707 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-775fdc88b6-jx8w4"
Jan 23 16:57:59 crc kubenswrapper[4606]: I0123 16:57:59.659501 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-64798f646d-zr8l4" podStartSLOduration=3.659484047 podStartE2EDuration="3.659484047s" podCreationTimestamp="2026-01-23 16:57:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:57:59.569858818 +0000 UTC m=+313.570051224" watchObservedRunningTime="2026-01-23 16:57:59.659484047 +0000 UTC m=+313.659676453"
Jan 23 16:57:59 crc kubenswrapper[4606]: I0123 16:57:59.661122 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-775fdc88b6-jx8w4" podStartSLOduration=3.66111475 podStartE2EDuration="3.66111475s" podCreationTimestamp="2026-01-23 16:57:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:57:59.656541498 +0000 UTC m=+313.656733924" watchObservedRunningTime="2026-01-23 16:57:59.66111475 +0000 UTC m=+313.661307146"
Jan 23 16:58:05 crc kubenswrapper[4606]: I0123 16:58:05.614332 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 23 16:58:17 crc kubenswrapper[4606]: I0123 16:58:17.935009 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mb5ws"]
Jan 23 16:58:17 crc kubenswrapper[4606]: I0123 16:58:17.943963 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-mb5ws"
Jan 23 16:58:17 crc kubenswrapper[4606]: I0123 16:58:17.952494 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mb5ws"]
Jan 23 16:58:18 crc kubenswrapper[4606]: I0123 16:58:18.060057 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b07e3db9-bf03-44e6-9b8e-c193b0142931-trusted-ca\") pod \"image-registry-66df7c8f76-mb5ws\" (UID: \"b07e3db9-bf03-44e6-9b8e-c193b0142931\") " pod="openshift-image-registry/image-registry-66df7c8f76-mb5ws"
Jan 23 16:58:18 crc kubenswrapper[4606]: I0123 16:58:18.060116 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b07e3db9-bf03-44e6-9b8e-c193b0142931-bound-sa-token\") pod \"image-registry-66df7c8f76-mb5ws\" (UID: \"b07e3db9-bf03-44e6-9b8e-c193b0142931\") " pod="openshift-image-registry/image-registry-66df7c8f76-mb5ws"
Jan 23 16:58:18 crc kubenswrapper[4606]: I0123 16:58:18.060167 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b07e3db9-bf03-44e6-9b8e-c193b0142931-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mb5ws\" (UID: \"b07e3db9-bf03-44e6-9b8e-c193b0142931\") " pod="openshift-image-registry/image-registry-66df7c8f76-mb5ws"
Jan 23 16:58:18 crc kubenswrapper[4606]: I0123 16:58:18.060189 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b07e3db9-bf03-44e6-9b8e-c193b0142931-registry-tls\") pod \"image-registry-66df7c8f76-mb5ws\" (UID: \"b07e3db9-bf03-44e6-9b8e-c193b0142931\") " pod="openshift-image-registry/image-registry-66df7c8f76-mb5ws"
Jan 23 16:58:18 crc kubenswrapper[4606]: I0123 16:58:18.060284 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvt8h\" (UniqueName: \"kubernetes.io/projected/b07e3db9-bf03-44e6-9b8e-c193b0142931-kube-api-access-hvt8h\") pod \"image-registry-66df7c8f76-mb5ws\" (UID: \"b07e3db9-bf03-44e6-9b8e-c193b0142931\") " pod="openshift-image-registry/image-registry-66df7c8f76-mb5ws"
Jan 23 16:58:18 crc kubenswrapper[4606]: I0123 16:58:18.060386 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b07e3db9-bf03-44e6-9b8e-c193b0142931-registry-certificates\") pod \"image-registry-66df7c8f76-mb5ws\" (UID: \"b07e3db9-bf03-44e6-9b8e-c193b0142931\") " pod="openshift-image-registry/image-registry-66df7c8f76-mb5ws"
Jan 23 16:58:18 crc kubenswrapper[4606]: I0123 16:58:18.060412 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b07e3db9-bf03-44e6-9b8e-c193b0142931-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mb5ws\" (UID: \"b07e3db9-bf03-44e6-9b8e-c193b0142931\") " pod="openshift-image-registry/image-registry-66df7c8f76-mb5ws"
Jan 23 16:58:18 crc kubenswrapper[4606]: I0123 16:58:18.060462 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName:
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-mb5ws\" (UID: \"b07e3db9-bf03-44e6-9b8e-c193b0142931\") " pod="openshift-image-registry/image-registry-66df7c8f76-mb5ws" Jan 23 16:58:18 crc kubenswrapper[4606]: I0123 16:58:18.081340 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-mb5ws\" (UID: \"b07e3db9-bf03-44e6-9b8e-c193b0142931\") " pod="openshift-image-registry/image-registry-66df7c8f76-mb5ws" Jan 23 16:58:18 crc kubenswrapper[4606]: I0123 16:58:18.161723 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b07e3db9-bf03-44e6-9b8e-c193b0142931-trusted-ca\") pod \"image-registry-66df7c8f76-mb5ws\" (UID: \"b07e3db9-bf03-44e6-9b8e-c193b0142931\") " pod="openshift-image-registry/image-registry-66df7c8f76-mb5ws" Jan 23 16:58:18 crc kubenswrapper[4606]: I0123 16:58:18.161770 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b07e3db9-bf03-44e6-9b8e-c193b0142931-bound-sa-token\") pod \"image-registry-66df7c8f76-mb5ws\" (UID: \"b07e3db9-bf03-44e6-9b8e-c193b0142931\") " pod="openshift-image-registry/image-registry-66df7c8f76-mb5ws" Jan 23 16:58:18 crc kubenswrapper[4606]: I0123 16:58:18.161828 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b07e3db9-bf03-44e6-9b8e-c193b0142931-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mb5ws\" (UID: \"b07e3db9-bf03-44e6-9b8e-c193b0142931\") " pod="openshift-image-registry/image-registry-66df7c8f76-mb5ws" Jan 23 16:58:18 crc kubenswrapper[4606]: I0123 16:58:18.161852 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b07e3db9-bf03-44e6-9b8e-c193b0142931-registry-tls\") pod \"image-registry-66df7c8f76-mb5ws\" (UID: \"b07e3db9-bf03-44e6-9b8e-c193b0142931\") " pod="openshift-image-registry/image-registry-66df7c8f76-mb5ws" Jan 23 16:58:18 crc kubenswrapper[4606]: I0123 16:58:18.161890 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvt8h\" (UniqueName: \"kubernetes.io/projected/b07e3db9-bf03-44e6-9b8e-c193b0142931-kube-api-access-hvt8h\") pod \"image-registry-66df7c8f76-mb5ws\" (UID: \"b07e3db9-bf03-44e6-9b8e-c193b0142931\") " pod="openshift-image-registry/image-registry-66df7c8f76-mb5ws" Jan 23 16:58:18 crc kubenswrapper[4606]: I0123 16:58:18.161923 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b07e3db9-bf03-44e6-9b8e-c193b0142931-registry-certificates\") pod \"image-registry-66df7c8f76-mb5ws\" (UID: \"b07e3db9-bf03-44e6-9b8e-c193b0142931\") " pod="openshift-image-registry/image-registry-66df7c8f76-mb5ws" Jan 23 16:58:18 crc kubenswrapper[4606]: I0123 16:58:18.161945 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b07e3db9-bf03-44e6-9b8e-c193b0142931-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mb5ws\" (UID: \"b07e3db9-bf03-44e6-9b8e-c193b0142931\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-mb5ws" Jan 23 16:58:18 crc kubenswrapper[4606]: I0123 16:58:18.162825 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b07e3db9-bf03-44e6-9b8e-c193b0142931-trusted-ca\") pod \"image-registry-66df7c8f76-mb5ws\" (UID: \"b07e3db9-bf03-44e6-9b8e-c193b0142931\") " pod="openshift-image-registry/image-registry-66df7c8f76-mb5ws" Jan 23 16:58:18 crc kubenswrapper[4606]: I0123 16:58:18.163173 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b07e3db9-bf03-44e6-9b8e-c193b0142931-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mb5ws\" (UID: \"b07e3db9-bf03-44e6-9b8e-c193b0142931\") " pod="openshift-image-registry/image-registry-66df7c8f76-mb5ws" Jan 23 16:58:18 crc kubenswrapper[4606]: I0123 16:58:18.164135 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b07e3db9-bf03-44e6-9b8e-c193b0142931-registry-certificates\") pod \"image-registry-66df7c8f76-mb5ws\" (UID: \"b07e3db9-bf03-44e6-9b8e-c193b0142931\") " pod="openshift-image-registry/image-registry-66df7c8f76-mb5ws" Jan 23 16:58:18 crc kubenswrapper[4606]: I0123 16:58:18.167842 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b07e3db9-bf03-44e6-9b8e-c193b0142931-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mb5ws\" (UID: \"b07e3db9-bf03-44e6-9b8e-c193b0142931\") " pod="openshift-image-registry/image-registry-66df7c8f76-mb5ws" Jan 23 16:58:18 crc kubenswrapper[4606]: I0123 16:58:18.168226 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b07e3db9-bf03-44e6-9b8e-c193b0142931-registry-tls\") pod \"image-registry-66df7c8f76-mb5ws\" (UID: \"b07e3db9-bf03-44e6-9b8e-c193b0142931\") " pod="openshift-image-registry/image-registry-66df7c8f76-mb5ws" Jan 23 16:58:18 crc kubenswrapper[4606]: I0123 16:58:18.185941 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b07e3db9-bf03-44e6-9b8e-c193b0142931-bound-sa-token\") pod \"image-registry-66df7c8f76-mb5ws\" (UID: \"b07e3db9-bf03-44e6-9b8e-c193b0142931\") " pod="openshift-image-registry/image-registry-66df7c8f76-mb5ws" Jan 23 16:58:18 crc kubenswrapper[4606]: I0123 16:58:18.186198 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvt8h\" (UniqueName: \"kubernetes.io/projected/b07e3db9-bf03-44e6-9b8e-c193b0142931-kube-api-access-hvt8h\") pod \"image-registry-66df7c8f76-mb5ws\" (UID: \"b07e3db9-bf03-44e6-9b8e-c193b0142931\") " pod="openshift-image-registry/image-registry-66df7c8f76-mb5ws" Jan 23 16:58:18 crc kubenswrapper[4606]: I0123 16:58:18.266965 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-mb5ws" Jan 23 16:58:18 crc kubenswrapper[4606]: I0123 16:58:18.716942 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mb5ws"] Jan 23 16:58:19 crc kubenswrapper[4606]: I0123 16:58:19.640856 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-mb5ws" event={"ID":"b07e3db9-bf03-44e6-9b8e-c193b0142931","Type":"ContainerStarted","Data":"f96aeb6d744528151f871d67a949597f625dcbdfbffc8194bb691a7464cd77e7"} Jan 23 16:58:19 crc kubenswrapper[4606]: I0123 16:58:19.641806 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-mb5ws" event={"ID":"b07e3db9-bf03-44e6-9b8e-c193b0142931","Type":"ContainerStarted","Data":"0bb54431851be2e94195275d1e619a40b240149fd8ce4bf7046486df32fef1f2"} Jan 23 16:58:19 crc kubenswrapper[4606]: I0123 16:58:19.642652 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-mb5ws" Jan 23 16:58:36 crc kubenswrapper[4606]: I0123 16:58:36.441784 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-mb5ws" podStartSLOduration=19.441768151 podStartE2EDuration="19.441768151s" podCreationTimestamp="2026-01-23 16:58:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:58:19.664965894 +0000 UTC m=+333.665158310" watchObservedRunningTime="2026-01-23 16:58:36.441768151 +0000 UTC m=+350.441960547" Jan 23 16:58:36 crc kubenswrapper[4606]: I0123 16:58:36.444120 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-775fdc88b6-jx8w4"] Jan 23 16:58:36 crc kubenswrapper[4606]: I0123 16:58:36.444329 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-775fdc88b6-jx8w4" podUID="89866917-fce2-444c-8cb8-995fac269c68" containerName="route-controller-manager" containerID="cri-o://bf1edb7b686fe59940e5ae87af788d05b3c54f94bdf18560d65f42919e4bf290" gracePeriod=30 Jan 23 16:58:36 crc kubenswrapper[4606]: I0123 16:58:36.737794 4606 generic.go:334] "Generic (PLEG): container finished" podID="89866917-fce2-444c-8cb8-995fac269c68" containerID="bf1edb7b686fe59940e5ae87af788d05b3c54f94bdf18560d65f42919e4bf290" exitCode=0 Jan 23 16:58:36 crc kubenswrapper[4606]: I0123 16:58:36.737902 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-775fdc88b6-jx8w4" event={"ID":"89866917-fce2-444c-8cb8-995fac269c68","Type":"ContainerDied","Data":"bf1edb7b686fe59940e5ae87af788d05b3c54f94bdf18560d65f42919e4bf290"} Jan 23 16:58:36 crc kubenswrapper[4606]: I0123 16:58:36.784391 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-775fdc88b6-jx8w4" Jan 23 16:58:36 crc kubenswrapper[4606]: I0123 16:58:36.917326 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pcgx\" (UniqueName: \"kubernetes.io/projected/89866917-fce2-444c-8cb8-995fac269c68-kube-api-access-8pcgx\") pod \"89866917-fce2-444c-8cb8-995fac269c68\" (UID: \"89866917-fce2-444c-8cb8-995fac269c68\") " Jan 23 16:58:36 crc kubenswrapper[4606]: I0123 16:58:36.917386 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89866917-fce2-444c-8cb8-995fac269c68-client-ca\") pod \"89866917-fce2-444c-8cb8-995fac269c68\" (UID: \"89866917-fce2-444c-8cb8-995fac269c68\") " Jan 23 16:58:36 crc kubenswrapper[4606]: I0123 16:58:36.917460 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89866917-fce2-444c-8cb8-995fac269c68-config\") pod \"89866917-fce2-444c-8cb8-995fac269c68\" (UID: \"89866917-fce2-444c-8cb8-995fac269c68\") " Jan 23 16:58:36 crc kubenswrapper[4606]: I0123 16:58:36.917491 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89866917-fce2-444c-8cb8-995fac269c68-serving-cert\") pod \"89866917-fce2-444c-8cb8-995fac269c68\" (UID: \"89866917-fce2-444c-8cb8-995fac269c68\") " Jan 23 16:58:36 crc kubenswrapper[4606]: I0123 16:58:36.918640 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89866917-fce2-444c-8cb8-995fac269c68-client-ca" (OuterVolumeSpecName: "client-ca") pod "89866917-fce2-444c-8cb8-995fac269c68" (UID: "89866917-fce2-444c-8cb8-995fac269c68"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:58:36 crc kubenswrapper[4606]: I0123 16:58:36.918766 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89866917-fce2-444c-8cb8-995fac269c68-config" (OuterVolumeSpecName: "config") pod "89866917-fce2-444c-8cb8-995fac269c68" (UID: "89866917-fce2-444c-8cb8-995fac269c68"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:58:36 crc kubenswrapper[4606]: I0123 16:58:36.923719 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89866917-fce2-444c-8cb8-995fac269c68-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "89866917-fce2-444c-8cb8-995fac269c68" (UID: "89866917-fce2-444c-8cb8-995fac269c68"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:58:36 crc kubenswrapper[4606]: I0123 16:58:36.925774 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89866917-fce2-444c-8cb8-995fac269c68-kube-api-access-8pcgx" (OuterVolumeSpecName: "kube-api-access-8pcgx") pod "89866917-fce2-444c-8cb8-995fac269c68" (UID: "89866917-fce2-444c-8cb8-995fac269c68"). InnerVolumeSpecName "kube-api-access-8pcgx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:58:37 crc kubenswrapper[4606]: I0123 16:58:37.019466 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pcgx\" (UniqueName: \"kubernetes.io/projected/89866917-fce2-444c-8cb8-995fac269c68-kube-api-access-8pcgx\") on node \"crc\" DevicePath \"\"" Jan 23 16:58:37 crc kubenswrapper[4606]: I0123 16:58:37.019509 4606 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89866917-fce2-444c-8cb8-995fac269c68-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:58:37 crc kubenswrapper[4606]: I0123 16:58:37.019521 4606 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89866917-fce2-444c-8cb8-995fac269c68-config\") on node \"crc\" DevicePath \"\"" Jan 23 16:58:37 crc kubenswrapper[4606]: I0123 16:58:37.019552 4606 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89866917-fce2-444c-8cb8-995fac269c68-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 16:58:37 crc kubenswrapper[4606]: I0123 16:58:37.744525 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-775fdc88b6-jx8w4" event={"ID":"89866917-fce2-444c-8cb8-995fac269c68","Type":"ContainerDied","Data":"d42ac3577b05d2d71a94eac83751afcbd5070588a4690430ddb760d8c333100e"} Jan 23 16:58:37 crc kubenswrapper[4606]: I0123 16:58:37.744605 4606 scope.go:117] "RemoveContainer" containerID="bf1edb7b686fe59940e5ae87af788d05b3c54f94bdf18560d65f42919e4bf290" Jan 23 16:58:37 crc kubenswrapper[4606]: I0123 16:58:37.744608 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-775fdc88b6-jx8w4" Jan 23 16:58:37 crc kubenswrapper[4606]: I0123 16:58:37.787737 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-775fdc88b6-jx8w4"] Jan 23 16:58:37 crc kubenswrapper[4606]: I0123 16:58:37.799000 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-775fdc88b6-jx8w4"] Jan 23 16:58:38 crc kubenswrapper[4606]: I0123 16:58:38.272071 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-mb5ws" Jan 23 16:58:38 crc kubenswrapper[4606]: I0123 16:58:38.323798 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2bspd"] Jan 23 16:58:38 crc kubenswrapper[4606]: I0123 16:58:38.355497 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59cb4b7958-wnt88"] Jan 23 16:58:38 crc kubenswrapper[4606]: E0123 16:58:38.355700 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89866917-fce2-444c-8cb8-995fac269c68" containerName="route-controller-manager" Jan 23 16:58:38 crc kubenswrapper[4606]: I0123 16:58:38.355713 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="89866917-fce2-444c-8cb8-995fac269c68" containerName="route-controller-manager" Jan 23 16:58:38 crc kubenswrapper[4606]: I0123 16:58:38.355813 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="89866917-fce2-444c-8cb8-995fac269c68" containerName="route-controller-manager" Jan 23 16:58:38 crc kubenswrapper[4606]: I0123 16:58:38.356221 4606 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59cb4b7958-wnt88" Jan 23 16:58:38 crc kubenswrapper[4606]: I0123 16:58:38.364566 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 16:58:38 crc kubenswrapper[4606]: I0123 16:58:38.364831 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 16:58:38 crc kubenswrapper[4606]: I0123 16:58:38.364983 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 16:58:38 crc kubenswrapper[4606]: I0123 16:58:38.365214 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 16:58:38 crc kubenswrapper[4606]: I0123 16:58:38.365393 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 16:58:38 crc kubenswrapper[4606]: I0123 16:58:38.365902 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 16:58:38 crc kubenswrapper[4606]: I0123 16:58:38.388655 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89866917-fce2-444c-8cb8-995fac269c68" path="/var/lib/kubelet/pods/89866917-fce2-444c-8cb8-995fac269c68/volumes" Jan 23 16:58:38 crc kubenswrapper[4606]: I0123 16:58:38.389836 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59cb4b7958-wnt88"] Jan 23 16:58:38 crc kubenswrapper[4606]: I0123 16:58:38.538534 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24c512e7-14f3-41df-b8de-201736780cfb-client-ca\") pod \"route-controller-manager-59cb4b7958-wnt88\" (UID: \"24c512e7-14f3-41df-b8de-201736780cfb\") " pod="openshift-route-controller-manager/route-controller-manager-59cb4b7958-wnt88" Jan 23 16:58:38 crc kubenswrapper[4606]: I0123 16:58:38.538579 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls5nx\" (UniqueName: \"kubernetes.io/projected/24c512e7-14f3-41df-b8de-201736780cfb-kube-api-access-ls5nx\") pod \"route-controller-manager-59cb4b7958-wnt88\" (UID: \"24c512e7-14f3-41df-b8de-201736780cfb\") " pod="openshift-route-controller-manager/route-controller-manager-59cb4b7958-wnt88" Jan 23 16:58:38 crc kubenswrapper[4606]: I0123 16:58:38.538623 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24c512e7-14f3-41df-b8de-201736780cfb-serving-cert\") pod \"route-controller-manager-59cb4b7958-wnt88\" (UID: \"24c512e7-14f3-41df-b8de-201736780cfb\") " pod="openshift-route-controller-manager/route-controller-manager-59cb4b7958-wnt88" Jan 23 16:58:38 crc kubenswrapper[4606]: I0123 16:58:38.538862 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24c512e7-14f3-41df-b8de-201736780cfb-config\") pod \"route-controller-manager-59cb4b7958-wnt88\" (UID: \"24c512e7-14f3-41df-b8de-201736780cfb\") " 
pod="openshift-route-controller-manager/route-controller-manager-59cb4b7958-wnt88" Jan 23 16:58:38 crc kubenswrapper[4606]: I0123 16:58:38.639897 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24c512e7-14f3-41df-b8de-201736780cfb-config\") pod \"route-controller-manager-59cb4b7958-wnt88\" (UID: \"24c512e7-14f3-41df-b8de-201736780cfb\") " pod="openshift-route-controller-manager/route-controller-manager-59cb4b7958-wnt88" Jan 23 16:58:38 crc kubenswrapper[4606]: I0123 16:58:38.640003 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24c512e7-14f3-41df-b8de-201736780cfb-client-ca\") pod \"route-controller-manager-59cb4b7958-wnt88\" (UID: \"24c512e7-14f3-41df-b8de-201736780cfb\") " pod="openshift-route-controller-manager/route-controller-manager-59cb4b7958-wnt88" Jan 23 16:58:38 crc kubenswrapper[4606]: I0123 16:58:38.640022 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls5nx\" (UniqueName: \"kubernetes.io/projected/24c512e7-14f3-41df-b8de-201736780cfb-kube-api-access-ls5nx\") pod \"route-controller-manager-59cb4b7958-wnt88\" (UID: \"24c512e7-14f3-41df-b8de-201736780cfb\") " pod="openshift-route-controller-manager/route-controller-manager-59cb4b7958-wnt88" Jan 23 16:58:38 crc kubenswrapper[4606]: I0123 16:58:38.640041 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24c512e7-14f3-41df-b8de-201736780cfb-serving-cert\") pod \"route-controller-manager-59cb4b7958-wnt88\" (UID: \"24c512e7-14f3-41df-b8de-201736780cfb\") " pod="openshift-route-controller-manager/route-controller-manager-59cb4b7958-wnt88" Jan 23 16:58:38 crc kubenswrapper[4606]: I0123 16:58:38.641400 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24c512e7-14f3-41df-b8de-201736780cfb-config\") pod \"route-controller-manager-59cb4b7958-wnt88\" (UID: \"24c512e7-14f3-41df-b8de-201736780cfb\") " pod="openshift-route-controller-manager/route-controller-manager-59cb4b7958-wnt88" Jan 23 16:58:38 crc kubenswrapper[4606]: I0123 16:58:38.641447 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24c512e7-14f3-41df-b8de-201736780cfb-client-ca\") pod \"route-controller-manager-59cb4b7958-wnt88\" (UID: \"24c512e7-14f3-41df-b8de-201736780cfb\") " pod="openshift-route-controller-manager/route-controller-manager-59cb4b7958-wnt88" Jan 23 16:58:38 crc kubenswrapper[4606]: I0123 16:58:38.655809 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24c512e7-14f3-41df-b8de-201736780cfb-serving-cert\") pod \"route-controller-manager-59cb4b7958-wnt88\" (UID: \"24c512e7-14f3-41df-b8de-201736780cfb\") " pod="openshift-route-controller-manager/route-controller-manager-59cb4b7958-wnt88" Jan 23 16:58:38 crc kubenswrapper[4606]: I0123 16:58:38.663417 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls5nx\" (UniqueName: \"kubernetes.io/projected/24c512e7-14f3-41df-b8de-201736780cfb-kube-api-access-ls5nx\") pod \"route-controller-manager-59cb4b7958-wnt88\" (UID: \"24c512e7-14f3-41df-b8de-201736780cfb\") " pod="openshift-route-controller-manager/route-controller-manager-59cb4b7958-wnt88" Jan 23 16:58:38 crc 
kubenswrapper[4606]: I0123 16:58:38.693798 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59cb4b7958-wnt88" Jan 23 16:58:38 crc kubenswrapper[4606]: I0123 16:58:38.913469 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59cb4b7958-wnt88"] Jan 23 16:58:39 crc kubenswrapper[4606]: I0123 16:58:39.757403 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59cb4b7958-wnt88" event={"ID":"24c512e7-14f3-41df-b8de-201736780cfb","Type":"ContainerStarted","Data":"d26551334d1418fdeeba51f5fd0ed3630072e3db5d8b0d5de389d85a7a4cadf8"} Jan 23 16:58:39 crc kubenswrapper[4606]: I0123 16:58:39.757460 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59cb4b7958-wnt88" event={"ID":"24c512e7-14f3-41df-b8de-201736780cfb","Type":"ContainerStarted","Data":"f3e10acd02c279f464abba9f462de2abb3a9297c1ff58838c593646cc5216a8f"} Jan 23 16:58:39 crc kubenswrapper[4606]: I0123 16:58:39.757792 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-59cb4b7958-wnt88" Jan 23 16:58:39 crc kubenswrapper[4606]: I0123 16:58:39.763105 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-59cb4b7958-wnt88" Jan 23 16:58:39 crc kubenswrapper[4606]: I0123 16:58:39.779027 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-59cb4b7958-wnt88" podStartSLOduration=3.779010057 podStartE2EDuration="3.779010057s" podCreationTimestamp="2026-01-23 16:58:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 16:58:39.774227979 +0000 UTC m=+353.774420405" watchObservedRunningTime="2026-01-23 16:58:39.779010057 +0000 UTC m=+353.779202453" Jan 23 16:58:59 crc kubenswrapper[4606]: I0123 16:58:59.934318 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 16:58:59 crc kubenswrapper[4606]: I0123 16:58:59.934976 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 16:59:03 crc kubenswrapper[4606]: I0123 16:59:03.376546 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" podUID="5ec26ef9-e655-476f-ac27-f27942209b6f" containerName="registry" containerID="cri-o://c4618ff46b5e2639c43c1b846fce3451fe41c5e9ecbc6c6662eac8e575b4332f" gracePeriod=30 Jan 23 16:59:03 crc kubenswrapper[4606]: I0123 16:59:03.760326 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:59:03 crc kubenswrapper[4606]: I0123 16:59:03.883466 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5ec26ef9-e655-476f-ac27-f27942209b6f-bound-sa-token\") pod \"5ec26ef9-e655-476f-ac27-f27942209b6f\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " Jan 23 16:59:03 crc kubenswrapper[4606]: I0123 16:59:03.883764 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzphp\" (UniqueName: \"kubernetes.io/projected/5ec26ef9-e655-476f-ac27-f27942209b6f-kube-api-access-rzphp\") pod \"5ec26ef9-e655-476f-ac27-f27942209b6f\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " Jan 23 16:59:03 crc kubenswrapper[4606]: I0123 16:59:03.883812 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5ec26ef9-e655-476f-ac27-f27942209b6f-installation-pull-secrets\") pod \"5ec26ef9-e655-476f-ac27-f27942209b6f\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " Jan 23 16:59:03 crc kubenswrapper[4606]: I0123 16:59:03.883840 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5ec26ef9-e655-476f-ac27-f27942209b6f-registry-tls\") pod \"5ec26ef9-e655-476f-ac27-f27942209b6f\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " Jan 23 16:59:03 crc kubenswrapper[4606]: I0123 16:59:03.883865 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5ec26ef9-e655-476f-ac27-f27942209b6f-ca-trust-extracted\") pod \"5ec26ef9-e655-476f-ac27-f27942209b6f\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " Jan 23 16:59:03 crc kubenswrapper[4606]: I0123 16:59:03.883889 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5ec26ef9-e655-476f-ac27-f27942209b6f-registry-certificates\") pod \"5ec26ef9-e655-476f-ac27-f27942209b6f\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " Jan 23 16:59:03 crc kubenswrapper[4606]: I0123 16:59:03.884045 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"5ec26ef9-e655-476f-ac27-f27942209b6f\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " Jan 23 16:59:03 crc kubenswrapper[4606]: I0123 16:59:03.884067 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5ec26ef9-e655-476f-ac27-f27942209b6f-trusted-ca\") pod \"5ec26ef9-e655-476f-ac27-f27942209b6f\" (UID: \"5ec26ef9-e655-476f-ac27-f27942209b6f\") " Jan 23 16:59:03 crc kubenswrapper[4606]: I0123 16:59:03.884923 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ec26ef9-e655-476f-ac27-f27942209b6f-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "5ec26ef9-e655-476f-ac27-f27942209b6f" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:59:03 crc kubenswrapper[4606]: I0123 16:59:03.885048 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ec26ef9-e655-476f-ac27-f27942209b6f-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "5ec26ef9-e655-476f-ac27-f27942209b6f" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 16:59:03 crc kubenswrapper[4606]: I0123 16:59:03.890923 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ec26ef9-e655-476f-ac27-f27942209b6f-kube-api-access-rzphp" (OuterVolumeSpecName: "kube-api-access-rzphp") pod "5ec26ef9-e655-476f-ac27-f27942209b6f" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f"). InnerVolumeSpecName "kube-api-access-rzphp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:59:03 crc kubenswrapper[4606]: I0123 16:59:03.890928 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ec26ef9-e655-476f-ac27-f27942209b6f-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "5ec26ef9-e655-476f-ac27-f27942209b6f" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 16:59:03 crc kubenswrapper[4606]: I0123 16:59:03.891763 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ec26ef9-e655-476f-ac27-f27942209b6f-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "5ec26ef9-e655-476f-ac27-f27942209b6f" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:59:03 crc kubenswrapper[4606]: I0123 16:59:03.892045 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ec26ef9-e655-476f-ac27-f27942209b6f-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "5ec26ef9-e655-476f-ac27-f27942209b6f" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 16:59:03 crc kubenswrapper[4606]: I0123 16:59:03.894225 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "5ec26ef9-e655-476f-ac27-f27942209b6f" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 23 16:59:03 crc kubenswrapper[4606]: I0123 16:59:03.909045 4606 generic.go:334] "Generic (PLEG): container finished" podID="5ec26ef9-e655-476f-ac27-f27942209b6f" containerID="c4618ff46b5e2639c43c1b846fce3451fe41c5e9ecbc6c6662eac8e575b4332f" exitCode=0 Jan 23 16:59:03 crc kubenswrapper[4606]: I0123 16:59:03.909142 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" event={"ID":"5ec26ef9-e655-476f-ac27-f27942209b6f","Type":"ContainerDied","Data":"c4618ff46b5e2639c43c1b846fce3451fe41c5e9ecbc6c6662eac8e575b4332f"} Jan 23 16:59:03 crc kubenswrapper[4606]: I0123 16:59:03.909497 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" event={"ID":"5ec26ef9-e655-476f-ac27-f27942209b6f","Type":"ContainerDied","Data":"eea67d66f1ad1e209b6c42ae39998a7c568936e62c91747e1396a27be04ae850"} Jan 23 16:59:03 crc kubenswrapper[4606]: I0123 16:59:03.909527 4606 scope.go:117] "RemoveContainer" containerID="c4618ff46b5e2639c43c1b846fce3451fe41c5e9ecbc6c6662eac8e575b4332f" Jan 23 16:59:03 crc kubenswrapper[4606]: I0123 16:59:03.909199 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2bspd" Jan 23 16:59:03 crc kubenswrapper[4606]: I0123 16:59:03.913765 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ec26ef9-e655-476f-ac27-f27942209b6f-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "5ec26ef9-e655-476f-ac27-f27942209b6f" (UID: "5ec26ef9-e655-476f-ac27-f27942209b6f"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 16:59:03 crc kubenswrapper[4606]: I0123 16:59:03.929007 4606 scope.go:117] "RemoveContainer" containerID="c4618ff46b5e2639c43c1b846fce3451fe41c5e9ecbc6c6662eac8e575b4332f" Jan 23 16:59:03 crc kubenswrapper[4606]: E0123 16:59:03.929470 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4618ff46b5e2639c43c1b846fce3451fe41c5e9ecbc6c6662eac8e575b4332f\": container with ID starting with c4618ff46b5e2639c43c1b846fce3451fe41c5e9ecbc6c6662eac8e575b4332f not found: ID does not exist" containerID="c4618ff46b5e2639c43c1b846fce3451fe41c5e9ecbc6c6662eac8e575b4332f" Jan 23 16:59:03 crc kubenswrapper[4606]: I0123 16:59:03.929520 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4618ff46b5e2639c43c1b846fce3451fe41c5e9ecbc6c6662eac8e575b4332f"} err="failed to get container status \"c4618ff46b5e2639c43c1b846fce3451fe41c5e9ecbc6c6662eac8e575b4332f\": rpc error: code = NotFound desc = could not find container \"c4618ff46b5e2639c43c1b846fce3451fe41c5e9ecbc6c6662eac8e575b4332f\": container with ID starting with c4618ff46b5e2639c43c1b846fce3451fe41c5e9ecbc6c6662eac8e575b4332f not found: ID does not exist" Jan 23 16:59:03 crc kubenswrapper[4606]: I0123 16:59:03.984970 4606 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5ec26ef9-e655-476f-ac27-f27942209b6f-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 23 16:59:03 crc kubenswrapper[4606]: I0123 16:59:03.985017 4606 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5ec26ef9-e655-476f-ac27-f27942209b6f-ca-trust-extracted\") on 
node \"crc\" DevicePath \"\"" Jan 23 16:59:03 crc kubenswrapper[4606]: I0123 16:59:03.985030 4606 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5ec26ef9-e655-476f-ac27-f27942209b6f-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 23 16:59:03 crc kubenswrapper[4606]: I0123 16:59:03.985043 4606 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5ec26ef9-e655-476f-ac27-f27942209b6f-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 16:59:03 crc kubenswrapper[4606]: I0123 16:59:03.985054 4606 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5ec26ef9-e655-476f-ac27-f27942209b6f-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 16:59:03 crc kubenswrapper[4606]: I0123 16:59:03.985066 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzphp\" (UniqueName: \"kubernetes.io/projected/5ec26ef9-e655-476f-ac27-f27942209b6f-kube-api-access-rzphp\") on node \"crc\" DevicePath \"\"" Jan 23 16:59:03 crc kubenswrapper[4606]: I0123 16:59:03.985077 4606 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5ec26ef9-e655-476f-ac27-f27942209b6f-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 23 16:59:04 crc kubenswrapper[4606]: I0123 16:59:04.241338 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2bspd"] Jan 23 16:59:04 crc kubenswrapper[4606]: I0123 16:59:04.246499 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2bspd"] Jan 23 16:59:04 crc kubenswrapper[4606]: I0123 16:59:04.373546 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ec26ef9-e655-476f-ac27-f27942209b6f" path="/var/lib/kubelet/pods/5ec26ef9-e655-476f-ac27-f27942209b6f/volumes" Jan 23 16:59:29 crc kubenswrapper[4606]: I0123 16:59:29.934501 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 16:59:29 crc kubenswrapper[4606]: I0123 16:59:29.935708 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 16:59:59 crc kubenswrapper[4606]: I0123 16:59:59.934301 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 16:59:59 crc kubenswrapper[4606]: I0123 16:59:59.934923 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 16:59:59 crc 
kubenswrapper[4606]: I0123 16:59:59.934982 4606 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" Jan 23 16:59:59 crc kubenswrapper[4606]: I0123 16:59:59.935656 4606 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8cf4b07e2ad3950b42f8b972aeb9bf7966a5e6fa914bd47d93edb65756e33cc8"} pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 16:59:59 crc kubenswrapper[4606]: I0123 16:59:59.935722 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" containerID="cri-o://8cf4b07e2ad3950b42f8b972aeb9bf7966a5e6fa914bd47d93edb65756e33cc8" gracePeriod=600 Jan 23 17:00:00 crc kubenswrapper[4606]: I0123 17:00:00.161833 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486460-vmbzw"] Jan 23 17:00:00 crc kubenswrapper[4606]: E0123 17:00:00.162522 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ec26ef9-e655-476f-ac27-f27942209b6f" containerName="registry" Jan 23 17:00:00 crc kubenswrapper[4606]: I0123 17:00:00.162537 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ec26ef9-e655-476f-ac27-f27942209b6f" containerName="registry" Jan 23 17:00:00 crc kubenswrapper[4606]: I0123 17:00:00.162653 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ec26ef9-e655-476f-ac27-f27942209b6f" containerName="registry" Jan 23 17:00:00 crc kubenswrapper[4606]: I0123 17:00:00.163097 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486460-vmbzw" Jan 23 17:00:00 crc kubenswrapper[4606]: I0123 17:00:00.165522 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 17:00:00 crc kubenswrapper[4606]: I0123 17:00:00.165879 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 17:00:00 crc kubenswrapper[4606]: I0123 17:00:00.178535 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486460-vmbzw"] Jan 23 17:00:00 crc kubenswrapper[4606]: I0123 17:00:00.259794 4606 generic.go:334] "Generic (PLEG): container finished" podID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerID="8cf4b07e2ad3950b42f8b972aeb9bf7966a5e6fa914bd47d93edb65756e33cc8" exitCode=0 Jan 23 17:00:00 crc kubenswrapper[4606]: I0123 17:00:00.259838 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerDied","Data":"8cf4b07e2ad3950b42f8b972aeb9bf7966a5e6fa914bd47d93edb65756e33cc8"} Jan 23 17:00:00 crc kubenswrapper[4606]: I0123 17:00:00.259864 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerStarted","Data":"6bf772329cdfa9ebae83dfcb79df9cfd8a68a9a3c277497c6d856cec3d4e5cf8"} Jan 23 17:00:00 crc kubenswrapper[4606]: I0123 17:00:00.259882 4606 scope.go:117] "RemoveContainer" containerID="17386f64cac9cbbbed0c8f887e81839674e48a2c1d7ec4e5c07200411e3a3e03" Jan 23 17:00:00 crc kubenswrapper[4606]: I0123 17:00:00.294037 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr5xx\" (UniqueName: \"kubernetes.io/projected/65917137-b412-4309-97ac-899450da05f4-kube-api-access-mr5xx\") pod \"collect-profiles-29486460-vmbzw\" (UID: \"65917137-b412-4309-97ac-899450da05f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486460-vmbzw" Jan 23 17:00:00 crc kubenswrapper[4606]: I0123 17:00:00.294197 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65917137-b412-4309-97ac-899450da05f4-secret-volume\") pod \"collect-profiles-29486460-vmbzw\" (UID: \"65917137-b412-4309-97ac-899450da05f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486460-vmbzw" Jan 23 17:00:00 crc kubenswrapper[4606]: I0123 17:00:00.294228 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65917137-b412-4309-97ac-899450da05f4-config-volume\") pod \"collect-profiles-29486460-vmbzw\" (UID: \"65917137-b412-4309-97ac-899450da05f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486460-vmbzw" Jan 23 17:00:00 crc kubenswrapper[4606]: I0123 17:00:00.395477 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr5xx\" (UniqueName: \"kubernetes.io/projected/65917137-b412-4309-97ac-899450da05f4-kube-api-access-mr5xx\") pod \"collect-profiles-29486460-vmbzw\" (UID: \"65917137-b412-4309-97ac-899450da05f4\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29486460-vmbzw" Jan 23 17:00:00 crc kubenswrapper[4606]: I0123 17:00:00.395536 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65917137-b412-4309-97ac-899450da05f4-secret-volume\") pod \"collect-profiles-29486460-vmbzw\" (UID: \"65917137-b412-4309-97ac-899450da05f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486460-vmbzw" Jan 23 17:00:00 crc kubenswrapper[4606]: I0123 17:00:00.395560 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65917137-b412-4309-97ac-899450da05f4-config-volume\") pod \"collect-profiles-29486460-vmbzw\" (UID: \"65917137-b412-4309-97ac-899450da05f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486460-vmbzw" Jan 23 17:00:00 crc kubenswrapper[4606]: I0123 17:00:00.396379 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65917137-b412-4309-97ac-899450da05f4-config-volume\") pod \"collect-profiles-29486460-vmbzw\" (UID: \"65917137-b412-4309-97ac-899450da05f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486460-vmbzw" Jan 23 17:00:00 crc kubenswrapper[4606]: I0123 17:00:00.400606 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65917137-b412-4309-97ac-899450da05f4-secret-volume\") pod \"collect-profiles-29486460-vmbzw\" (UID: \"65917137-b412-4309-97ac-899450da05f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486460-vmbzw" Jan 23 17:00:00 crc kubenswrapper[4606]: I0123 17:00:00.414906 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr5xx\" (UniqueName: \"kubernetes.io/projected/65917137-b412-4309-97ac-899450da05f4-kube-api-access-mr5xx\") pod \"collect-profiles-29486460-vmbzw\" (UID: \"65917137-b412-4309-97ac-899450da05f4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486460-vmbzw" Jan 23 17:00:00 crc kubenswrapper[4606]: I0123 17:00:00.483542 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486460-vmbzw" Jan 23 17:00:00 crc kubenswrapper[4606]: I0123 17:00:00.913882 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486460-vmbzw"] Jan 23 17:00:00 crc kubenswrapper[4606]: W0123 17:00:00.918987 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65917137_b412_4309_97ac_899450da05f4.slice/crio-59e0d5e349913e5322e3c7fb31bfad59a3930fbde2d649778053aaf9966190b9 WatchSource:0}: Error finding container 59e0d5e349913e5322e3c7fb31bfad59a3930fbde2d649778053aaf9966190b9: Status 404 returned error can't find the container with id 59e0d5e349913e5322e3c7fb31bfad59a3930fbde2d649778053aaf9966190b9 Jan 23 17:00:01 crc kubenswrapper[4606]: I0123 17:00:01.266333 4606 generic.go:334] "Generic (PLEG): container finished" podID="65917137-b412-4309-97ac-899450da05f4" containerID="e8adc24d9125a3d67c711a647589a5eb3df40d11cd4dd0afe988d823e0dabe96" exitCode=0 Jan 23 17:00:01 crc kubenswrapper[4606]: I0123 17:00:01.266425 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486460-vmbzw" event={"ID":"65917137-b412-4309-97ac-899450da05f4","Type":"ContainerDied","Data":"e8adc24d9125a3d67c711a647589a5eb3df40d11cd4dd0afe988d823e0dabe96"} Jan 23 17:00:01 crc kubenswrapper[4606]: I0123 17:00:01.266708 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486460-vmbzw" event={"ID":"65917137-b412-4309-97ac-899450da05f4","Type":"ContainerStarted","Data":"59e0d5e349913e5322e3c7fb31bfad59a3930fbde2d649778053aaf9966190b9"} Jan 23 17:00:02 crc kubenswrapper[4606]: I0123 17:00:02.507923 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486460-vmbzw" Jan 23 17:00:02 crc kubenswrapper[4606]: I0123 17:00:02.624873 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mr5xx\" (UniqueName: \"kubernetes.io/projected/65917137-b412-4309-97ac-899450da05f4-kube-api-access-mr5xx\") pod \"65917137-b412-4309-97ac-899450da05f4\" (UID: \"65917137-b412-4309-97ac-899450da05f4\") " Jan 23 17:00:02 crc kubenswrapper[4606]: I0123 17:00:02.625315 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65917137-b412-4309-97ac-899450da05f4-config-volume\") pod \"65917137-b412-4309-97ac-899450da05f4\" (UID: \"65917137-b412-4309-97ac-899450da05f4\") " Jan 23 17:00:02 crc kubenswrapper[4606]: I0123 17:00:02.625545 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65917137-b412-4309-97ac-899450da05f4-secret-volume\") pod \"65917137-b412-4309-97ac-899450da05f4\" (UID: \"65917137-b412-4309-97ac-899450da05f4\") " Jan 23 17:00:02 crc kubenswrapper[4606]: I0123 17:00:02.626167 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65917137-b412-4309-97ac-899450da05f4-config-volume" (OuterVolumeSpecName: "config-volume") pod "65917137-b412-4309-97ac-899450da05f4" (UID: "65917137-b412-4309-97ac-899450da05f4"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:00:02 crc kubenswrapper[4606]: I0123 17:00:02.630879 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65917137-b412-4309-97ac-899450da05f4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "65917137-b412-4309-97ac-899450da05f4" (UID: "65917137-b412-4309-97ac-899450da05f4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:00:02 crc kubenswrapper[4606]: I0123 17:00:02.631878 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65917137-b412-4309-97ac-899450da05f4-kube-api-access-mr5xx" (OuterVolumeSpecName: "kube-api-access-mr5xx") pod "65917137-b412-4309-97ac-899450da05f4" (UID: "65917137-b412-4309-97ac-899450da05f4"). InnerVolumeSpecName "kube-api-access-mr5xx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:00:02 crc kubenswrapper[4606]: I0123 17:00:02.727516 4606 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65917137-b412-4309-97ac-899450da05f4-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 17:00:02 crc kubenswrapper[4606]: I0123 17:00:02.727561 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mr5xx\" (UniqueName: \"kubernetes.io/projected/65917137-b412-4309-97ac-899450da05f4-kube-api-access-mr5xx\") on node \"crc\" DevicePath \"\"" Jan 23 17:00:02 crc kubenswrapper[4606]: I0123 17:00:02.727573 4606 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65917137-b412-4309-97ac-899450da05f4-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 17:00:03 crc kubenswrapper[4606]: I0123 17:00:03.283349 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486460-vmbzw" event={"ID":"65917137-b412-4309-97ac-899450da05f4","Type":"ContainerDied","Data":"59e0d5e349913e5322e3c7fb31bfad59a3930fbde2d649778053aaf9966190b9"} Jan 23 17:00:03 crc kubenswrapper[4606]: I0123 17:00:03.283792 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59e0d5e349913e5322e3c7fb31bfad59a3930fbde2d649778053aaf9966190b9" Jan 23 17:00:03 crc kubenswrapper[4606]: I0123 17:00:03.283446 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486460-vmbzw" Jan 23 17:02:00 crc kubenswrapper[4606]: I0123 17:02:00.873183 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-nxh4d"] Jan 23 17:02:00 crc kubenswrapper[4606]: E0123 17:02:00.874621 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65917137-b412-4309-97ac-899450da05f4" containerName="collect-profiles" Jan 23 17:02:00 crc kubenswrapper[4606]: I0123 17:02:00.874642 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="65917137-b412-4309-97ac-899450da05f4" containerName="collect-profiles" Jan 23 17:02:00 crc kubenswrapper[4606]: I0123 17:02:00.874758 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="65917137-b412-4309-97ac-899450da05f4" containerName="collect-profiles" Jan 23 17:02:00 crc kubenswrapper[4606]: I0123 17:02:00.875370 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-nxh4d" Jan 23 17:02:00 crc kubenswrapper[4606]: I0123 17:02:00.877754 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 23 17:02:00 crc kubenswrapper[4606]: I0123 17:02:00.878073 4606 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-hwpd5" Jan 23 17:02:00 crc kubenswrapper[4606]: I0123 17:02:00.881289 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 23 17:02:00 crc kubenswrapper[4606]: I0123 17:02:00.909472 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-69xjr"] Jan 23 17:02:00 crc kubenswrapper[4606]: I0123 17:02:00.910499 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-69xjr" Jan 23 17:02:00 crc kubenswrapper[4606]: I0123 17:02:00.919372 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-z99zq"] Jan 23 17:02:00 crc kubenswrapper[4606]: I0123 17:02:00.920312 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-z99zq" Jan 23 17:02:00 crc kubenswrapper[4606]: I0123 17:02:00.921609 4606 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-v8dvx" Jan 23 17:02:00 crc kubenswrapper[4606]: I0123 17:02:00.923520 4606 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-xvsfn" Jan 23 17:02:00 crc kubenswrapper[4606]: I0123 17:02:00.934173 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-69xjr"] Jan 23 17:02:00 crc kubenswrapper[4606]: I0123 17:02:00.970147 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-nxh4d"] Jan 23 17:02:00 crc kubenswrapper[4606]: I0123 17:02:00.976905 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-z99zq"] Jan 23 17:02:01 crc kubenswrapper[4606]: I0123 17:02:01.068489 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mzqh\" (UniqueName: \"kubernetes.io/projected/746ecbf9-06c9-43d1-91fe-2269b27278bb-kube-api-access-5mzqh\") pod \"cert-manager-858654f9db-69xjr\" (UID: \"746ecbf9-06c9-43d1-91fe-2269b27278bb\") " pod="cert-manager/cert-manager-858654f9db-69xjr" Jan 23 17:02:01 crc kubenswrapper[4606]: I0123 17:02:01.068554 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p94v6\" (UniqueName: \"kubernetes.io/projected/746ca208-2336-44fb-b3e3-be8fa94e33eb-kube-api-access-p94v6\") pod \"cert-manager-cainjector-cf98fcc89-nxh4d\" (UID: \"746ca208-2336-44fb-b3e3-be8fa94e33eb\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-nxh4d" Jan 23 17:02:01 crc kubenswrapper[4606]: I0123 17:02:01.068608 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxn5f\" (UniqueName: \"kubernetes.io/projected/0a2bd774-59e0-478d-b420-4b2017f3f6de-kube-api-access-qxn5f\") pod \"cert-manager-webhook-687f57d79b-z99zq\" (UID: \"0a2bd774-59e0-478d-b420-4b2017f3f6de\") " pod="cert-manager/cert-manager-webhook-687f57d79b-z99zq" Jan 23 17:02:01 crc 
kubenswrapper[4606]: I0123 17:02:01.170029 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxn5f\" (UniqueName: \"kubernetes.io/projected/0a2bd774-59e0-478d-b420-4b2017f3f6de-kube-api-access-qxn5f\") pod \"cert-manager-webhook-687f57d79b-z99zq\" (UID: \"0a2bd774-59e0-478d-b420-4b2017f3f6de\") " pod="cert-manager/cert-manager-webhook-687f57d79b-z99zq" Jan 23 17:02:01 crc kubenswrapper[4606]: I0123 17:02:01.170112 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mzqh\" (UniqueName: \"kubernetes.io/projected/746ecbf9-06c9-43d1-91fe-2269b27278bb-kube-api-access-5mzqh\") pod \"cert-manager-858654f9db-69xjr\" (UID: \"746ecbf9-06c9-43d1-91fe-2269b27278bb\") " pod="cert-manager/cert-manager-858654f9db-69xjr" Jan 23 17:02:01 crc kubenswrapper[4606]: I0123 17:02:01.170163 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p94v6\" (UniqueName: \"kubernetes.io/projected/746ca208-2336-44fb-b3e3-be8fa94e33eb-kube-api-access-p94v6\") pod \"cert-manager-cainjector-cf98fcc89-nxh4d\" (UID: \"746ca208-2336-44fb-b3e3-be8fa94e33eb\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-nxh4d" Jan 23 17:02:01 crc kubenswrapper[4606]: I0123 17:02:01.199442 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxn5f\" (UniqueName: \"kubernetes.io/projected/0a2bd774-59e0-478d-b420-4b2017f3f6de-kube-api-access-qxn5f\") pod \"cert-manager-webhook-687f57d79b-z99zq\" (UID: \"0a2bd774-59e0-478d-b420-4b2017f3f6de\") " pod="cert-manager/cert-manager-webhook-687f57d79b-z99zq" Jan 23 17:02:01 crc kubenswrapper[4606]: I0123 17:02:01.200040 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mzqh\" (UniqueName: \"kubernetes.io/projected/746ecbf9-06c9-43d1-91fe-2269b27278bb-kube-api-access-5mzqh\") pod \"cert-manager-858654f9db-69xjr\" (UID: \"746ecbf9-06c9-43d1-91fe-2269b27278bb\") " pod="cert-manager/cert-manager-858654f9db-69xjr" Jan 23 17:02:01 crc kubenswrapper[4606]: I0123 17:02:01.202498 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p94v6\" (UniqueName: \"kubernetes.io/projected/746ca208-2336-44fb-b3e3-be8fa94e33eb-kube-api-access-p94v6\") pod \"cert-manager-cainjector-cf98fcc89-nxh4d\" (UID: \"746ca208-2336-44fb-b3e3-be8fa94e33eb\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-nxh4d" Jan 23 17:02:01 crc kubenswrapper[4606]: I0123 17:02:01.238644 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-69xjr" Jan 23 17:02:01 crc kubenswrapper[4606]: I0123 17:02:01.244296 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-z99zq" Jan 23 17:02:01 crc kubenswrapper[4606]: I0123 17:02:01.496858 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-nxh4d" Jan 23 17:02:01 crc kubenswrapper[4606]: I0123 17:02:01.508623 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-69xjr"] Jan 23 17:02:01 crc kubenswrapper[4606]: I0123 17:02:01.522685 4606 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 17:02:01 crc kubenswrapper[4606]: I0123 17:02:01.678671 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-z99zq"] Jan 23 17:02:01 crc kubenswrapper[4606]: W0123 17:02:01.686142 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a2bd774_59e0_478d_b420_4b2017f3f6de.slice/crio-db10a3ecebb6e3187ccffdb6690ef8d6939262bb13b9a66604a9bc11091002aa WatchSource:0}: Error finding container db10a3ecebb6e3187ccffdb6690ef8d6939262bb13b9a66604a9bc11091002aa: Status 404 returned error can't find the container with id db10a3ecebb6e3187ccffdb6690ef8d6939262bb13b9a66604a9bc11091002aa Jan 23 17:02:01 crc kubenswrapper[4606]: I0123 17:02:01.717980 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-nxh4d"] Jan 23 17:02:01 crc kubenswrapper[4606]: I0123 17:02:01.977840 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-nxh4d" event={"ID":"746ca208-2336-44fb-b3e3-be8fa94e33eb","Type":"ContainerStarted","Data":"bb2aff34284ae7051370a50fea1ee7c14db844795317d69e2a4f27d680eb6947"} Jan 23 17:02:01 crc kubenswrapper[4606]: I0123 17:02:01.978512 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-z99zq" event={"ID":"0a2bd774-59e0-478d-b420-4b2017f3f6de","Type":"ContainerStarted","Data":"db10a3ecebb6e3187ccffdb6690ef8d6939262bb13b9a66604a9bc11091002aa"} Jan 23 17:02:01 crc kubenswrapper[4606]: I0123 17:02:01.979308 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-69xjr" event={"ID":"746ecbf9-06c9-43d1-91fe-2269b27278bb","Type":"ContainerStarted","Data":"02cd71253a270f8e50aeea19a44ee9b478c707ee5414e29bf60e12aedbda0ed7"} Jan 23 17:02:05 crc kubenswrapper[4606]: I0123 17:02:05.003088 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-69xjr" event={"ID":"746ecbf9-06c9-43d1-91fe-2269b27278bb","Type":"ContainerStarted","Data":"6f8579b993e8963f8b3f3a45e04ca9066caae74fe34595a6d2f938869f0bc586"} Jan 23 17:02:05 crc kubenswrapper[4606]: I0123 17:02:05.028932 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-69xjr" podStartSLOduration=2.190910454 podStartE2EDuration="5.028908599s" podCreationTimestamp="2026-01-23 17:02:00 +0000 UTC" firstStartedPulling="2026-01-23 17:02:01.521768631 +0000 UTC m=+555.521961037" lastFinishedPulling="2026-01-23 17:02:04.359766776 +0000 UTC m=+558.359959182" observedRunningTime="2026-01-23 17:02:05.020481365 +0000 UTC m=+559.020673771" watchObservedRunningTime="2026-01-23 17:02:05.028908599 +0000 UTC m=+559.029101005" Jan 23 17:02:06 crc kubenswrapper[4606]: I0123 17:02:06.009881 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-nxh4d" 
event={"ID":"746ca208-2336-44fb-b3e3-be8fa94e33eb","Type":"ContainerStarted","Data":"9cf330764c5d6ebf49c6f99641e1b6a5890e215fa61fdb8b7818fec16d979c4d"} Jan 23 17:02:06 crc kubenswrapper[4606]: I0123 17:02:06.012698 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-z99zq" event={"ID":"0a2bd774-59e0-478d-b420-4b2017f3f6de","Type":"ContainerStarted","Data":"d9b4f994d713180ff268da632efd47ac4ac4ced49cf3aa79193494a4b11b3b08"} Jan 23 17:02:06 crc kubenswrapper[4606]: I0123 17:02:06.012840 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-z99zq" Jan 23 17:02:06 crc kubenswrapper[4606]: I0123 17:02:06.055024 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-nxh4d" podStartSLOduration=3.36130729 podStartE2EDuration="6.054978566s" podCreationTimestamp="2026-01-23 17:02:00 +0000 UTC" firstStartedPulling="2026-01-23 17:02:01.724954557 +0000 UTC m=+555.725146963" lastFinishedPulling="2026-01-23 17:02:04.418625833 +0000 UTC m=+558.418818239" observedRunningTime="2026-01-23 17:02:06.030633767 +0000 UTC m=+560.030826173" watchObservedRunningTime="2026-01-23 17:02:06.054978566 +0000 UTC m=+560.055170972" Jan 23 17:02:06 crc kubenswrapper[4606]: I0123 17:02:06.061567 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-z99zq" podStartSLOduration=2.326622874 podStartE2EDuration="6.061547063s" podCreationTimestamp="2026-01-23 17:02:00 +0000 UTC" firstStartedPulling="2026-01-23 17:02:01.688034039 +0000 UTC m=+555.688226445" lastFinishedPulling="2026-01-23 17:02:05.422958228 +0000 UTC m=+559.423150634" observedRunningTime="2026-01-23 17:02:06.053534379 +0000 UTC m=+560.053726805" watchObservedRunningTime="2026-01-23 17:02:06.061547063 +0000 UTC m=+560.061739469" Jan 23 17:02:09 crc kubenswrapper[4606]: I0123 17:02:09.990743 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-j9tqd"] Jan 23 17:02:09 crc kubenswrapper[4606]: I0123 17:02:09.991627 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="ovn-controller" containerID="cri-o://b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03" gracePeriod=30 Jan 23 17:02:09 crc kubenswrapper[4606]: I0123 17:02:09.991680 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36" gracePeriod=30 Jan 23 17:02:09 crc kubenswrapper[4606]: I0123 17:02:09.991687 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="kube-rbac-proxy-node" containerID="cri-o://ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5" gracePeriod=30 Jan 23 17:02:09 crc kubenswrapper[4606]: I0123 17:02:09.991677 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="nbdb" 
containerID="cri-o://7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14" gracePeriod=30 Jan 23 17:02:09 crc kubenswrapper[4606]: I0123 17:02:09.991792 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="sbdb" containerID="cri-o://9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5" gracePeriod=30 Jan 23 17:02:09 crc kubenswrapper[4606]: I0123 17:02:09.991802 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="northd" containerID="cri-o://0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352" gracePeriod=30 Jan 23 17:02:09 crc kubenswrapper[4606]: I0123 17:02:09.991826 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="ovn-acl-logging" containerID="cri-o://9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe" gracePeriod=30 Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.043728 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="ovnkube-controller" containerID="cri-o://17e37a119bb4d7f97ae127e24e5405535ae250f98b76f92518df5b8ebcaf67ce" gracePeriod=30 Jan 23 17:02:10 crc kubenswrapper[4606]: E0123 17:02:10.247827 4606 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14 is running failed: container process not found" containerID="7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Jan 23 17:02:10 crc kubenswrapper[4606]: E0123 17:02:10.247827 4606 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5 is running failed: container process not found" containerID="9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Jan 23 17:02:10 crc kubenswrapper[4606]: E0123 17:02:10.248471 4606 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14 is running failed: container process not found" containerID="7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
Jan 23 17:02:10 crc kubenswrapper[4606]: E0123 17:02:10.248532 4606 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5 is running failed: container process not found" containerID="9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Jan 23 17:02:10 crc kubenswrapper[4606]: E0123 17:02:10.248811 4606 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14 is running failed: container process not found" containerID="7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Jan 23 17:02:10 crc kubenswrapper[4606]: E0123 17:02:10.248840 4606 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="nbdb"
Jan 23 17:02:10 crc kubenswrapper[4606]: E0123 17:02:10.248924 4606 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5 is running failed: container process not found" containerID="9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Jan 23 17:02:10 crc kubenswrapper[4606]: E0123 17:02:10.248944 4606 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="sbdb"
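The paired ExecSync/"Probe errored" records above show exec readiness probes racing container teardown: once the container process is gone, the runtime answers ExecSync with gRPC NotFound, and the prober surfaces that as a probe error rather than a clean not-ready result. A stand-in Go sketch of that control flow (execSync and errNotFound are placeholders for this sketch, not the real CRI client):

package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the gRPC NotFound status the runtime returns
// once the probed container's process no longer exists.
var errNotFound = errors.New("rpc error: code = NotFound desc = container is not created or running")

// execSync is a placeholder for the CRI ExecSync call behind exec probes.
func execSync(containerID string, cmd []string) error {
	// The container was just killed with gracePeriod=30, so the runtime
	// can no longer find its process.
	return fmt.Errorf("checking container %s: %w", containerID, errNotFound)
}

// runReadinessProbe mirrors the prober's handling above: a transport-level
// error is reported as "Probe errored" instead of an unready status.
func runReadinessProbe(containerName, containerID string) {
	cmd := []string{"/bin/bash", "-c", `ovndb-readiness-probe "nb"`}
	if err := execSync(containerID, cmd); err != nil {
		fmt.Printf("Probe errored: err=%q probeType=\"Readiness\" containerName=%q\n", err, containerName)
		return
	}
	fmt.Println("probe command ran")
}

func main() {
	runReadinessProbe("nbdb", "7318f2b965cb")
}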
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.391827 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-cppjm"] Jan 23 17:02:10 crc kubenswrapper[4606]: E0123 17:02:10.392068 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="ovn-controller" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.392087 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="ovn-controller" Jan 23 17:02:10 crc kubenswrapper[4606]: E0123 17:02:10.392102 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="kube-rbac-proxy-node" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.392112 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="kube-rbac-proxy-node" Jan 23 17:02:10 crc kubenswrapper[4606]: E0123 17:02:10.392122 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="ovnkube-controller" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.392131 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="ovnkube-controller" Jan 23 17:02:10 crc kubenswrapper[4606]: E0123 17:02:10.392140 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="ovnkube-controller" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.392149 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="ovnkube-controller" Jan 23 17:02:10 crc kubenswrapper[4606]: E0123 17:02:10.392159 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="sbdb" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.392168 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="sbdb" Jan 23 17:02:10 crc kubenswrapper[4606]: E0123 17:02:10.392180 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="northd" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.392189 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="northd" Jan 23 17:02:10 crc kubenswrapper[4606]: E0123 17:02:10.392199 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="ovnkube-controller" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.392207 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="ovnkube-controller" Jan 23 17:02:10 crc kubenswrapper[4606]: E0123 17:02:10.392219 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="kubecfg-setup" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.392228 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="kubecfg-setup" Jan 23 17:02:10 crc kubenswrapper[4606]: E0123 17:02:10.392242 4606 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="ovn-acl-logging" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.392251 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="ovn-acl-logging" Jan 23 17:02:10 crc kubenswrapper[4606]: E0123 17:02:10.392267 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="nbdb" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.392275 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="nbdb" Jan 23 17:02:10 crc kubenswrapper[4606]: E0123 17:02:10.392285 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="kube-rbac-proxy-ovn-metrics" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.392293 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="kube-rbac-proxy-ovn-metrics" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.392407 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="ovnkube-controller" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.392419 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="kube-rbac-proxy-node" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.392428 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="sbdb" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.392437 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="kube-rbac-proxy-ovn-metrics" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.392448 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="northd" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.392460 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="ovnkube-controller" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.392477 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="nbdb" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.392488 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="ovn-controller" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.392498 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="ovn-acl-logging" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.392507 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="ovnkube-controller" Jan 23 17:02:10 crc kubenswrapper[4606]: E0123 17:02:10.392642 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="ovnkube-controller" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.392653 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="ovnkube-controller" Jan 23 17:02:10 crc kubenswrapper[4606]: E0123 
Jan 23 17:02:10 crc kubenswrapper[4606]: E0123 17:02:10.392662 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="ovnkube-controller"
Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.392670 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="ovnkube-controller"
Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.392782 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="ovnkube-controller"
Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.392795 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" containerName="ovnkube-controller"
Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.394828 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cppjm"
Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.396722 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a0f37a36-f502-425c-a9e5-035bf475742b-ovn-node-metrics-cert\") pod \"a0f37a36-f502-425c-a9e5-035bf475742b\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") "
Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.396761 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-run-systemd\") pod \"a0f37a36-f502-425c-a9e5-035bf475742b\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") "
Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.396789 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-run-ovn-kubernetes\") pod \"a0f37a36-f502-425c-a9e5-035bf475742b\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") "
Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.396813 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-node-log\") pod \"a0f37a36-f502-425c-a9e5-035bf475742b\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") "
Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.396832 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"a0f37a36-f502-425c-a9e5-035bf475742b\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") "
Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.396853 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-run-netns\") pod \"a0f37a36-f502-425c-a9e5-035bf475742b\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") "
Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.396877 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnghd\" (UniqueName: \"kubernetes.io/projected/a0f37a36-f502-425c-a9e5-035bf475742b-kube-api-access-wnghd\") pod \"a0f37a36-f502-425c-a9e5-035bf475742b\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") "
Jan 23 17:02:10 crc kubenswrapper[4606]:
I0123 17:02:10.396974 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-etc-openvswitch\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.396997 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-host-run-ovn-kubernetes\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.397029 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-run-openvswitch\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.397053 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ef3a0177-9035-456d-8c02-b5ed5a76b78d-ovnkube-config\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.397079 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-log-socket\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.397103 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.397130 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-run-ovn\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.397130 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "a0f37a36-f502-425c-a9e5-035bf475742b" (UID: "a0f37a36-f502-425c-a9e5-035bf475742b"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.397149 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-host-cni-netd\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.397180 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-node-log\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.397199 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-host-run-netns\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.397226 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk7xk\" (UniqueName: \"kubernetes.io/projected/ef3a0177-9035-456d-8c02-b5ed5a76b78d-kube-api-access-xk7xk\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.397247 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ef3a0177-9035-456d-8c02-b5ed5a76b78d-ovnkube-script-lib\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.397269 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-host-cni-bin\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.397287 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-systemd-units\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.397306 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-host-kubelet\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.397329 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/ef3a0177-9035-456d-8c02-b5ed5a76b78d-ovn-node-metrics-cert\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.397350 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-var-lib-openvswitch\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.397369 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef3a0177-9035-456d-8c02-b5ed5a76b78d-env-overrides\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.397395 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-host-slash\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.397426 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-run-systemd\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.397467 4606 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.398927 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-node-log" (OuterVolumeSpecName: "node-log") pod "a0f37a36-f502-425c-a9e5-035bf475742b" (UID: "a0f37a36-f502-425c-a9e5-035bf475742b"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.398956 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "a0f37a36-f502-425c-a9e5-035bf475742b" (UID: "a0f37a36-f502-425c-a9e5-035bf475742b"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.398992 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "a0f37a36-f502-425c-a9e5-035bf475742b" (UID: "a0f37a36-f502-425c-a9e5-035bf475742b"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.402963 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0f37a36-f502-425c-a9e5-035bf475742b-kube-api-access-wnghd" (OuterVolumeSpecName: "kube-api-access-wnghd") pod "a0f37a36-f502-425c-a9e5-035bf475742b" (UID: "a0f37a36-f502-425c-a9e5-035bf475742b"). InnerVolumeSpecName "kube-api-access-wnghd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.403515 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0f37a36-f502-425c-a9e5-035bf475742b-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "a0f37a36-f502-425c-a9e5-035bf475742b" (UID: "a0f37a36-f502-425c-a9e5-035bf475742b"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.414737 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "a0f37a36-f502-425c-a9e5-035bf475742b" (UID: "a0f37a36-f502-425c-a9e5-035bf475742b"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.497994 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-var-lib-openvswitch\") pod \"a0f37a36-f502-425c-a9e5-035bf475742b\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.498049 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a0f37a36-f502-425c-a9e5-035bf475742b-env-overrides\") pod \"a0f37a36-f502-425c-a9e5-035bf475742b\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.498081 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-slash\") pod \"a0f37a36-f502-425c-a9e5-035bf475742b\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.498113 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a0f37a36-f502-425c-a9e5-035bf475742b-ovnkube-config\") pod \"a0f37a36-f502-425c-a9e5-035bf475742b\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.498141 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "a0f37a36-f502-425c-a9e5-035bf475742b" (UID: "a0f37a36-f502-425c-a9e5-035bf475742b"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.498165 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-etc-openvswitch\") pod \"a0f37a36-f502-425c-a9e5-035bf475742b\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.498214 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "a0f37a36-f502-425c-a9e5-035bf475742b" (UID: "a0f37a36-f502-425c-a9e5-035bf475742b"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.498234 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-systemd-units\") pod \"a0f37a36-f502-425c-a9e5-035bf475742b\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.498262 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-log-socket\") pod \"a0f37a36-f502-425c-a9e5-035bf475742b\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.498284 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-run-ovn\") pod \"a0f37a36-f502-425c-a9e5-035bf475742b\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.498319 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-cni-netd\") pod \"a0f37a36-f502-425c-a9e5-035bf475742b\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.498351 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a0f37a36-f502-425c-a9e5-035bf475742b-ovnkube-script-lib\") pod \"a0f37a36-f502-425c-a9e5-035bf475742b\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.498365 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-run-openvswitch\") pod \"a0f37a36-f502-425c-a9e5-035bf475742b\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.498379 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-cni-bin\") pod \"a0f37a36-f502-425c-a9e5-035bf475742b\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.498392 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-kubelet\") pod \"a0f37a36-f502-425c-a9e5-035bf475742b\" (UID: \"a0f37a36-f502-425c-a9e5-035bf475742b\") " Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.498578 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-run-openvswitch\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.498638 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ef3a0177-9035-456d-8c02-b5ed5a76b78d-ovnkube-config\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.498680 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-log-socket\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.498711 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.498728 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0f37a36-f502-425c-a9e5-035bf475742b-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "a0f37a36-f502-425c-a9e5-035bf475742b" (UID: "a0f37a36-f502-425c-a9e5-035bf475742b"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.498754 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-run-ovn\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.498775 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-host-cni-netd\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.498781 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-slash" (OuterVolumeSpecName: "host-slash") pod "a0f37a36-f502-425c-a9e5-035bf475742b" (UID: "a0f37a36-f502-425c-a9e5-035bf475742b"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.498847 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-host-cni-netd\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.498886 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "a0f37a36-f502-425c-a9e5-035bf475742b" (UID: "a0f37a36-f502-425c-a9e5-035bf475742b"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.498892 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "a0f37a36-f502-425c-a9e5-035bf475742b" (UID: "a0f37a36-f502-425c-a9e5-035bf475742b"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.498920 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-run-openvswitch\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.498919 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "a0f37a36-f502-425c-a9e5-035bf475742b" (UID: "a0f37a36-f502-425c-a9e5-035bf475742b"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.498937 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-log-socket" (OuterVolumeSpecName: "log-socket") pod "a0f37a36-f502-425c-a9e5-035bf475742b" (UID: "a0f37a36-f502-425c-a9e5-035bf475742b"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.498931 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "a0f37a36-f502-425c-a9e5-035bf475742b" (UID: "a0f37a36-f502-425c-a9e5-035bf475742b"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.499005 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "a0f37a36-f502-425c-a9e5-035bf475742b" (UID: "a0f37a36-f502-425c-a9e5-035bf475742b"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.499028 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "a0f37a36-f502-425c-a9e5-035bf475742b" (UID: "a0f37a36-f502-425c-a9e5-035bf475742b"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.498977 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.499072 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-log-socket\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.499103 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-node-log\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.499279 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-host-run-netns\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.499289 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0f37a36-f502-425c-a9e5-035bf475742b-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "a0f37a36-f502-425c-a9e5-035bf475742b" (UID: "a0f37a36-f502-425c-a9e5-035bf475742b"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.499128 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-run-ovn\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.499179 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-node-log\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.499357 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk7xk\" (UniqueName: \"kubernetes.io/projected/ef3a0177-9035-456d-8c02-b5ed5a76b78d-kube-api-access-xk7xk\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.499398 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-host-run-netns\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.499411 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ef3a0177-9035-456d-8c02-b5ed5a76b78d-ovnkube-script-lib\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.499453 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0f37a36-f502-425c-a9e5-035bf475742b-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "a0f37a36-f502-425c-a9e5-035bf475742b" (UID: "a0f37a36-f502-425c-a9e5-035bf475742b"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.499578 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-host-cni-bin\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.499680 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-systemd-units\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.499704 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-host-kubelet\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.499740 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ef3a0177-9035-456d-8c02-b5ed5a76b78d-ovn-node-metrics-cert\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.499771 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef3a0177-9035-456d-8c02-b5ed5a76b78d-env-overrides\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.499793 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-var-lib-openvswitch\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.499831 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-host-slash\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.499846 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-systemd-units\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.499849 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ef3a0177-9035-456d-8c02-b5ed5a76b78d-ovnkube-config\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 
17:02:10.499892 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-host-cni-bin\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.499909 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-run-systemd\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.499963 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-etc-openvswitch\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.499986 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-host-run-ovn-kubernetes\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.500039 4606 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.500052 4606 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.500063 4606 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a0f37a36-f502-425c-a9e5-035bf475742b-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.500074 4606 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.500085 4606 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a0f37a36-f502-425c-a9e5-035bf475742b-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.500100 4606 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.500112 4606 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.500123 4606 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-node-log\") on node \"crc\" DevicePath \"\"" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.500135 4606 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.500147 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnghd\" (UniqueName: \"kubernetes.io/projected/a0f37a36-f502-425c-a9e5-035bf475742b-kube-api-access-wnghd\") on node \"crc\" DevicePath \"\"" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.500158 4606 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.500181 4606 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a0f37a36-f502-425c-a9e5-035bf475742b-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.500193 4606 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-slash\") on node \"crc\" DevicePath \"\"" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.500204 4606 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a0f37a36-f502-425c-a9e5-035bf475742b-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.500214 4606 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.500225 4606 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.500237 4606 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-log-socket\") on node \"crc\" DevicePath \"\"" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.500248 4606 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.500260 4606 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a0f37a36-f502-425c-a9e5-035bf475742b-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.500296 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-host-run-ovn-kubernetes\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.500335 
4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-var-lib-openvswitch\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.500367 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-host-slash\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.500410 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-run-systemd\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.500412 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef3a0177-9035-456d-8c02-b5ed5a76b78d-env-overrides\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.500419 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ef3a0177-9035-456d-8c02-b5ed5a76b78d-ovnkube-script-lib\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.500443 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-etc-openvswitch\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.500448 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ef3a0177-9035-456d-8c02-b5ed5a76b78d-host-kubelet\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.504009 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ef3a0177-9035-456d-8c02-b5ed5a76b78d-ovn-node-metrics-cert\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.516108 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk7xk\" (UniqueName: \"kubernetes.io/projected/ef3a0177-9035-456d-8c02-b5ed5a76b78d-kube-api-access-xk7xk\") pod \"ovnkube-node-cppjm\" (UID: \"ef3a0177-9035-456d-8c02-b5ed5a76b78d\") " pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: I0123 17:02:10.713494 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:10 crc kubenswrapper[4606]: W0123 17:02:10.743396 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef3a0177_9035_456d_8c02_b5ed5a76b78d.slice/crio-45dda4b8d71c64598e55762973aa67c07443ed83be51d2e250a229f85fc1ad55 WatchSource:0}: Error finding container 45dda4b8d71c64598e55762973aa67c07443ed83be51d2e250a229f85fc1ad55: Status 404 returned error can't find the container with id 45dda4b8d71c64598e55762973aa67c07443ed83be51d2e250a229f85fc1ad55 Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.045175 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j9tqd_a0f37a36-f502-425c-a9e5-035bf475742b/ovnkube-controller/3.log" Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.047449 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j9tqd_a0f37a36-f502-425c-a9e5-035bf475742b/ovn-acl-logging/0.log" Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.047951 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-j9tqd_a0f37a36-f502-425c-a9e5-035bf475742b/ovn-controller/0.log" Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048280 4606 generic.go:334] "Generic (PLEG): container finished" podID="a0f37a36-f502-425c-a9e5-035bf475742b" containerID="17e37a119bb4d7f97ae127e24e5405535ae250f98b76f92518df5b8ebcaf67ce" exitCode=0 Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048309 4606 generic.go:334] "Generic (PLEG): container finished" podID="a0f37a36-f502-425c-a9e5-035bf475742b" containerID="9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5" exitCode=0 Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048319 4606 generic.go:334] "Generic (PLEG): container finished" podID="a0f37a36-f502-425c-a9e5-035bf475742b" containerID="7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14" exitCode=0 Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048328 4606 generic.go:334] "Generic (PLEG): container finished" podID="a0f37a36-f502-425c-a9e5-035bf475742b" containerID="0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352" exitCode=0 Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048335 4606 generic.go:334] "Generic (PLEG): container finished" podID="a0f37a36-f502-425c-a9e5-035bf475742b" containerID="143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36" exitCode=0 Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048343 4606 generic.go:334] "Generic (PLEG): container finished" podID="a0f37a36-f502-425c-a9e5-035bf475742b" containerID="ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5" exitCode=0 Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048351 4606 generic.go:334] "Generic (PLEG): container finished" podID="a0f37a36-f502-425c-a9e5-035bf475742b" containerID="9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe" exitCode=143 Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048359 4606 generic.go:334] "Generic (PLEG): container finished" podID="a0f37a36-f502-425c-a9e5-035bf475742b" containerID="b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03" exitCode=143 Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048400 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" 
event={"ID":"a0f37a36-f502-425c-a9e5-035bf475742b","Type":"ContainerDied","Data":"17e37a119bb4d7f97ae127e24e5405535ae250f98b76f92518df5b8ebcaf67ce"} Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048432 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" event={"ID":"a0f37a36-f502-425c-a9e5-035bf475742b","Type":"ContainerDied","Data":"9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5"} Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048446 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" event={"ID":"a0f37a36-f502-425c-a9e5-035bf475742b","Type":"ContainerDied","Data":"7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14"} Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048458 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" event={"ID":"a0f37a36-f502-425c-a9e5-035bf475742b","Type":"ContainerDied","Data":"0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352"} Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048470 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" event={"ID":"a0f37a36-f502-425c-a9e5-035bf475742b","Type":"ContainerDied","Data":"143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36"} Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048496 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" event={"ID":"a0f37a36-f502-425c-a9e5-035bf475742b","Type":"ContainerDied","Data":"ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5"} Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048509 4606 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8"} Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048520 4606 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5"} Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048527 4606 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14"} Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048534 4606 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352"} Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048540 4606 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36"} Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048546 4606 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5"} Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048552 4606 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe"} Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048563 4606 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03"} Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048570 4606 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6"} Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048579 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" event={"ID":"a0f37a36-f502-425c-a9e5-035bf475742b","Type":"ContainerDied","Data":"9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe"} Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048610 4606 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"17e37a119bb4d7f97ae127e24e5405535ae250f98b76f92518df5b8ebcaf67ce"} Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048619 4606 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8"} Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048625 4606 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5"} Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048632 4606 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14"} Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048638 4606 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352"} Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048645 4606 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36"} Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048651 4606 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5"} Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048657 4606 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe"} Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048663 4606 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03"} Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048670 4606 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6"} Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048698 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" event={"ID":"a0f37a36-f502-425c-a9e5-035bf475742b","Type":"ContainerDied","Data":"b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03"} 
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048709 4606 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"17e37a119bb4d7f97ae127e24e5405535ae250f98b76f92518df5b8ebcaf67ce"}
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048716 4606 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8"}
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048722 4606 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5"}
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048729 4606 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14"}
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048735 4606 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352"}
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048742 4606 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36"}
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048748 4606 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5"}
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048754 4606 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe"}
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048760 4606 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03"}
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048766 4606 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6"}
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048774 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd" event={"ID":"a0f37a36-f502-425c-a9e5-035bf475742b","Type":"ContainerDied","Data":"5fbd4cd45169e64e1e8be6e728ae32c008d68bfcb32fe8a3df6fb7cacaa71497"}
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048784 4606 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"17e37a119bb4d7f97ae127e24e5405535ae250f98b76f92518df5b8ebcaf67ce"}
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048791 4606 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8"}
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048797 4606 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5"}
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048803 4606 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14"}
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048809 4606 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352"}
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048815 4606 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36"}
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048821 4606 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5"}
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048827 4606 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe"}
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048833 4606 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03"}
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048840 4606 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6"}
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048856 4606 scope.go:117] "RemoveContainer" containerID="17e37a119bb4d7f97ae127e24e5405535ae250f98b76f92518df5b8ebcaf67ce"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.048879 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-j9tqd"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.056922 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ftp8l_2692a882-403f-4842-b756-53ed7a38fa7f/kube-multus/2.log"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.058312 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ftp8l_2692a882-403f-4842-b756-53ed7a38fa7f/kube-multus/1.log"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.058356 4606 generic.go:334] "Generic (PLEG): container finished" podID="2692a882-403f-4842-b756-53ed7a38fa7f" containerID="fa71a124f07e22c3f4acfecce21b90563f36930a95ef0f7632c84065526c172e" exitCode=2
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.058400 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ftp8l" event={"ID":"2692a882-403f-4842-b756-53ed7a38fa7f","Type":"ContainerDied","Data":"fa71a124f07e22c3f4acfecce21b90563f36930a95ef0f7632c84065526c172e"}
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.058470 4606 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e1c017382ddcc3e55e81de6fed924558f181ebf81f3e1cdf9323f5c7e89401f1"}
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.059197 4606 scope.go:117] "RemoveContainer" containerID="fa71a124f07e22c3f4acfecce21b90563f36930a95ef0f7632c84065526c172e"
Jan 23 17:02:11 crc kubenswrapper[4606]: E0123 17:02:11.059698 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-ftp8l_openshift-multus(2692a882-403f-4842-b756-53ed7a38fa7f)\"" pod="openshift-multus/multus-ftp8l" podUID="2692a882-403f-4842-b756-53ed7a38fa7f"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.061098 4606 generic.go:334] "Generic (PLEG): container finished" podID="ef3a0177-9035-456d-8c02-b5ed5a76b78d" containerID="139277bd6927b7a6510d09f94cdb166023a29da064891d75381ec201f86867b8" exitCode=0
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.061130 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" event={"ID":"ef3a0177-9035-456d-8c02-b5ed5a76b78d","Type":"ContainerDied","Data":"139277bd6927b7a6510d09f94cdb166023a29da064891d75381ec201f86867b8"}
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.061157 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" event={"ID":"ef3a0177-9035-456d-8c02-b5ed5a76b78d","Type":"ContainerStarted","Data":"45dda4b8d71c64598e55762973aa67c07443ed83be51d2e250a229f85fc1ad55"}
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.079158 4606 scope.go:117] "RemoveContainer" containerID="7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.103966 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-j9tqd"]
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.111982 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-j9tqd"]
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.113339 4606 scope.go:117] "RemoveContainer" containerID="9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.162626 4606 scope.go:117] "RemoveContainer" containerID="7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.174476 4606 scope.go:117] "RemoveContainer" containerID="0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.201491 4606 scope.go:117] "RemoveContainer" containerID="143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.217168 4606 scope.go:117] "RemoveContainer" containerID="ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.231340 4606 scope.go:117] "RemoveContainer" containerID="9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.248194 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-z99zq"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.260300 4606 scope.go:117] "RemoveContainer" containerID="b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.281843 4606 scope.go:117] "RemoveContainer" containerID="be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.307872 4606 scope.go:117] "RemoveContainer" containerID="17e37a119bb4d7f97ae127e24e5405535ae250f98b76f92518df5b8ebcaf67ce"
Jan 23 17:02:11 crc kubenswrapper[4606]: E0123 17:02:11.308290 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17e37a119bb4d7f97ae127e24e5405535ae250f98b76f92518df5b8ebcaf67ce\": container with ID starting with 17e37a119bb4d7f97ae127e24e5405535ae250f98b76f92518df5b8ebcaf67ce not found: ID does not exist" containerID="17e37a119bb4d7f97ae127e24e5405535ae250f98b76f92518df5b8ebcaf67ce"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.308358 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17e37a119bb4d7f97ae127e24e5405535ae250f98b76f92518df5b8ebcaf67ce"} err="failed to get container status \"17e37a119bb4d7f97ae127e24e5405535ae250f98b76f92518df5b8ebcaf67ce\": rpc error: code = NotFound desc = could not find container \"17e37a119bb4d7f97ae127e24e5405535ae250f98b76f92518df5b8ebcaf67ce\": container with ID starting with 17e37a119bb4d7f97ae127e24e5405535ae250f98b76f92518df5b8ebcaf67ce not found: ID does not exist"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.308389 4606 scope.go:117] "RemoveContainer" containerID="7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8"
Jan 23 17:02:11 crc kubenswrapper[4606]: E0123 17:02:11.309045 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8\": container with ID starting with 7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8 not found: ID does not exist" containerID="7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.309075 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8"} err="failed to get container status \"7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8\": rpc error: code = NotFound desc = could not find container \"7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8\": container with ID starting with 7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8 not found: ID does not exist"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.309093 4606 scope.go:117] "RemoveContainer" containerID="9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5"
Jan 23 17:02:11 crc kubenswrapper[4606]: E0123 17:02:11.309313 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5\": container with ID starting with 9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5 not found: ID does not exist" containerID="9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.309342 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5"} err="failed to get container status \"9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5\": rpc error: code = NotFound desc = could not find container \"9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5\": container with ID starting with 9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5 not found: ID does not exist"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.309362 4606 scope.go:117] "RemoveContainer" containerID="7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14"
Jan 23 17:02:11 crc kubenswrapper[4606]: E0123 17:02:11.309773 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14\": container with ID starting with 7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14 not found: ID does not exist" containerID="7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.309803 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14"} err="failed to get container status \"7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14\": rpc error: code = NotFound desc = could not find container \"7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14\": container with ID starting with 7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14 not found: ID does not exist"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.309821 4606 scope.go:117] "RemoveContainer" containerID="0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352"
Jan 23 17:02:11 crc kubenswrapper[4606]: E0123 17:02:11.310018 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352\": container with ID starting with 0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352 not found: ID does not exist" containerID="0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.310045 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352"} err="failed to get container status \"0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352\": rpc error: code = NotFound desc = could not find container \"0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352\": container with ID starting with 0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352 not found: ID does not exist"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.310062 4606 scope.go:117] "RemoveContainer" containerID="143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36"
Jan 23 17:02:11 crc kubenswrapper[4606]: E0123 17:02:11.310258 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36\": container with ID starting with 143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36 not found: ID does not exist" containerID="143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.310284 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36"} err="failed to get container status \"143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36\": rpc error: code = NotFound desc = could not find container \"143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36\": container with ID starting with 143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36 not found: ID does not exist"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.310302 4606 scope.go:117] "RemoveContainer" containerID="ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5"
Jan 23 17:02:11 crc kubenswrapper[4606]: E0123 17:02:11.310494 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5\": container with ID starting with ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5 not found: ID does not exist" containerID="ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.310516 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5"} err="failed to get container status \"ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5\": rpc error: code = NotFound desc = could not find container \"ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5\": container with ID starting with ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5 not found: ID does not exist"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.310531 4606 scope.go:117] "RemoveContainer" containerID="9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe"
Jan 23 17:02:11 crc kubenswrapper[4606]: E0123 17:02:11.310722 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe\": container with ID starting with 9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe not found: ID does not exist" containerID="9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.310742 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe"} err="failed to get container status \"9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe\": rpc error: code = NotFound desc = could not find container \"9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe\": container with ID starting with 9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe not found: ID does not exist"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.310753 4606 scope.go:117] "RemoveContainer" containerID="b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03"
Jan 23 17:02:11 crc kubenswrapper[4606]: E0123 17:02:11.310898 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03\": container with ID starting with b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03 not found: ID does not exist" containerID="b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.310916 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03"} err="failed to get container status \"b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03\": rpc error: code = NotFound desc = could not find container \"b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03\": container with ID starting with b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03 not found: ID does not exist"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.310929 4606 scope.go:117] "RemoveContainer" containerID="be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6"
Jan 23 17:02:11 crc kubenswrapper[4606]: E0123 17:02:11.311051 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\": container with ID starting with be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6 not found: ID does not exist" containerID="be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.311070 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6"} err="failed to get container status \"be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\": rpc error: code = NotFound desc = could not find container \"be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\": container with ID starting with be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6 not found: ID does not exist"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.311081 4606 scope.go:117] "RemoveContainer" containerID="17e37a119bb4d7f97ae127e24e5405535ae250f98b76f92518df5b8ebcaf67ce"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.311397 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17e37a119bb4d7f97ae127e24e5405535ae250f98b76f92518df5b8ebcaf67ce"} err="failed to get container status \"17e37a119bb4d7f97ae127e24e5405535ae250f98b76f92518df5b8ebcaf67ce\": rpc error: code = NotFound desc = could not find container \"17e37a119bb4d7f97ae127e24e5405535ae250f98b76f92518df5b8ebcaf67ce\": container with ID starting with 17e37a119bb4d7f97ae127e24e5405535ae250f98b76f92518df5b8ebcaf67ce not found: ID does not exist"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.311418 4606 scope.go:117] "RemoveContainer" containerID="7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.311702 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8"} err="failed to get container status \"7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8\": rpc error: code = NotFound desc = could not find container \"7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8\": container with ID starting with 7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8 not found: ID does not exist"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.311815 4606 scope.go:117] "RemoveContainer" containerID="9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.312211 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5"} err="failed to get container status \"9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5\": rpc error: code = NotFound desc = could not find container \"9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5\": container with ID starting with 9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5 not found: ID does not exist"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.312238 4606 scope.go:117] "RemoveContainer" containerID="7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.312454 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14"} err="failed to get container status \"7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14\": rpc error: code = NotFound desc = could not find container \"7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14\": container with ID starting with 7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14 not found: ID does not exist"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.312480 4606 scope.go:117] "RemoveContainer" containerID="0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.312735 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352"} err="failed to get container status \"0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352\": rpc error: code = NotFound desc = could not find container \"0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352\": container with ID starting with 0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352 not found: ID does not exist"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.312828 4606 scope.go:117] "RemoveContainer" containerID="143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.313141 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36"} err="failed to get container status \"143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36\": rpc error: code = NotFound desc = could not find container \"143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36\": container with ID starting with 143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36 not found: ID does not exist"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.313167 4606 scope.go:117] "RemoveContainer" containerID="ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.313352 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5"} err="failed to get container status \"ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5\": rpc error: code = NotFound desc = could not find container \"ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5\": container with ID starting with ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5 not found: ID does not exist"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.313379 4606 scope.go:117] "RemoveContainer" containerID="9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.313639 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe"} err="failed to get container status \"9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe\": rpc error: code = NotFound desc = could not find container \"9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe\": container with ID starting with 9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe not found: ID does not exist"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.313743 4606 scope.go:117] "RemoveContainer" containerID="b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.314039 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03"} err="failed to get container status \"b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03\": rpc error: code = NotFound desc = could not find container \"b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03\": container with ID starting with b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03 not found: ID does not exist"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.314059 4606 scope.go:117] "RemoveContainer" containerID="be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.314267 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6"} err="failed to get container status \"be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\": rpc error: code = NotFound desc = could not find container \"be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\": container with ID starting with be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6 not found: ID does not exist"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.314291 4606 scope.go:117] "RemoveContainer" containerID="17e37a119bb4d7f97ae127e24e5405535ae250f98b76f92518df5b8ebcaf67ce"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.314499 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17e37a119bb4d7f97ae127e24e5405535ae250f98b76f92518df5b8ebcaf67ce"} err="failed to get container status \"17e37a119bb4d7f97ae127e24e5405535ae250f98b76f92518df5b8ebcaf67ce\": rpc error: code = NotFound desc = could not find container \"17e37a119bb4d7f97ae127e24e5405535ae250f98b76f92518df5b8ebcaf67ce\": container with ID starting with 17e37a119bb4d7f97ae127e24e5405535ae250f98b76f92518df5b8ebcaf67ce not found: ID does not exist"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.314526 4606 scope.go:117] "RemoveContainer" containerID="7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.314771 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8"} err="failed to get container status \"7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8\": rpc error: code = NotFound desc = could not find container \"7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8\": container with ID starting with 7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8 not found: ID does not exist"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.314868 4606 scope.go:117] "RemoveContainer" containerID="9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.315152 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5"} err="failed to get container status \"9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5\": rpc error: code = NotFound desc = could not find container \"9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5\": container with ID starting with 9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5 not found: ID does not exist"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.315181 4606 scope.go:117] "RemoveContainer" containerID="7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.315397 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14"} err="failed to get container status \"7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14\": rpc error: code = NotFound desc = could not find container \"7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14\": container with ID starting with 7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14 not found: ID does not exist"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.315483 4606 scope.go:117] "RemoveContainer" containerID="0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.315836 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352"} err="failed to get container status \"0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352\": rpc error: code = NotFound desc = could not find container \"0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352\": container with ID starting with 0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352 not found: ID does not exist"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.315862 4606 scope.go:117] "RemoveContainer" containerID="143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.316107 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36"} err="failed to get container status \"143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36\": rpc error: code = NotFound desc = could not find container \"143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36\": container with ID starting with 143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36 not found: ID does not exist"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.316195 4606 scope.go:117] "RemoveContainer" containerID="ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.316477 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5"} err="failed to get container status \"ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5\": rpc error: code = NotFound desc = could not find container \"ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5\": container with ID starting with ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5 not found: ID does not exist"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.316501 4606 scope.go:117] "RemoveContainer" containerID="9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.316769 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe"} err="failed to get container status \"9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe\": rpc error: code = NotFound desc = could not find container \"9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe\": container with ID starting with 9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe not found: ID does not exist"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.316788 4606 scope.go:117] "RemoveContainer" containerID="b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03"
Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.317158 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03"} err="failed to get container status \"b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03\": rpc error: code = NotFound desc = could not find
container \"b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03\": container with ID starting with b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03 not found: ID does not exist" Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.317180 4606 scope.go:117] "RemoveContainer" containerID="be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6" Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.317419 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6"} err="failed to get container status \"be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\": rpc error: code = NotFound desc = could not find container \"be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\": container with ID starting with be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6 not found: ID does not exist" Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.317511 4606 scope.go:117] "RemoveContainer" containerID="17e37a119bb4d7f97ae127e24e5405535ae250f98b76f92518df5b8ebcaf67ce" Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.317862 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17e37a119bb4d7f97ae127e24e5405535ae250f98b76f92518df5b8ebcaf67ce"} err="failed to get container status \"17e37a119bb4d7f97ae127e24e5405535ae250f98b76f92518df5b8ebcaf67ce\": rpc error: code = NotFound desc = could not find container \"17e37a119bb4d7f97ae127e24e5405535ae250f98b76f92518df5b8ebcaf67ce\": container with ID starting with 17e37a119bb4d7f97ae127e24e5405535ae250f98b76f92518df5b8ebcaf67ce not found: ID does not exist" Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.317891 4606 scope.go:117] "RemoveContainer" containerID="7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8" Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.318419 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8"} err="failed to get container status \"7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8\": rpc error: code = NotFound desc = could not find container \"7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8\": container with ID starting with 7f5fa59a84706d708509334251d7e40206df0007e5cb94d73146a025ea2e2af8 not found: ID does not exist" Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.318445 4606 scope.go:117] "RemoveContainer" containerID="9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5" Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.318847 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5"} err="failed to get container status \"9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5\": rpc error: code = NotFound desc = could not find container \"9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5\": container with ID starting with 9cb5ffdac5944bd540a7b8f4754f89861a609695fa4b8bbf46b4fcc94c1a0bc5 not found: ID does not exist" Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.319059 4606 scope.go:117] "RemoveContainer" containerID="7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14" Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.319472 4606 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14"} err="failed to get container status \"7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14\": rpc error: code = NotFound desc = could not find container \"7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14\": container with ID starting with 7318f2b965cb096f56705ec977be8cc80d35a40eafc9274014f8da706f95af14 not found: ID does not exist" Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.319528 4606 scope.go:117] "RemoveContainer" containerID="0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352" Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.319947 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352"} err="failed to get container status \"0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352\": rpc error: code = NotFound desc = could not find container \"0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352\": container with ID starting with 0eff92c72703b48c36e7b736b1ef3d96f8352a91308e204b792233c2092e5352 not found: ID does not exist" Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.320357 4606 scope.go:117] "RemoveContainer" containerID="143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36" Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.320702 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36"} err="failed to get container status \"143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36\": rpc error: code = NotFound desc = could not find container \"143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36\": container with ID starting with 143ce46422f721f9c4cfe2fdfe50a68c1bf8c0e215fa33126d518ced22a08d36 not found: ID does not exist" Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.320729 4606 scope.go:117] "RemoveContainer" containerID="ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5" Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.320962 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5"} err="failed to get container status \"ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5\": rpc error: code = NotFound desc = could not find container \"ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5\": container with ID starting with ab1444ae45920357f78f93dc5d82b73b6b22d708297102699bc63d6135a228e5 not found: ID does not exist" Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.321052 4606 scope.go:117] "RemoveContainer" containerID="9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe" Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.321420 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe"} err="failed to get container status \"9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe\": rpc error: code = NotFound desc = could not find container \"9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe\": container with ID starting with 
9c4aa1026bfc45c328675d17cc5c2b8ab1e064c33c694fe5bd9300e0273a18fe not found: ID does not exist" Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.321444 4606 scope.go:117] "RemoveContainer" containerID="b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03" Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.321627 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03"} err="failed to get container status \"b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03\": rpc error: code = NotFound desc = could not find container \"b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03\": container with ID starting with b82187e2e2b9f27a37bcf507d89f6103d868846b66d5aa70781f8f7f0c611f03 not found: ID does not exist" Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.321642 4606 scope.go:117] "RemoveContainer" containerID="be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6" Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.321792 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6"} err="failed to get container status \"be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\": rpc error: code = NotFound desc = could not find container \"be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6\": container with ID starting with be47cec019fce2ddfe7ed679673e57298329d67811b7cbacedff03b480fdead6 not found: ID does not exist" Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.321806 4606 scope.go:117] "RemoveContainer" containerID="17e37a119bb4d7f97ae127e24e5405535ae250f98b76f92518df5b8ebcaf67ce" Jan 23 17:02:11 crc kubenswrapper[4606]: I0123 17:02:11.321954 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17e37a119bb4d7f97ae127e24e5405535ae250f98b76f92518df5b8ebcaf67ce"} err="failed to get container status \"17e37a119bb4d7f97ae127e24e5405535ae250f98b76f92518df5b8ebcaf67ce\": rpc error: code = NotFound desc = could not find container \"17e37a119bb4d7f97ae127e24e5405535ae250f98b76f92518df5b8ebcaf67ce\": container with ID starting with 17e37a119bb4d7f97ae127e24e5405535ae250f98b76f92518df5b8ebcaf67ce not found: ID does not exist" Jan 23 17:02:12 crc kubenswrapper[4606]: I0123 17:02:12.071986 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" event={"ID":"ef3a0177-9035-456d-8c02-b5ed5a76b78d","Type":"ContainerStarted","Data":"80fb211e39b0a7a8783d0b52726b76b1f5b10215c524de79821c855c853f597d"} Jan 23 17:02:12 crc kubenswrapper[4606]: I0123 17:02:12.072344 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" event={"ID":"ef3a0177-9035-456d-8c02-b5ed5a76b78d","Type":"ContainerStarted","Data":"7091cadf187d54c4819774a2820e061a38a9207a89f00faf24fa2f34b8c0fba9"} Jan 23 17:02:12 crc kubenswrapper[4606]: I0123 17:02:12.072360 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" event={"ID":"ef3a0177-9035-456d-8c02-b5ed5a76b78d","Type":"ContainerStarted","Data":"596e10ded5583e8a647df0d4d7f3f2990a07fcf3849b2630b772fd3a8a0db98a"} Jan 23 17:02:12 crc kubenswrapper[4606]: I0123 17:02:12.072374 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" 
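The RemoveContainer / "DeleteContainer returned error" pairs above are cleanup passes racing containers that CRI-O has already removed, so every status lookup comes back as gRPC NotFound. A minimal sketch of treating that status as idempotent success; the wrapper and the fake client below are invented for illustration, not kubelet's pod_container_deletor code:

```go
// Sketch: classify the "rpc error: code = NotFound" failures logged above.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer (hypothetical) treats NotFound as success so repeated
// cleanup passes stay idempotent instead of re-logging "ID does not exist".
func removeContainer(remove func(id string) error, id string) error {
	err := remove(id)
	if err == nil || status.Code(err) == codes.NotFound {
		return nil
	}
	return fmt.Errorf("failed to remove container %q: %w", id, err)
}

func main() {
	alreadyGone := func(id string) error {
		return status.Error(codes.NotFound, "could not find container "+id)
	}
	if err := removeContainer(alreadyGone, "17e37a119bb4"); err != nil {
		fmt.Println("unexpected:", err)
	} else {
		fmt.Println("container already removed; nothing left to do")
	}
}
```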
event={"ID":"ef3a0177-9035-456d-8c02-b5ed5a76b78d","Type":"ContainerStarted","Data":"56f338f7d6a8d0c64bed09b66127b93d32ba7092c46bab0f623dabff10b9974a"} Jan 23 17:02:12 crc kubenswrapper[4606]: I0123 17:02:12.072384 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" event={"ID":"ef3a0177-9035-456d-8c02-b5ed5a76b78d","Type":"ContainerStarted","Data":"53e1fedf98ae9cac3d797f38ea99436eba256670b52d7fc805c29bdb42650e0c"} Jan 23 17:02:12 crc kubenswrapper[4606]: I0123 17:02:12.072394 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" event={"ID":"ef3a0177-9035-456d-8c02-b5ed5a76b78d","Type":"ContainerStarted","Data":"9566bfff22346980114450e19d3c2918308c4677eca933af32a794e893510b0e"} Jan 23 17:02:12 crc kubenswrapper[4606]: I0123 17:02:12.371335 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0f37a36-f502-425c-a9e5-035bf475742b" path="/var/lib/kubelet/pods/a0f37a36-f502-425c-a9e5-035bf475742b/volumes" Jan 23 17:02:14 crc kubenswrapper[4606]: I0123 17:02:14.090145 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" event={"ID":"ef3a0177-9035-456d-8c02-b5ed5a76b78d","Type":"ContainerStarted","Data":"ada4673bf85538c34d1a4fb6f742e0e7727cb8adb550beffb6e97f0d13caa14d"} Jan 23 17:02:17 crc kubenswrapper[4606]: I0123 17:02:17.109947 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" event={"ID":"ef3a0177-9035-456d-8c02-b5ed5a76b78d","Type":"ContainerStarted","Data":"2a0f7d17130bee9b14819681ad4518dcaf4c1af308f019bf90b7f335858de9ef"} Jan 23 17:02:17 crc kubenswrapper[4606]: I0123 17:02:17.110522 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:17 crc kubenswrapper[4606]: I0123 17:02:17.110534 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:17 crc kubenswrapper[4606]: I0123 17:02:17.110545 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:17 crc kubenswrapper[4606]: I0123 17:02:17.143698 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:17 crc kubenswrapper[4606]: I0123 17:02:17.149558 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" podStartSLOduration=7.149528292 podStartE2EDuration="7.149528292s" podCreationTimestamp="2026-01-23 17:02:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:02:17.144491173 +0000 UTC m=+571.144683579" watchObservedRunningTime="2026-01-23 17:02:17.149528292 +0000 UTC m=+571.149720698" Jan 23 17:02:17 crc kubenswrapper[4606]: I0123 17:02:17.153847 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:23 crc kubenswrapper[4606]: I0123 17:02:23.365788 4606 scope.go:117] "RemoveContainer" containerID="fa71a124f07e22c3f4acfecce21b90563f36930a95ef0f7632c84065526c172e" Jan 23 17:02:23 crc kubenswrapper[4606]: E0123 17:02:23.366680 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: 
\"back-off 20s restarting failed container=kube-multus pod=multus-ftp8l_openshift-multus(2692a882-403f-4842-b756-53ed7a38fa7f)\"" pod="openshift-multus/multus-ftp8l" podUID="2692a882-403f-4842-b756-53ed7a38fa7f" Jan 23 17:02:29 crc kubenswrapper[4606]: I0123 17:02:29.933759 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:02:29 crc kubenswrapper[4606]: I0123 17:02:29.934761 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:02:34 crc kubenswrapper[4606]: I0123 17:02:34.365360 4606 scope.go:117] "RemoveContainer" containerID="fa71a124f07e22c3f4acfecce21b90563f36930a95ef0f7632c84065526c172e" Jan 23 17:02:35 crc kubenswrapper[4606]: I0123 17:02:35.215516 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ftp8l_2692a882-403f-4842-b756-53ed7a38fa7f/kube-multus/2.log" Jan 23 17:02:35 crc kubenswrapper[4606]: I0123 17:02:35.216364 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ftp8l_2692a882-403f-4842-b756-53ed7a38fa7f/kube-multus/1.log" Jan 23 17:02:35 crc kubenswrapper[4606]: I0123 17:02:35.216445 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ftp8l" event={"ID":"2692a882-403f-4842-b756-53ed7a38fa7f","Type":"ContainerStarted","Data":"0dbe802c93c3154eb8598a7bf7e319458161d4996676b66e5da6a86ed4b44a3d"} Jan 23 17:02:40 crc kubenswrapper[4606]: I0123 17:02:40.746332 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cppjm" Jan 23 17:02:46 crc kubenswrapper[4606]: I0123 17:02:46.545930 4606 scope.go:117] "RemoveContainer" containerID="e1c017382ddcc3e55e81de6fed924558f181ebf81f3e1cdf9323f5c7e89401f1" Jan 23 17:02:47 crc kubenswrapper[4606]: I0123 17:02:47.289666 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ftp8l_2692a882-403f-4842-b756-53ed7a38fa7f/kube-multus/2.log" Jan 23 17:02:56 crc kubenswrapper[4606]: I0123 17:02:56.691739 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mtdn8"] Jan 23 17:02:56 crc kubenswrapper[4606]: I0123 17:02:56.693422 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mtdn8" Jan 23 17:02:56 crc kubenswrapper[4606]: I0123 17:02:56.695760 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 23 17:02:56 crc kubenswrapper[4606]: I0123 17:02:56.702486 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mtdn8"] Jan 23 17:02:56 crc kubenswrapper[4606]: I0123 17:02:56.883300 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g84lv\" (UniqueName: \"kubernetes.io/projected/82b8bc6e-7be2-4e25-89f7-6cc353562560-kube-api-access-g84lv\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mtdn8\" (UID: \"82b8bc6e-7be2-4e25-89f7-6cc353562560\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mtdn8" Jan 23 17:02:56 crc kubenswrapper[4606]: I0123 17:02:56.883354 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/82b8bc6e-7be2-4e25-89f7-6cc353562560-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mtdn8\" (UID: \"82b8bc6e-7be2-4e25-89f7-6cc353562560\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mtdn8" Jan 23 17:02:56 crc kubenswrapper[4606]: I0123 17:02:56.883431 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/82b8bc6e-7be2-4e25-89f7-6cc353562560-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mtdn8\" (UID: \"82b8bc6e-7be2-4e25-89f7-6cc353562560\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mtdn8" Jan 23 17:02:56 crc kubenswrapper[4606]: I0123 17:02:56.984536 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g84lv\" (UniqueName: \"kubernetes.io/projected/82b8bc6e-7be2-4e25-89f7-6cc353562560-kube-api-access-g84lv\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mtdn8\" (UID: \"82b8bc6e-7be2-4e25-89f7-6cc353562560\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mtdn8" Jan 23 17:02:56 crc kubenswrapper[4606]: I0123 17:02:56.984751 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/82b8bc6e-7be2-4e25-89f7-6cc353562560-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mtdn8\" (UID: \"82b8bc6e-7be2-4e25-89f7-6cc353562560\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mtdn8" Jan 23 17:02:56 crc kubenswrapper[4606]: I0123 17:02:56.984878 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/82b8bc6e-7be2-4e25-89f7-6cc353562560-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mtdn8\" (UID: \"82b8bc6e-7be2-4e25-89f7-6cc353562560\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mtdn8" Jan 23 17:02:56 crc kubenswrapper[4606]: I0123 17:02:56.985411 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/82b8bc6e-7be2-4e25-89f7-6cc353562560-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mtdn8\" (UID: \"82b8bc6e-7be2-4e25-89f7-6cc353562560\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mtdn8" Jan 23 17:02:56 crc kubenswrapper[4606]: I0123 17:02:56.985427 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/82b8bc6e-7be2-4e25-89f7-6cc353562560-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mtdn8\" (UID: \"82b8bc6e-7be2-4e25-89f7-6cc353562560\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mtdn8" Jan 23 17:02:57 crc kubenswrapper[4606]: I0123 17:02:57.006823 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g84lv\" (UniqueName: \"kubernetes.io/projected/82b8bc6e-7be2-4e25-89f7-6cc353562560-kube-api-access-g84lv\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mtdn8\" (UID: \"82b8bc6e-7be2-4e25-89f7-6cc353562560\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mtdn8" Jan 23 17:02:57 crc kubenswrapper[4606]: I0123 17:02:57.010460 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mtdn8" Jan 23 17:02:57 crc kubenswrapper[4606]: I0123 17:02:57.222148 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mtdn8"] Jan 23 17:02:57 crc kubenswrapper[4606]: I0123 17:02:57.343711 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mtdn8" event={"ID":"82b8bc6e-7be2-4e25-89f7-6cc353562560","Type":"ContainerStarted","Data":"c21a657fdb193f36533b5c1349e9b49446c91920afb1696367c3fd37cdbd2f97"} Jan 23 17:02:58 crc kubenswrapper[4606]: I0123 17:02:58.350647 4606 generic.go:334] "Generic (PLEG): container finished" podID="82b8bc6e-7be2-4e25-89f7-6cc353562560" containerID="72e7b178372625de73b26441fcbdc8f4796fa6d1db8e47665a1b02b3d552045b" exitCode=0 Jan 23 17:02:58 crc kubenswrapper[4606]: I0123 17:02:58.350818 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mtdn8" event={"ID":"82b8bc6e-7be2-4e25-89f7-6cc353562560","Type":"ContainerDied","Data":"72e7b178372625de73b26441fcbdc8f4796fa6d1db8e47665a1b02b3d552045b"} Jan 23 17:02:59 crc kubenswrapper[4606]: I0123 17:02:59.933840 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:02:59 crc kubenswrapper[4606]: I0123 17:02:59.934191 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:03:00 crc kubenswrapper[4606]: I0123 17:03:00.364619 4606 generic.go:334] "Generic (PLEG): container finished" podID="82b8bc6e-7be2-4e25-89f7-6cc353562560" 
containerID="5dfd144135887276dd6f4048df85b14529a5053cc507db8629a06a495f7b1631" exitCode=0 Jan 23 17:03:00 crc kubenswrapper[4606]: I0123 17:03:00.373644 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mtdn8" event={"ID":"82b8bc6e-7be2-4e25-89f7-6cc353562560","Type":"ContainerDied","Data":"5dfd144135887276dd6f4048df85b14529a5053cc507db8629a06a495f7b1631"} Jan 23 17:03:01 crc kubenswrapper[4606]: I0123 17:03:01.375448 4606 generic.go:334] "Generic (PLEG): container finished" podID="82b8bc6e-7be2-4e25-89f7-6cc353562560" containerID="04950d3d52f9675deb14f7bd013dee917446c6cce66ca9eacbaa84486f6797f3" exitCode=0 Jan 23 17:03:01 crc kubenswrapper[4606]: I0123 17:03:01.375503 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mtdn8" event={"ID":"82b8bc6e-7be2-4e25-89f7-6cc353562560","Type":"ContainerDied","Data":"04950d3d52f9675deb14f7bd013dee917446c6cce66ca9eacbaa84486f6797f3"} Jan 23 17:03:02 crc kubenswrapper[4606]: I0123 17:03:02.628949 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mtdn8" Jan 23 17:03:02 crc kubenswrapper[4606]: I0123 17:03:02.670133 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/82b8bc6e-7be2-4e25-89f7-6cc353562560-bundle\") pod \"82b8bc6e-7be2-4e25-89f7-6cc353562560\" (UID: \"82b8bc6e-7be2-4e25-89f7-6cc353562560\") " Jan 23 17:03:02 crc kubenswrapper[4606]: I0123 17:03:02.670203 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/82b8bc6e-7be2-4e25-89f7-6cc353562560-util\") pod \"82b8bc6e-7be2-4e25-89f7-6cc353562560\" (UID: \"82b8bc6e-7be2-4e25-89f7-6cc353562560\") " Jan 23 17:03:02 crc kubenswrapper[4606]: I0123 17:03:02.674260 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g84lv\" (UniqueName: \"kubernetes.io/projected/82b8bc6e-7be2-4e25-89f7-6cc353562560-kube-api-access-g84lv\") pod \"82b8bc6e-7be2-4e25-89f7-6cc353562560\" (UID: \"82b8bc6e-7be2-4e25-89f7-6cc353562560\") " Jan 23 17:03:02 crc kubenswrapper[4606]: I0123 17:03:02.670760 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82b8bc6e-7be2-4e25-89f7-6cc353562560-bundle" (OuterVolumeSpecName: "bundle") pod "82b8bc6e-7be2-4e25-89f7-6cc353562560" (UID: "82b8bc6e-7be2-4e25-89f7-6cc353562560"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:03:02 crc kubenswrapper[4606]: I0123 17:03:02.674693 4606 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/82b8bc6e-7be2-4e25-89f7-6cc353562560-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:03:02 crc kubenswrapper[4606]: I0123 17:03:02.681785 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82b8bc6e-7be2-4e25-89f7-6cc353562560-kube-api-access-g84lv" (OuterVolumeSpecName: "kube-api-access-g84lv") pod "82b8bc6e-7be2-4e25-89f7-6cc353562560" (UID: "82b8bc6e-7be2-4e25-89f7-6cc353562560"). InnerVolumeSpecName "kube-api-access-g84lv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:03:02 crc kubenswrapper[4606]: I0123 17:03:02.775673 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g84lv\" (UniqueName: \"kubernetes.io/projected/82b8bc6e-7be2-4e25-89f7-6cc353562560-kube-api-access-g84lv\") on node \"crc\" DevicePath \"\"" Jan 23 17:03:02 crc kubenswrapper[4606]: I0123 17:03:02.805638 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82b8bc6e-7be2-4e25-89f7-6cc353562560-util" (OuterVolumeSpecName: "util") pod "82b8bc6e-7be2-4e25-89f7-6cc353562560" (UID: "82b8bc6e-7be2-4e25-89f7-6cc353562560"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:03:02 crc kubenswrapper[4606]: I0123 17:03:02.877563 4606 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/82b8bc6e-7be2-4e25-89f7-6cc353562560-util\") on node \"crc\" DevicePath \"\"" Jan 23 17:03:03 crc kubenswrapper[4606]: I0123 17:03:03.388434 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mtdn8" event={"ID":"82b8bc6e-7be2-4e25-89f7-6cc353562560","Type":"ContainerDied","Data":"c21a657fdb193f36533b5c1349e9b49446c91920afb1696367c3fd37cdbd2f97"} Jan 23 17:03:03 crc kubenswrapper[4606]: I0123 17:03:03.388481 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c21a657fdb193f36533b5c1349e9b49446c91920afb1696367c3fd37cdbd2f97" Jan 23 17:03:03 crc kubenswrapper[4606]: I0123 17:03:03.388488 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mtdn8" Jan 23 17:03:08 crc kubenswrapper[4606]: I0123 17:03:08.261920 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-pljkp"] Jan 23 17:03:08 crc kubenswrapper[4606]: E0123 17:03:08.263326 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82b8bc6e-7be2-4e25-89f7-6cc353562560" containerName="extract" Jan 23 17:03:08 crc kubenswrapper[4606]: I0123 17:03:08.263348 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="82b8bc6e-7be2-4e25-89f7-6cc353562560" containerName="extract" Jan 23 17:03:08 crc kubenswrapper[4606]: E0123 17:03:08.263365 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82b8bc6e-7be2-4e25-89f7-6cc353562560" containerName="util" Jan 23 17:03:08 crc kubenswrapper[4606]: I0123 17:03:08.263373 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="82b8bc6e-7be2-4e25-89f7-6cc353562560" containerName="util" Jan 23 17:03:08 crc kubenswrapper[4606]: E0123 17:03:08.263397 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82b8bc6e-7be2-4e25-89f7-6cc353562560" containerName="pull" Jan 23 17:03:08 crc kubenswrapper[4606]: I0123 17:03:08.263408 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="82b8bc6e-7be2-4e25-89f7-6cc353562560" containerName="pull" Jan 23 17:03:08 crc kubenswrapper[4606]: I0123 17:03:08.263546 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="82b8bc6e-7be2-4e25-89f7-6cc353562560" containerName="extract" Jan 23 17:03:08 crc kubenswrapper[4606]: I0123 17:03:08.264153 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-pljkp" Jan 23 17:03:08 crc kubenswrapper[4606]: I0123 17:03:08.266721 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-2d8qz" Jan 23 17:03:08 crc kubenswrapper[4606]: I0123 17:03:08.268846 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 23 17:03:08 crc kubenswrapper[4606]: I0123 17:03:08.269710 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 23 17:03:08 crc kubenswrapper[4606]: I0123 17:03:08.275133 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-pljkp"] Jan 23 17:03:08 crc kubenswrapper[4606]: I0123 17:03:08.345483 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clvpr\" (UniqueName: \"kubernetes.io/projected/880e0d16-fda7-42d7-8a25-423c4b200c8d-kube-api-access-clvpr\") pod \"nmstate-operator-646758c888-pljkp\" (UID: \"880e0d16-fda7-42d7-8a25-423c4b200c8d\") " pod="openshift-nmstate/nmstate-operator-646758c888-pljkp" Jan 23 17:03:08 crc kubenswrapper[4606]: I0123 17:03:08.447233 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clvpr\" (UniqueName: \"kubernetes.io/projected/880e0d16-fda7-42d7-8a25-423c4b200c8d-kube-api-access-clvpr\") pod \"nmstate-operator-646758c888-pljkp\" (UID: \"880e0d16-fda7-42d7-8a25-423c4b200c8d\") " pod="openshift-nmstate/nmstate-operator-646758c888-pljkp" Jan 23 17:03:08 crc kubenswrapper[4606]: I0123 17:03:08.466182 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clvpr\" (UniqueName: \"kubernetes.io/projected/880e0d16-fda7-42d7-8a25-423c4b200c8d-kube-api-access-clvpr\") pod \"nmstate-operator-646758c888-pljkp\" (UID: \"880e0d16-fda7-42d7-8a25-423c4b200c8d\") " pod="openshift-nmstate/nmstate-operator-646758c888-pljkp" Jan 23 17:03:08 crc kubenswrapper[4606]: I0123 17:03:08.583220 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-pljkp" Jan 23 17:03:09 crc kubenswrapper[4606]: I0123 17:03:09.018642 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-pljkp"] Jan 23 17:03:09 crc kubenswrapper[4606]: I0123 17:03:09.420310 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-pljkp" event={"ID":"880e0d16-fda7-42d7-8a25-423c4b200c8d","Type":"ContainerStarted","Data":"8e3b2abc37c1a0b0a531e562f714c7a08ac577076326ab8f681e079396ce8415"} Jan 23 17:03:12 crc kubenswrapper[4606]: I0123 17:03:12.435320 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-pljkp" event={"ID":"880e0d16-fda7-42d7-8a25-423c4b200c8d","Type":"ContainerStarted","Data":"c65d8400f7624c8a3d16502070b0a1f60f82f80831c4e624460ad384d40c9938"} Jan 23 17:03:12 crc kubenswrapper[4606]: I0123 17:03:12.456215 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-pljkp" podStartSLOduration=1.981421453 podStartE2EDuration="4.456195415s" podCreationTimestamp="2026-01-23 17:03:08 +0000 UTC" firstStartedPulling="2026-01-23 17:03:09.027082859 +0000 UTC m=+623.027275255" lastFinishedPulling="2026-01-23 17:03:11.501856811 +0000 UTC m=+625.502049217" observedRunningTime="2026-01-23 17:03:12.451063503 +0000 UTC m=+626.451255939" watchObservedRunningTime="2026-01-23 17:03:12.456195415 +0000 UTC m=+626.456387821" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.285485 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-ksm95"] Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.286900 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-ksm95" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.289275 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-5n9fm" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.300675 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-lctvz"] Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.301517 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lctvz" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.303045 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.310876 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-ksm95"] Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.324205 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-lctvz"] Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.356010 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-n8gtl"] Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.356925 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-n8gtl" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.358315 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcfd4\" (UniqueName: \"kubernetes.io/projected/e53998dd-17ef-4f3f-989b-2304bc9357aa-kube-api-access-gcfd4\") pod \"nmstate-handler-n8gtl\" (UID: \"e53998dd-17ef-4f3f-989b-2304bc9357aa\") " pod="openshift-nmstate/nmstate-handler-n8gtl" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.358369 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/e53998dd-17ef-4f3f-989b-2304bc9357aa-dbus-socket\") pod \"nmstate-handler-n8gtl\" (UID: \"e53998dd-17ef-4f3f-989b-2304bc9357aa\") " pod="openshift-nmstate/nmstate-handler-n8gtl" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.358424 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw2rw\" (UniqueName: \"kubernetes.io/projected/ac289554-15c0-4b1d-a8f9-7e1e1ccb93b8-kube-api-access-tw2rw\") pod \"nmstate-metrics-54757c584b-ksm95\" (UID: \"ac289554-15c0-4b1d-a8f9-7e1e1ccb93b8\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-ksm95" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.358477 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3cc45769-a4ca-4c0e-aafd-138134ecd07e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-lctvz\" (UID: \"3cc45769-a4ca-4c0e-aafd-138134ecd07e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lctvz" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.358505 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h992z\" (UniqueName: \"kubernetes.io/projected/3cc45769-a4ca-4c0e-aafd-138134ecd07e-kube-api-access-h992z\") pod \"nmstate-webhook-8474b5b9d8-lctvz\" (UID: \"3cc45769-a4ca-4c0e-aafd-138134ecd07e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lctvz" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.358531 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e53998dd-17ef-4f3f-989b-2304bc9357aa-nmstate-lock\") pod \"nmstate-handler-n8gtl\" (UID: \"e53998dd-17ef-4f3f-989b-2304bc9357aa\") " pod="openshift-nmstate/nmstate-handler-n8gtl" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.358565 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e53998dd-17ef-4f3f-989b-2304bc9357aa-ovs-socket\") pod \"nmstate-handler-n8gtl\" (UID: \"e53998dd-17ef-4f3f-989b-2304bc9357aa\") " pod="openshift-nmstate/nmstate-handler-n8gtl" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.459050 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h992z\" (UniqueName: \"kubernetes.io/projected/3cc45769-a4ca-4c0e-aafd-138134ecd07e-kube-api-access-h992z\") pod \"nmstate-webhook-8474b5b9d8-lctvz\" (UID: \"3cc45769-a4ca-4c0e-aafd-138134ecd07e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lctvz" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.459087 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" 
(UniqueName: \"kubernetes.io/secret/3cc45769-a4ca-4c0e-aafd-138134ecd07e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-lctvz\" (UID: \"3cc45769-a4ca-4c0e-aafd-138134ecd07e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lctvz" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.459109 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e53998dd-17ef-4f3f-989b-2304bc9357aa-nmstate-lock\") pod \"nmstate-handler-n8gtl\" (UID: \"e53998dd-17ef-4f3f-989b-2304bc9357aa\") " pod="openshift-nmstate/nmstate-handler-n8gtl" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.459136 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e53998dd-17ef-4f3f-989b-2304bc9357aa-ovs-socket\") pod \"nmstate-handler-n8gtl\" (UID: \"e53998dd-17ef-4f3f-989b-2304bc9357aa\") " pod="openshift-nmstate/nmstate-handler-n8gtl" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.459159 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/e53998dd-17ef-4f3f-989b-2304bc9357aa-dbus-socket\") pod \"nmstate-handler-n8gtl\" (UID: \"e53998dd-17ef-4f3f-989b-2304bc9357aa\") " pod="openshift-nmstate/nmstate-handler-n8gtl" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.459174 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcfd4\" (UniqueName: \"kubernetes.io/projected/e53998dd-17ef-4f3f-989b-2304bc9357aa-kube-api-access-gcfd4\") pod \"nmstate-handler-n8gtl\" (UID: \"e53998dd-17ef-4f3f-989b-2304bc9357aa\") " pod="openshift-nmstate/nmstate-handler-n8gtl" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.459211 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw2rw\" (UniqueName: \"kubernetes.io/projected/ac289554-15c0-4b1d-a8f9-7e1e1ccb93b8-kube-api-access-tw2rw\") pod \"nmstate-metrics-54757c584b-ksm95\" (UID: \"ac289554-15c0-4b1d-a8f9-7e1e1ccb93b8\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-ksm95" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.459249 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e53998dd-17ef-4f3f-989b-2304bc9357aa-nmstate-lock\") pod \"nmstate-handler-n8gtl\" (UID: \"e53998dd-17ef-4f3f-989b-2304bc9357aa\") " pod="openshift-nmstate/nmstate-handler-n8gtl" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.459432 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/e53998dd-17ef-4f3f-989b-2304bc9357aa-dbus-socket\") pod \"nmstate-handler-n8gtl\" (UID: \"e53998dd-17ef-4f3f-989b-2304bc9357aa\") " pod="openshift-nmstate/nmstate-handler-n8gtl" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.459625 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e53998dd-17ef-4f3f-989b-2304bc9357aa-ovs-socket\") pod \"nmstate-handler-n8gtl\" (UID: \"e53998dd-17ef-4f3f-989b-2304bc9357aa\") " pod="openshift-nmstate/nmstate-handler-n8gtl" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.476972 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h992z\" (UniqueName: \"kubernetes.io/projected/3cc45769-a4ca-4c0e-aafd-138134ecd07e-kube-api-access-h992z\") 
pod \"nmstate-webhook-8474b5b9d8-lctvz\" (UID: \"3cc45769-a4ca-4c0e-aafd-138134ecd07e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lctvz" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.477412 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3cc45769-a4ca-4c0e-aafd-138134ecd07e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-lctvz\" (UID: \"3cc45769-a4ca-4c0e-aafd-138134ecd07e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lctvz" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.485860 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw2rw\" (UniqueName: \"kubernetes.io/projected/ac289554-15c0-4b1d-a8f9-7e1e1ccb93b8-kube-api-access-tw2rw\") pod \"nmstate-metrics-54757c584b-ksm95\" (UID: \"ac289554-15c0-4b1d-a8f9-7e1e1ccb93b8\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-ksm95" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.487968 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcfd4\" (UniqueName: \"kubernetes.io/projected/e53998dd-17ef-4f3f-989b-2304bc9357aa-kube-api-access-gcfd4\") pod \"nmstate-handler-n8gtl\" (UID: \"e53998dd-17ef-4f3f-989b-2304bc9357aa\") " pod="openshift-nmstate/nmstate-handler-n8gtl" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.505636 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-zx7nw"] Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.506234 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zx7nw" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.508690 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.510739 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-pwm5q" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.511874 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.528374 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-zx7nw"] Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.560288 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3767343e-b558-43fb-a813-4ddac7e562a6-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-zx7nw\" (UID: \"3767343e-b558-43fb-a813-4ddac7e562a6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zx7nw" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.560335 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2v9s\" (UniqueName: \"kubernetes.io/projected/3767343e-b558-43fb-a813-4ddac7e562a6-kube-api-access-b2v9s\") pod \"nmstate-console-plugin-7754f76f8b-zx7nw\" (UID: \"3767343e-b558-43fb-a813-4ddac7e562a6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zx7nw" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.560382 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/3767343e-b558-43fb-a813-4ddac7e562a6-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-zx7nw\" (UID: \"3767343e-b558-43fb-a813-4ddac7e562a6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zx7nw" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.601743 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-ksm95" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.615692 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lctvz" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.661783 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3767343e-b558-43fb-a813-4ddac7e562a6-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-zx7nw\" (UID: \"3767343e-b558-43fb-a813-4ddac7e562a6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zx7nw" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.661849 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2v9s\" (UniqueName: \"kubernetes.io/projected/3767343e-b558-43fb-a813-4ddac7e562a6-kube-api-access-b2v9s\") pod \"nmstate-console-plugin-7754f76f8b-zx7nw\" (UID: \"3767343e-b558-43fb-a813-4ddac7e562a6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zx7nw" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.661928 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/3767343e-b558-43fb-a813-4ddac7e562a6-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-zx7nw\" (UID: \"3767343e-b558-43fb-a813-4ddac7e562a6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zx7nw" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.662960 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/3767343e-b558-43fb-a813-4ddac7e562a6-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-zx7nw\" (UID: \"3767343e-b558-43fb-a813-4ddac7e562a6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zx7nw" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.665912 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3767343e-b558-43fb-a813-4ddac7e562a6-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-zx7nw\" (UID: \"3767343e-b558-43fb-a813-4ddac7e562a6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zx7nw" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.681929 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-n8gtl" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.694207 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7cb9595f68-pspfg"] Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.695298 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7cb9595f68-pspfg" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.711576 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2v9s\" (UniqueName: \"kubernetes.io/projected/3767343e-b558-43fb-a813-4ddac7e562a6-kube-api-access-b2v9s\") pod \"nmstate-console-plugin-7754f76f8b-zx7nw\" (UID: \"3767343e-b558-43fb-a813-4ddac7e562a6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zx7nw" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.713351 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7cb9595f68-pspfg"] Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.763578 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/528b7cc3-52db-443d-8958-d6746bd4334f-console-serving-cert\") pod \"console-7cb9595f68-pspfg\" (UID: \"528b7cc3-52db-443d-8958-d6746bd4334f\") " pod="openshift-console/console-7cb9595f68-pspfg" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.763802 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/528b7cc3-52db-443d-8958-d6746bd4334f-console-config\") pod \"console-7cb9595f68-pspfg\" (UID: \"528b7cc3-52db-443d-8958-d6746bd4334f\") " pod="openshift-console/console-7cb9595f68-pspfg" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.763833 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/528b7cc3-52db-443d-8958-d6746bd4334f-service-ca\") pod \"console-7cb9595f68-pspfg\" (UID: \"528b7cc3-52db-443d-8958-d6746bd4334f\") " pod="openshift-console/console-7cb9595f68-pspfg" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.763921 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/528b7cc3-52db-443d-8958-d6746bd4334f-console-oauth-config\") pod \"console-7cb9595f68-pspfg\" (UID: \"528b7cc3-52db-443d-8958-d6746bd4334f\") " pod="openshift-console/console-7cb9595f68-pspfg" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.764007 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/528b7cc3-52db-443d-8958-d6746bd4334f-trusted-ca-bundle\") pod \"console-7cb9595f68-pspfg\" (UID: \"528b7cc3-52db-443d-8958-d6746bd4334f\") " pod="openshift-console/console-7cb9595f68-pspfg" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.764070 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/528b7cc3-52db-443d-8958-d6746bd4334f-oauth-serving-cert\") pod \"console-7cb9595f68-pspfg\" (UID: \"528b7cc3-52db-443d-8958-d6746bd4334f\") " pod="openshift-console/console-7cb9595f68-pspfg" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.764156 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn9rx\" (UniqueName: \"kubernetes.io/projected/528b7cc3-52db-443d-8958-d6746bd4334f-kube-api-access-bn9rx\") pod \"console-7cb9595f68-pspfg\" (UID: \"528b7cc3-52db-443d-8958-d6746bd4334f\") " 
pod="openshift-console/console-7cb9595f68-pspfg" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.849689 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zx7nw" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.869296 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/528b7cc3-52db-443d-8958-d6746bd4334f-oauth-serving-cert\") pod \"console-7cb9595f68-pspfg\" (UID: \"528b7cc3-52db-443d-8958-d6746bd4334f\") " pod="openshift-console/console-7cb9595f68-pspfg" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.869361 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn9rx\" (UniqueName: \"kubernetes.io/projected/528b7cc3-52db-443d-8958-d6746bd4334f-kube-api-access-bn9rx\") pod \"console-7cb9595f68-pspfg\" (UID: \"528b7cc3-52db-443d-8958-d6746bd4334f\") " pod="openshift-console/console-7cb9595f68-pspfg" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.869407 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/528b7cc3-52db-443d-8958-d6746bd4334f-console-serving-cert\") pod \"console-7cb9595f68-pspfg\" (UID: \"528b7cc3-52db-443d-8958-d6746bd4334f\") " pod="openshift-console/console-7cb9595f68-pspfg" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.869434 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/528b7cc3-52db-443d-8958-d6746bd4334f-console-config\") pod \"console-7cb9595f68-pspfg\" (UID: \"528b7cc3-52db-443d-8958-d6746bd4334f\") " pod="openshift-console/console-7cb9595f68-pspfg" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.869461 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/528b7cc3-52db-443d-8958-d6746bd4334f-service-ca\") pod \"console-7cb9595f68-pspfg\" (UID: \"528b7cc3-52db-443d-8958-d6746bd4334f\") " pod="openshift-console/console-7cb9595f68-pspfg" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.869493 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/528b7cc3-52db-443d-8958-d6746bd4334f-console-oauth-config\") pod \"console-7cb9595f68-pspfg\" (UID: \"528b7cc3-52db-443d-8958-d6746bd4334f\") " pod="openshift-console/console-7cb9595f68-pspfg" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.869524 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/528b7cc3-52db-443d-8958-d6746bd4334f-trusted-ca-bundle\") pod \"console-7cb9595f68-pspfg\" (UID: \"528b7cc3-52db-443d-8958-d6746bd4334f\") " pod="openshift-console/console-7cb9595f68-pspfg" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.870561 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/528b7cc3-52db-443d-8958-d6746bd4334f-trusted-ca-bundle\") pod \"console-7cb9595f68-pspfg\" (UID: \"528b7cc3-52db-443d-8958-d6746bd4334f\") " pod="openshift-console/console-7cb9595f68-pspfg" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.871209 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/528b7cc3-52db-443d-8958-d6746bd4334f-oauth-serving-cert\") pod \"console-7cb9595f68-pspfg\" (UID: \"528b7cc3-52db-443d-8958-d6746bd4334f\") " pod="openshift-console/console-7cb9595f68-pspfg" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.872551 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/528b7cc3-52db-443d-8958-d6746bd4334f-console-config\") pod \"console-7cb9595f68-pspfg\" (UID: \"528b7cc3-52db-443d-8958-d6746bd4334f\") " pod="openshift-console/console-7cb9595f68-pspfg" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.872981 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/528b7cc3-52db-443d-8958-d6746bd4334f-service-ca\") pod \"console-7cb9595f68-pspfg\" (UID: \"528b7cc3-52db-443d-8958-d6746bd4334f\") " pod="openshift-console/console-7cb9595f68-pspfg" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.877105 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/528b7cc3-52db-443d-8958-d6746bd4334f-console-serving-cert\") pod \"console-7cb9595f68-pspfg\" (UID: \"528b7cc3-52db-443d-8958-d6746bd4334f\") " pod="openshift-console/console-7cb9595f68-pspfg" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.879292 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/528b7cc3-52db-443d-8958-d6746bd4334f-console-oauth-config\") pod \"console-7cb9595f68-pspfg\" (UID: \"528b7cc3-52db-443d-8958-d6746bd4334f\") " pod="openshift-console/console-7cb9595f68-pspfg" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.893366 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn9rx\" (UniqueName: \"kubernetes.io/projected/528b7cc3-52db-443d-8958-d6746bd4334f-kube-api-access-bn9rx\") pod \"console-7cb9595f68-pspfg\" (UID: \"528b7cc3-52db-443d-8958-d6746bd4334f\") " pod="openshift-console/console-7cb9595f68-pspfg" Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.916087 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-lctvz"] Jan 23 17:03:17 crc kubenswrapper[4606]: I0123 17:03:17.982173 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-ksm95"] Jan 23 17:03:17 crc kubenswrapper[4606]: W0123 17:03:17.985778 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac289554_15c0_4b1d_a8f9_7e1e1ccb93b8.slice/crio-48dfe5edb771850e11f00765784ec1d9e38051a6024584ac8e19070c039d7e99 WatchSource:0}: Error finding container 48dfe5edb771850e11f00765784ec1d9e38051a6024584ac8e19070c039d7e99: Status 404 returned error can't find the container with id 48dfe5edb771850e11f00765784ec1d9e38051a6024584ac8e19070c039d7e99 Jan 23 17:03:18 crc kubenswrapper[4606]: I0123 17:03:18.018135 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7cb9595f68-pspfg" Jan 23 17:03:18 crc kubenswrapper[4606]: I0123 17:03:18.186386 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7cb9595f68-pspfg"] Jan 23 17:03:18 crc kubenswrapper[4606]: W0123 17:03:18.192013 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod528b7cc3_52db_443d_8958_d6746bd4334f.slice/crio-142627c47ad3fc89859d836b29caa70ca1370e9850bbceb1c6231d99cf231218 WatchSource:0}: Error finding container 142627c47ad3fc89859d836b29caa70ca1370e9850bbceb1c6231d99cf231218: Status 404 returned error can't find the container with id 142627c47ad3fc89859d836b29caa70ca1370e9850bbceb1c6231d99cf231218 Jan 23 17:03:18 crc kubenswrapper[4606]: I0123 17:03:18.268709 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-zx7nw"] Jan 23 17:03:18 crc kubenswrapper[4606]: I0123 17:03:18.480390 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7cb9595f68-pspfg" event={"ID":"528b7cc3-52db-443d-8958-d6746bd4334f","Type":"ContainerStarted","Data":"142627c47ad3fc89859d836b29caa70ca1370e9850bbceb1c6231d99cf231218"} Jan 23 17:03:18 crc kubenswrapper[4606]: I0123 17:03:18.481485 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zx7nw" event={"ID":"3767343e-b558-43fb-a813-4ddac7e562a6","Type":"ContainerStarted","Data":"1f6ae21e9532eb31347b5a6aa1a129e2406d0d6816794d1361eabbe571f07948"} Jan 23 17:03:18 crc kubenswrapper[4606]: I0123 17:03:18.482521 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-n8gtl" event={"ID":"e53998dd-17ef-4f3f-989b-2304bc9357aa","Type":"ContainerStarted","Data":"456e914ebc4a630d33c5e46028c797cbba3b458ca320cd5bf6e858de651715a4"} Jan 23 17:03:18 crc kubenswrapper[4606]: I0123 17:03:18.483629 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-ksm95" event={"ID":"ac289554-15c0-4b1d-a8f9-7e1e1ccb93b8","Type":"ContainerStarted","Data":"48dfe5edb771850e11f00765784ec1d9e38051a6024584ac8e19070c039d7e99"} Jan 23 17:03:18 crc kubenswrapper[4606]: I0123 17:03:18.484634 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lctvz" event={"ID":"3cc45769-a4ca-4c0e-aafd-138134ecd07e","Type":"ContainerStarted","Data":"3e9a047deac42ae9f06786a5f7bc6133171c2ce9f2640def50aa449b39910c6d"} Jan 23 17:03:19 crc kubenswrapper[4606]: I0123 17:03:19.490325 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7cb9595f68-pspfg" event={"ID":"528b7cc3-52db-443d-8958-d6746bd4334f","Type":"ContainerStarted","Data":"f3c471896568b668682c1a145e8c7b9ea8eb881ccbdba848f209122bb334e16d"} Jan 23 17:03:19 crc kubenswrapper[4606]: I0123 17:03:19.511160 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7cb9595f68-pspfg" podStartSLOduration=2.5111446859999997 podStartE2EDuration="2.511144686s" podCreationTimestamp="2026-01-23 17:03:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:03:19.510161582 +0000 UTC m=+633.510353988" watchObservedRunningTime="2026-01-23 17:03:19.511144686 +0000 UTC m=+633.511337092" Jan 23 17:03:21 crc kubenswrapper[4606]: I0123 17:03:21.517193 4606 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-ksm95" event={"ID":"ac289554-15c0-4b1d-a8f9-7e1e1ccb93b8","Type":"ContainerStarted","Data":"577204aa8f9cdba1e2a727489611985be323dd5332223fa4661244abb62753b6"} Jan 23 17:03:21 crc kubenswrapper[4606]: I0123 17:03:21.518857 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lctvz" event={"ID":"3cc45769-a4ca-4c0e-aafd-138134ecd07e","Type":"ContainerStarted","Data":"42d91864e88ede37ea43b0d2f65600ee029666d42b55b52161933731958d2437"} Jan 23 17:03:21 crc kubenswrapper[4606]: I0123 17:03:21.520826 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lctvz" Jan 23 17:03:21 crc kubenswrapper[4606]: I0123 17:03:21.525139 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-n8gtl" event={"ID":"e53998dd-17ef-4f3f-989b-2304bc9357aa","Type":"ContainerStarted","Data":"4baa539eed78b28cdff2453c7a3fa2482789365834843dc94c6e8643330d127d"} Jan 23 17:03:21 crc kubenswrapper[4606]: I0123 17:03:21.525261 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-n8gtl" Jan 23 17:03:21 crc kubenswrapper[4606]: I0123 17:03:21.548892 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lctvz" podStartSLOduration=2.095716193 podStartE2EDuration="4.548870203s" podCreationTimestamp="2026-01-23 17:03:17 +0000 UTC" firstStartedPulling="2026-01-23 17:03:17.930703173 +0000 UTC m=+631.930895579" lastFinishedPulling="2026-01-23 17:03:20.383857183 +0000 UTC m=+634.384049589" observedRunningTime="2026-01-23 17:03:21.545082296 +0000 UTC m=+635.545274722" watchObservedRunningTime="2026-01-23 17:03:21.548870203 +0000 UTC m=+635.549062609" Jan 23 17:03:21 crc kubenswrapper[4606]: I0123 17:03:21.577985 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-n8gtl" podStartSLOduration=1.979540482 podStartE2EDuration="4.577967776s" podCreationTimestamp="2026-01-23 17:03:17 +0000 UTC" firstStartedPulling="2026-01-23 17:03:17.750936506 +0000 UTC m=+631.751128912" lastFinishedPulling="2026-01-23 17:03:20.3493638 +0000 UTC m=+634.349556206" observedRunningTime="2026-01-23 17:03:21.574117688 +0000 UTC m=+635.574310094" watchObservedRunningTime="2026-01-23 17:03:21.577967776 +0000 UTC m=+635.578160182" Jan 23 17:03:22 crc kubenswrapper[4606]: I0123 17:03:22.541664 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zx7nw" event={"ID":"3767343e-b558-43fb-a813-4ddac7e562a6","Type":"ContainerStarted","Data":"5e862ccd9c496567537c07cb270ae783f76a58aac82a27a6869807318bbba730"} Jan 23 17:03:22 crc kubenswrapper[4606]: I0123 17:03:22.558212 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zx7nw" podStartSLOduration=2.310172888 podStartE2EDuration="5.558196182s" podCreationTimestamp="2026-01-23 17:03:17 +0000 UTC" firstStartedPulling="2026-01-23 17:03:18.278506177 +0000 UTC m=+632.278698573" lastFinishedPulling="2026-01-23 17:03:21.526529451 +0000 UTC m=+635.526721867" observedRunningTime="2026-01-23 17:03:22.554553149 +0000 UTC m=+636.554745555" watchObservedRunningTime="2026-01-23 17:03:22.558196182 +0000 UTC m=+636.558388588" Jan 23 17:03:23 crc kubenswrapper[4606]: I0123 
Jan 23 17:03:23 crc kubenswrapper[4606]: I0123 17:03:23.547395 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-ksm95" event={"ID":"ac289554-15c0-4b1d-a8f9-7e1e1ccb93b8","Type":"ContainerStarted","Data":"2db63492d3c5d7128ad35e405b14a307d08e57496ba6b4f200253b3701987dc8"}
Jan 23 17:03:23 crc kubenswrapper[4606]: I0123 17:03:23.567369 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-ksm95" podStartSLOduration=1.4051695450000001 podStartE2EDuration="6.567353246s" podCreationTimestamp="2026-01-23 17:03:17 +0000 UTC" firstStartedPulling="2026-01-23 17:03:17.987954777 +0000 UTC m=+631.988147183" lastFinishedPulling="2026-01-23 17:03:23.150138478 +0000 UTC m=+637.150330884" observedRunningTime="2026-01-23 17:03:23.563694404 +0000 UTC m=+637.563886830" watchObservedRunningTime="2026-01-23 17:03:23.567353246 +0000 UTC m=+637.567545652"
Jan 23 17:03:27 crc kubenswrapper[4606]: I0123 17:03:27.706497 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-n8gtl"
Jan 23 17:03:28 crc kubenswrapper[4606]: I0123 17:03:28.018664 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7cb9595f68-pspfg"
Jan 23 17:03:28 crc kubenswrapper[4606]: I0123 17:03:28.018705 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7cb9595f68-pspfg"
Jan 23 17:03:28 crc kubenswrapper[4606]: I0123 17:03:28.022831 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7cb9595f68-pspfg"
Jan 23 17:03:28 crc kubenswrapper[4606]: I0123 17:03:28.582867 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7cb9595f68-pspfg"
Jan 23 17:03:28 crc kubenswrapper[4606]: I0123 17:03:28.659014 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-wlpwz"]
Jan 23 17:03:29 crc kubenswrapper[4606]: I0123 17:03:29.934050 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 17:03:29 crc kubenswrapper[4606]: I0123 17:03:29.934733 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 17:03:29 crc kubenswrapper[4606]: I0123 17:03:29.934792 4606 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7"
Jan 23 17:03:29 crc kubenswrapper[4606]: I0123 17:03:29.935241 4606 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6bf772329cdfa9ebae83dfcb79df9cfd8a68a9a3c277497c6d856cec3d4e5cf8"} pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 17:03:29 crc kubenswrapper[4606]: I0123 17:03:29.935303 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" containerID="cri-o://6bf772329cdfa9ebae83dfcb79df9cfd8a68a9a3c277497c6d856cec3d4e5cf8" gracePeriod=600
Jan 23 17:03:30 crc kubenswrapper[4606]: I0123 17:03:30.593445 4606 generic.go:334] "Generic (PLEG): container finished" podID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerID="6bf772329cdfa9ebae83dfcb79df9cfd8a68a9a3c277497c6d856cec3d4e5cf8" exitCode=0
Jan 23 17:03:30 crc kubenswrapper[4606]: I0123 17:03:30.593804 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerDied","Data":"6bf772329cdfa9ebae83dfcb79df9cfd8a68a9a3c277497c6d856cec3d4e5cf8"}
Jan 23 17:03:30 crc kubenswrapper[4606]: I0123 17:03:30.593833 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerStarted","Data":"8f3c9d6abf9f7db362dce7fd403f9c2cfbf053099332de8a97b1275b83560c1e"}
Jan 23 17:03:30 crc kubenswrapper[4606]: I0123 17:03:30.593850 4606 scope.go:117] "RemoveContainer" containerID="8cf4b07e2ad3950b42f8b972aeb9bf7966a5e6fa914bd47d93edb65756e33cc8"
Jan 23 17:03:37 crc kubenswrapper[4606]: I0123 17:03:37.622499 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lctvz"
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4w9xm" Jan 23 17:03:50 crc kubenswrapper[4606]: I0123 17:03:50.049866 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 23 17:03:50 crc kubenswrapper[4606]: I0123 17:03:50.063686 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4w9xm"] Jan 23 17:03:50 crc kubenswrapper[4606]: I0123 17:03:50.114547 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41aa3578-313b-4e05-a97c-778b0ad81093-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4w9xm\" (UID: \"41aa3578-313b-4e05-a97c-778b0ad81093\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4w9xm" Jan 23 17:03:50 crc kubenswrapper[4606]: I0123 17:03:50.115103 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41aa3578-313b-4e05-a97c-778b0ad81093-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4w9xm\" (UID: \"41aa3578-313b-4e05-a97c-778b0ad81093\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4w9xm" Jan 23 17:03:50 crc kubenswrapper[4606]: I0123 17:03:50.115204 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b78qn\" (UniqueName: \"kubernetes.io/projected/41aa3578-313b-4e05-a97c-778b0ad81093-kube-api-access-b78qn\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4w9xm\" (UID: \"41aa3578-313b-4e05-a97c-778b0ad81093\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4w9xm" Jan 23 17:03:50 crc kubenswrapper[4606]: I0123 17:03:50.216828 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41aa3578-313b-4e05-a97c-778b0ad81093-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4w9xm\" (UID: \"41aa3578-313b-4e05-a97c-778b0ad81093\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4w9xm" Jan 23 17:03:50 crc kubenswrapper[4606]: I0123 17:03:50.216907 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41aa3578-313b-4e05-a97c-778b0ad81093-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4w9xm\" (UID: \"41aa3578-313b-4e05-a97c-778b0ad81093\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4w9xm" Jan 23 17:03:50 crc kubenswrapper[4606]: I0123 17:03:50.216940 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b78qn\" (UniqueName: \"kubernetes.io/projected/41aa3578-313b-4e05-a97c-778b0ad81093-kube-api-access-b78qn\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4w9xm\" (UID: \"41aa3578-313b-4e05-a97c-778b0ad81093\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4w9xm" Jan 23 17:03:50 crc kubenswrapper[4606]: I0123 17:03:50.217327 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/41aa3578-313b-4e05-a97c-778b0ad81093-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4w9xm\" (UID: \"41aa3578-313b-4e05-a97c-778b0ad81093\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4w9xm" Jan 23 17:03:50 crc kubenswrapper[4606]: I0123 17:03:50.217495 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41aa3578-313b-4e05-a97c-778b0ad81093-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4w9xm\" (UID: \"41aa3578-313b-4e05-a97c-778b0ad81093\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4w9xm" Jan 23 17:03:50 crc kubenswrapper[4606]: I0123 17:03:50.245710 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b78qn\" (UniqueName: \"kubernetes.io/projected/41aa3578-313b-4e05-a97c-778b0ad81093-kube-api-access-b78qn\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4w9xm\" (UID: \"41aa3578-313b-4e05-a97c-778b0ad81093\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4w9xm" Jan 23 17:03:50 crc kubenswrapper[4606]: I0123 17:03:50.356003 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4w9xm" Jan 23 17:03:50 crc kubenswrapper[4606]: I0123 17:03:50.796754 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4w9xm"] Jan 23 17:03:51 crc kubenswrapper[4606]: I0123 17:03:51.784129 4606 generic.go:334] "Generic (PLEG): container finished" podID="41aa3578-313b-4e05-a97c-778b0ad81093" containerID="111a7fb6c050760458e0f367f88970346aadb5b4ccd6cd82331494192cbc16c3" exitCode=0 Jan 23 17:03:51 crc kubenswrapper[4606]: I0123 17:03:51.784182 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4w9xm" event={"ID":"41aa3578-313b-4e05-a97c-778b0ad81093","Type":"ContainerDied","Data":"111a7fb6c050760458e0f367f88970346aadb5b4ccd6cd82331494192cbc16c3"} Jan 23 17:03:51 crc kubenswrapper[4606]: I0123 17:03:51.784212 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4w9xm" event={"ID":"41aa3578-313b-4e05-a97c-778b0ad81093","Type":"ContainerStarted","Data":"9abfe2296bd3c1a4afdeff8fca1158a17271fab3ed3dd7ad483c3d0f3c2064bf"} Jan 23 17:03:53 crc kubenswrapper[4606]: I0123 17:03:53.701007 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-wlpwz" podUID="767b5dce-2e44-401a-b7af-3cb97ff48f69" containerName="console" containerID="cri-o://84f4772f1832e0f838069fee98134d3f93ed262ef14fcf051711ed3264b99eca" gracePeriod=15 Jan 23 17:03:53 crc kubenswrapper[4606]: I0123 17:03:53.803023 4606 generic.go:334] "Generic (PLEG): container finished" podID="41aa3578-313b-4e05-a97c-778b0ad81093" containerID="69af3031addff6cd988fdf59cf173d502df4bcf32c008aa2bc4b7fb4c406f678" exitCode=0 Jan 23 17:03:53 crc kubenswrapper[4606]: I0123 17:03:53.803072 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4w9xm" 
event={"ID":"41aa3578-313b-4e05-a97c-778b0ad81093","Type":"ContainerDied","Data":"69af3031addff6cd988fdf59cf173d502df4bcf32c008aa2bc4b7fb4c406f678"} Jan 23 17:03:54 crc kubenswrapper[4606]: I0123 17:03:54.813849 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-wlpwz_767b5dce-2e44-401a-b7af-3cb97ff48f69/console/0.log" Jan 23 17:03:54 crc kubenswrapper[4606]: I0123 17:03:54.813905 4606 generic.go:334] "Generic (PLEG): container finished" podID="767b5dce-2e44-401a-b7af-3cb97ff48f69" containerID="84f4772f1832e0f838069fee98134d3f93ed262ef14fcf051711ed3264b99eca" exitCode=2 Jan 23 17:03:54 crc kubenswrapper[4606]: I0123 17:03:54.813937 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wlpwz" event={"ID":"767b5dce-2e44-401a-b7af-3cb97ff48f69","Type":"ContainerDied","Data":"84f4772f1832e0f838069fee98134d3f93ed262ef14fcf051711ed3264b99eca"} Jan 23 17:03:55 crc kubenswrapper[4606]: I0123 17:03:55.625020 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-wlpwz_767b5dce-2e44-401a-b7af-3cb97ff48f69/console/0.log" Jan 23 17:03:55 crc kubenswrapper[4606]: I0123 17:03:55.625502 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-wlpwz" Jan 23 17:03:55 crc kubenswrapper[4606]: I0123 17:03:55.803207 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/767b5dce-2e44-401a-b7af-3cb97ff48f69-service-ca\") pod \"767b5dce-2e44-401a-b7af-3cb97ff48f69\" (UID: \"767b5dce-2e44-401a-b7af-3cb97ff48f69\") " Jan 23 17:03:55 crc kubenswrapper[4606]: I0123 17:03:55.803343 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/767b5dce-2e44-401a-b7af-3cb97ff48f69-oauth-serving-cert\") pod \"767b5dce-2e44-401a-b7af-3cb97ff48f69\" (UID: \"767b5dce-2e44-401a-b7af-3cb97ff48f69\") " Jan 23 17:03:55 crc kubenswrapper[4606]: I0123 17:03:55.803434 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/767b5dce-2e44-401a-b7af-3cb97ff48f69-trusted-ca-bundle\") pod \"767b5dce-2e44-401a-b7af-3cb97ff48f69\" (UID: \"767b5dce-2e44-401a-b7af-3cb97ff48f69\") " Jan 23 17:03:55 crc kubenswrapper[4606]: I0123 17:03:55.803471 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8426\" (UniqueName: \"kubernetes.io/projected/767b5dce-2e44-401a-b7af-3cb97ff48f69-kube-api-access-t8426\") pod \"767b5dce-2e44-401a-b7af-3cb97ff48f69\" (UID: \"767b5dce-2e44-401a-b7af-3cb97ff48f69\") " Jan 23 17:03:55 crc kubenswrapper[4606]: I0123 17:03:55.803533 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/767b5dce-2e44-401a-b7af-3cb97ff48f69-console-serving-cert\") pod \"767b5dce-2e44-401a-b7af-3cb97ff48f69\" (UID: \"767b5dce-2e44-401a-b7af-3cb97ff48f69\") " Jan 23 17:03:55 crc kubenswrapper[4606]: I0123 17:03:55.803648 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/767b5dce-2e44-401a-b7af-3cb97ff48f69-console-oauth-config\") pod \"767b5dce-2e44-401a-b7af-3cb97ff48f69\" (UID: \"767b5dce-2e44-401a-b7af-3cb97ff48f69\") " Jan 23 17:03:55 crc 
kubenswrapper[4606]: I0123 17:03:55.803719 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/767b5dce-2e44-401a-b7af-3cb97ff48f69-console-config\") pod \"767b5dce-2e44-401a-b7af-3cb97ff48f69\" (UID: \"767b5dce-2e44-401a-b7af-3cb97ff48f69\") " Jan 23 17:03:55 crc kubenswrapper[4606]: I0123 17:03:55.804189 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/767b5dce-2e44-401a-b7af-3cb97ff48f69-service-ca" (OuterVolumeSpecName: "service-ca") pod "767b5dce-2e44-401a-b7af-3cb97ff48f69" (UID: "767b5dce-2e44-401a-b7af-3cb97ff48f69"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:03:55 crc kubenswrapper[4606]: I0123 17:03:55.804215 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/767b5dce-2e44-401a-b7af-3cb97ff48f69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "767b5dce-2e44-401a-b7af-3cb97ff48f69" (UID: "767b5dce-2e44-401a-b7af-3cb97ff48f69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:03:55 crc kubenswrapper[4606]: I0123 17:03:55.804720 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/767b5dce-2e44-401a-b7af-3cb97ff48f69-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "767b5dce-2e44-401a-b7af-3cb97ff48f69" (UID: "767b5dce-2e44-401a-b7af-3cb97ff48f69"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:03:55 crc kubenswrapper[4606]: I0123 17:03:55.804747 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/767b5dce-2e44-401a-b7af-3cb97ff48f69-console-config" (OuterVolumeSpecName: "console-config") pod "767b5dce-2e44-401a-b7af-3cb97ff48f69" (UID: "767b5dce-2e44-401a-b7af-3cb97ff48f69"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:03:55 crc kubenswrapper[4606]: I0123 17:03:55.809824 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/767b5dce-2e44-401a-b7af-3cb97ff48f69-kube-api-access-t8426" (OuterVolumeSpecName: "kube-api-access-t8426") pod "767b5dce-2e44-401a-b7af-3cb97ff48f69" (UID: "767b5dce-2e44-401a-b7af-3cb97ff48f69"). InnerVolumeSpecName "kube-api-access-t8426". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:03:55 crc kubenswrapper[4606]: I0123 17:03:55.810115 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/767b5dce-2e44-401a-b7af-3cb97ff48f69-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "767b5dce-2e44-401a-b7af-3cb97ff48f69" (UID: "767b5dce-2e44-401a-b7af-3cb97ff48f69"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:03:55 crc kubenswrapper[4606]: I0123 17:03:55.811311 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/767b5dce-2e44-401a-b7af-3cb97ff48f69-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "767b5dce-2e44-401a-b7af-3cb97ff48f69" (UID: "767b5dce-2e44-401a-b7af-3cb97ff48f69"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:03:55 crc kubenswrapper[4606]: I0123 17:03:55.822846 4606 generic.go:334] "Generic (PLEG): container finished" podID="41aa3578-313b-4e05-a97c-778b0ad81093" containerID="199e93b94893b0236982922a2712471bf5d8e43c9436c55350c51eca32486791" exitCode=0 Jan 23 17:03:55 crc kubenswrapper[4606]: I0123 17:03:55.822935 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4w9xm" event={"ID":"41aa3578-313b-4e05-a97c-778b0ad81093","Type":"ContainerDied","Data":"199e93b94893b0236982922a2712471bf5d8e43c9436c55350c51eca32486791"} Jan 23 17:03:55 crc kubenswrapper[4606]: I0123 17:03:55.826729 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-wlpwz_767b5dce-2e44-401a-b7af-3cb97ff48f69/console/0.log" Jan 23 17:03:55 crc kubenswrapper[4606]: I0123 17:03:55.826814 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wlpwz" event={"ID":"767b5dce-2e44-401a-b7af-3cb97ff48f69","Type":"ContainerDied","Data":"6c0236bbc5128442f5a4413c16afd38848c30df5a67b13949662ddd2a63e529e"} Jan 23 17:03:55 crc kubenswrapper[4606]: I0123 17:03:55.826884 4606 scope.go:117] "RemoveContainer" containerID="84f4772f1832e0f838069fee98134d3f93ed262ef14fcf051711ed3264b99eca" Jan 23 17:03:55 crc kubenswrapper[4606]: I0123 17:03:55.826892 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-wlpwz" Jan 23 17:03:55 crc kubenswrapper[4606]: I0123 17:03:55.868832 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-wlpwz"] Jan 23 17:03:55 crc kubenswrapper[4606]: I0123 17:03:55.873222 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-wlpwz"] Jan 23 17:03:55 crc kubenswrapper[4606]: I0123 17:03:55.904799 4606 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/767b5dce-2e44-401a-b7af-3cb97ff48f69-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 23 17:03:55 crc kubenswrapper[4606]: I0123 17:03:55.904827 4606 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/767b5dce-2e44-401a-b7af-3cb97ff48f69-console-config\") on node \"crc\" DevicePath \"\"" Jan 23 17:03:55 crc kubenswrapper[4606]: I0123 17:03:55.904837 4606 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/767b5dce-2e44-401a-b7af-3cb97ff48f69-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 17:03:55 crc kubenswrapper[4606]: I0123 17:03:55.904845 4606 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/767b5dce-2e44-401a-b7af-3cb97ff48f69-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 17:03:55 crc kubenswrapper[4606]: I0123 17:03:55.904856 4606 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/767b5dce-2e44-401a-b7af-3cb97ff48f69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:03:55 crc kubenswrapper[4606]: I0123 17:03:55.904864 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8426\" (UniqueName: \"kubernetes.io/projected/767b5dce-2e44-401a-b7af-3cb97ff48f69-kube-api-access-t8426\") on node \"crc\" DevicePath \"\"" Jan 23 
17:03:55 crc kubenswrapper[4606]: I0123 17:03:55.904871 4606 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/767b5dce-2e44-401a-b7af-3cb97ff48f69-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 17:03:56 crc kubenswrapper[4606]: I0123 17:03:56.372726 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="767b5dce-2e44-401a-b7af-3cb97ff48f69" path="/var/lib/kubelet/pods/767b5dce-2e44-401a-b7af-3cb97ff48f69/volumes" Jan 23 17:03:57 crc kubenswrapper[4606]: I0123 17:03:57.091041 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4w9xm" Jan 23 17:03:57 crc kubenswrapper[4606]: I0123 17:03:57.243405 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b78qn\" (UniqueName: \"kubernetes.io/projected/41aa3578-313b-4e05-a97c-778b0ad81093-kube-api-access-b78qn\") pod \"41aa3578-313b-4e05-a97c-778b0ad81093\" (UID: \"41aa3578-313b-4e05-a97c-778b0ad81093\") " Jan 23 17:03:57 crc kubenswrapper[4606]: I0123 17:03:57.243514 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41aa3578-313b-4e05-a97c-778b0ad81093-util\") pod \"41aa3578-313b-4e05-a97c-778b0ad81093\" (UID: \"41aa3578-313b-4e05-a97c-778b0ad81093\") " Jan 23 17:03:57 crc kubenswrapper[4606]: I0123 17:03:57.243544 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41aa3578-313b-4e05-a97c-778b0ad81093-bundle\") pod \"41aa3578-313b-4e05-a97c-778b0ad81093\" (UID: \"41aa3578-313b-4e05-a97c-778b0ad81093\") " Jan 23 17:03:57 crc kubenswrapper[4606]: I0123 17:03:57.244553 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41aa3578-313b-4e05-a97c-778b0ad81093-bundle" (OuterVolumeSpecName: "bundle") pod "41aa3578-313b-4e05-a97c-778b0ad81093" (UID: "41aa3578-313b-4e05-a97c-778b0ad81093"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:03:57 crc kubenswrapper[4606]: I0123 17:03:57.246549 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41aa3578-313b-4e05-a97c-778b0ad81093-kube-api-access-b78qn" (OuterVolumeSpecName: "kube-api-access-b78qn") pod "41aa3578-313b-4e05-a97c-778b0ad81093" (UID: "41aa3578-313b-4e05-a97c-778b0ad81093"). InnerVolumeSpecName "kube-api-access-b78qn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:03:57 crc kubenswrapper[4606]: I0123 17:03:57.253517 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41aa3578-313b-4e05-a97c-778b0ad81093-util" (OuterVolumeSpecName: "util") pod "41aa3578-313b-4e05-a97c-778b0ad81093" (UID: "41aa3578-313b-4e05-a97c-778b0ad81093"). InnerVolumeSpecName "util". 
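
Teardown above mirrors setup: each volume of a deleted pod gets an "UnmountVolume started" (reconciler_common.go:159), an "UnmountVolume.TearDown succeeded" (operation_generator.go:803), and finally a "Volume detached" record (reconciler_common.go:293), after which kubelet_volumes.go can clean the orphaned volumes directory. When auditing a log like this one it can help to confirm that every started unmount actually completed; a hypothetical checker, with regexes keyed to the exact message shapes above (note that the structured messages escape quotes as \"):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Message shapes from the reconciler_common / operation_generator lines above.
	started := regexp.MustCompile(`UnmountVolume started for volume \\"([^"]+)\\"`)
	torndown := regexp.MustCompile(`UnmountVolume\.TearDown succeeded .* \(OuterVolumeSpecName: "([^"]+)"\)`)

	pending := map[string]int{}
	sc := bufio.NewScanner(os.Stdin) // e.g. pipe kubelet.log in
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
	for sc.Scan() {
		line := sc.Text()
		if m := started.FindStringSubmatch(line); m != nil {
			pending[m[1]]++
		} else if m := torndown.FindStringSubmatch(line); m != nil {
			pending[m[1]]--
		}
	}
	for vol, n := range pending {
		if n != 0 {
			fmt.Printf("volume %q: %d unmount(s) without a matching TearDown\n", vol, n)
		}
	}
}
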
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:03:57 crc kubenswrapper[4606]: I0123 17:03:57.344676 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b78qn\" (UniqueName: \"kubernetes.io/projected/41aa3578-313b-4e05-a97c-778b0ad81093-kube-api-access-b78qn\") on node \"crc\" DevicePath \"\"" Jan 23 17:03:57 crc kubenswrapper[4606]: I0123 17:03:57.344720 4606 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41aa3578-313b-4e05-a97c-778b0ad81093-util\") on node \"crc\" DevicePath \"\"" Jan 23 17:03:57 crc kubenswrapper[4606]: I0123 17:03:57.344737 4606 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41aa3578-313b-4e05-a97c-778b0ad81093-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:03:57 crc kubenswrapper[4606]: I0123 17:03:57.853040 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4w9xm" event={"ID":"41aa3578-313b-4e05-a97c-778b0ad81093","Type":"ContainerDied","Data":"9abfe2296bd3c1a4afdeff8fca1158a17271fab3ed3dd7ad483c3d0f3c2064bf"} Jan 23 17:03:57 crc kubenswrapper[4606]: I0123 17:03:57.853082 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9abfe2296bd3c1a4afdeff8fca1158a17271fab3ed3dd7ad483c3d0f3c2064bf" Jan 23 17:03:57 crc kubenswrapper[4606]: I0123 17:03:57.853097 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4w9xm" Jan 23 17:04:05 crc kubenswrapper[4606]: I0123 17:04:05.535130 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-779944bfcf-5nq4l"] Jan 23 17:04:05 crc kubenswrapper[4606]: E0123 17:04:05.535807 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="767b5dce-2e44-401a-b7af-3cb97ff48f69" containerName="console" Jan 23 17:04:05 crc kubenswrapper[4606]: I0123 17:04:05.535819 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="767b5dce-2e44-401a-b7af-3cb97ff48f69" containerName="console" Jan 23 17:04:05 crc kubenswrapper[4606]: E0123 17:04:05.535829 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41aa3578-313b-4e05-a97c-778b0ad81093" containerName="pull" Jan 23 17:04:05 crc kubenswrapper[4606]: I0123 17:04:05.535835 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="41aa3578-313b-4e05-a97c-778b0ad81093" containerName="pull" Jan 23 17:04:05 crc kubenswrapper[4606]: E0123 17:04:05.535843 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41aa3578-313b-4e05-a97c-778b0ad81093" containerName="extract" Jan 23 17:04:05 crc kubenswrapper[4606]: I0123 17:04:05.535849 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="41aa3578-313b-4e05-a97c-778b0ad81093" containerName="extract" Jan 23 17:04:05 crc kubenswrapper[4606]: E0123 17:04:05.535857 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41aa3578-313b-4e05-a97c-778b0ad81093" containerName="util" Jan 23 17:04:05 crc kubenswrapper[4606]: I0123 17:04:05.535864 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="41aa3578-313b-4e05-a97c-778b0ad81093" containerName="util" Jan 23 17:04:05 crc kubenswrapper[4606]: I0123 17:04:05.535971 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="767b5dce-2e44-401a-b7af-3cb97ff48f69" containerName="console" Jan 
23 17:04:05 crc kubenswrapper[4606]: I0123 17:04:05.535980 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="41aa3578-313b-4e05-a97c-778b0ad81093" containerName="extract" Jan 23 17:04:05 crc kubenswrapper[4606]: I0123 17:04:05.536358 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-779944bfcf-5nq4l" Jan 23 17:04:05 crc kubenswrapper[4606]: I0123 17:04:05.537705 4606 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 23 17:04:05 crc kubenswrapper[4606]: I0123 17:04:05.538218 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 23 17:04:05 crc kubenswrapper[4606]: I0123 17:04:05.538255 4606 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-9lpqn" Jan 23 17:04:05 crc kubenswrapper[4606]: I0123 17:04:05.539199 4606 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 23 17:04:05 crc kubenswrapper[4606]: I0123 17:04:05.540234 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 23 17:04:05 crc kubenswrapper[4606]: I0123 17:04:05.543264 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sghfl\" (UniqueName: \"kubernetes.io/projected/96e5558f-180a-4f7a-aee6-213fb576e016-kube-api-access-sghfl\") pod \"metallb-operator-controller-manager-779944bfcf-5nq4l\" (UID: \"96e5558f-180a-4f7a-aee6-213fb576e016\") " pod="metallb-system/metallb-operator-controller-manager-779944bfcf-5nq4l" Jan 23 17:04:05 crc kubenswrapper[4606]: I0123 17:04:05.543361 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/96e5558f-180a-4f7a-aee6-213fb576e016-apiservice-cert\") pod \"metallb-operator-controller-manager-779944bfcf-5nq4l\" (UID: \"96e5558f-180a-4f7a-aee6-213fb576e016\") " pod="metallb-system/metallb-operator-controller-manager-779944bfcf-5nq4l" Jan 23 17:04:05 crc kubenswrapper[4606]: I0123 17:04:05.543419 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/96e5558f-180a-4f7a-aee6-213fb576e016-webhook-cert\") pod \"metallb-operator-controller-manager-779944bfcf-5nq4l\" (UID: \"96e5558f-180a-4f7a-aee6-213fb576e016\") " pod="metallb-system/metallb-operator-controller-manager-779944bfcf-5nq4l" Jan 23 17:04:05 crc kubenswrapper[4606]: I0123 17:04:05.572502 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-779944bfcf-5nq4l"] Jan 23 17:04:05 crc kubenswrapper[4606]: I0123 17:04:05.644664 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/96e5558f-180a-4f7a-aee6-213fb576e016-apiservice-cert\") pod \"metallb-operator-controller-manager-779944bfcf-5nq4l\" (UID: \"96e5558f-180a-4f7a-aee6-213fb576e016\") " pod="metallb-system/metallb-operator-controller-manager-779944bfcf-5nq4l" Jan 23 17:04:05 crc kubenswrapper[4606]: I0123 17:04:05.644734 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/96e5558f-180a-4f7a-aee6-213fb576e016-webhook-cert\") pod \"metallb-operator-controller-manager-779944bfcf-5nq4l\" (UID: \"96e5558f-180a-4f7a-aee6-213fb576e016\") " pod="metallb-system/metallb-operator-controller-manager-779944bfcf-5nq4l" Jan 23 17:04:05 crc kubenswrapper[4606]: I0123 17:04:05.644766 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sghfl\" (UniqueName: \"kubernetes.io/projected/96e5558f-180a-4f7a-aee6-213fb576e016-kube-api-access-sghfl\") pod \"metallb-operator-controller-manager-779944bfcf-5nq4l\" (UID: \"96e5558f-180a-4f7a-aee6-213fb576e016\") " pod="metallb-system/metallb-operator-controller-manager-779944bfcf-5nq4l" Jan 23 17:04:05 crc kubenswrapper[4606]: I0123 17:04:05.650778 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/96e5558f-180a-4f7a-aee6-213fb576e016-apiservice-cert\") pod \"metallb-operator-controller-manager-779944bfcf-5nq4l\" (UID: \"96e5558f-180a-4f7a-aee6-213fb576e016\") " pod="metallb-system/metallb-operator-controller-manager-779944bfcf-5nq4l" Jan 23 17:04:05 crc kubenswrapper[4606]: I0123 17:04:05.656574 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/96e5558f-180a-4f7a-aee6-213fb576e016-webhook-cert\") pod \"metallb-operator-controller-manager-779944bfcf-5nq4l\" (UID: \"96e5558f-180a-4f7a-aee6-213fb576e016\") " pod="metallb-system/metallb-operator-controller-manager-779944bfcf-5nq4l" Jan 23 17:04:05 crc kubenswrapper[4606]: I0123 17:04:05.677469 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sghfl\" (UniqueName: \"kubernetes.io/projected/96e5558f-180a-4f7a-aee6-213fb576e016-kube-api-access-sghfl\") pod \"metallb-operator-controller-manager-779944bfcf-5nq4l\" (UID: \"96e5558f-180a-4f7a-aee6-213fb576e016\") " pod="metallb-system/metallb-operator-controller-manager-779944bfcf-5nq4l" Jan 23 17:04:05 crc kubenswrapper[4606]: I0123 17:04:05.853925 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-779944bfcf-5nq4l" Jan 23 17:04:05 crc kubenswrapper[4606]: I0123 17:04:05.938806 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5fd8f4cf46-r8plx"] Jan 23 17:04:05 crc kubenswrapper[4606]: I0123 17:04:05.939486 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5fd8f4cf46-r8plx" Jan 23 17:04:05 crc kubenswrapper[4606]: I0123 17:04:05.942044 4606 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 23 17:04:05 crc kubenswrapper[4606]: I0123 17:04:05.942239 4606 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 23 17:04:05 crc kubenswrapper[4606]: I0123 17:04:05.942361 4606 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-gw2ls" Jan 23 17:04:05 crc kubenswrapper[4606]: I0123 17:04:05.947000 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7e92972f-9a8b-45b1-9a65-fb36b7467a66-apiservice-cert\") pod \"metallb-operator-webhook-server-5fd8f4cf46-r8plx\" (UID: \"7e92972f-9a8b-45b1-9a65-fb36b7467a66\") " pod="metallb-system/metallb-operator-webhook-server-5fd8f4cf46-r8plx" Jan 23 17:04:05 crc kubenswrapper[4606]: I0123 17:04:05.947071 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z9zr\" (UniqueName: \"kubernetes.io/projected/7e92972f-9a8b-45b1-9a65-fb36b7467a66-kube-api-access-9z9zr\") pod \"metallb-operator-webhook-server-5fd8f4cf46-r8plx\" (UID: \"7e92972f-9a8b-45b1-9a65-fb36b7467a66\") " pod="metallb-system/metallb-operator-webhook-server-5fd8f4cf46-r8plx" Jan 23 17:04:05 crc kubenswrapper[4606]: I0123 17:04:05.947092 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7e92972f-9a8b-45b1-9a65-fb36b7467a66-webhook-cert\") pod \"metallb-operator-webhook-server-5fd8f4cf46-r8plx\" (UID: \"7e92972f-9a8b-45b1-9a65-fb36b7467a66\") " pod="metallb-system/metallb-operator-webhook-server-5fd8f4cf46-r8plx" Jan 23 17:04:05 crc kubenswrapper[4606]: I0123 17:04:05.959621 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5fd8f4cf46-r8plx"] Jan 23 17:04:06 crc kubenswrapper[4606]: I0123 17:04:06.048576 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7e92972f-9a8b-45b1-9a65-fb36b7467a66-apiservice-cert\") pod \"metallb-operator-webhook-server-5fd8f4cf46-r8plx\" (UID: \"7e92972f-9a8b-45b1-9a65-fb36b7467a66\") " pod="metallb-system/metallb-operator-webhook-server-5fd8f4cf46-r8plx" Jan 23 17:04:06 crc kubenswrapper[4606]: I0123 17:04:06.048966 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z9zr\" (UniqueName: \"kubernetes.io/projected/7e92972f-9a8b-45b1-9a65-fb36b7467a66-kube-api-access-9z9zr\") pod \"metallb-operator-webhook-server-5fd8f4cf46-r8plx\" (UID: \"7e92972f-9a8b-45b1-9a65-fb36b7467a66\") " pod="metallb-system/metallb-operator-webhook-server-5fd8f4cf46-r8plx" Jan 23 17:04:06 crc kubenswrapper[4606]: I0123 17:04:06.048994 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7e92972f-9a8b-45b1-9a65-fb36b7467a66-webhook-cert\") pod \"metallb-operator-webhook-server-5fd8f4cf46-r8plx\" (UID: \"7e92972f-9a8b-45b1-9a65-fb36b7467a66\") " pod="metallb-system/metallb-operator-webhook-server-5fd8f4cf46-r8plx" Jan 23 17:04:06 crc kubenswrapper[4606]: I0123 
17:04:06.054116 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7e92972f-9a8b-45b1-9a65-fb36b7467a66-webhook-cert\") pod \"metallb-operator-webhook-server-5fd8f4cf46-r8plx\" (UID: \"7e92972f-9a8b-45b1-9a65-fb36b7467a66\") " pod="metallb-system/metallb-operator-webhook-server-5fd8f4cf46-r8plx" Jan 23 17:04:06 crc kubenswrapper[4606]: I0123 17:04:06.061492 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7e92972f-9a8b-45b1-9a65-fb36b7467a66-apiservice-cert\") pod \"metallb-operator-webhook-server-5fd8f4cf46-r8plx\" (UID: \"7e92972f-9a8b-45b1-9a65-fb36b7467a66\") " pod="metallb-system/metallb-operator-webhook-server-5fd8f4cf46-r8plx" Jan 23 17:04:06 crc kubenswrapper[4606]: I0123 17:04:06.072721 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z9zr\" (UniqueName: \"kubernetes.io/projected/7e92972f-9a8b-45b1-9a65-fb36b7467a66-kube-api-access-9z9zr\") pod \"metallb-operator-webhook-server-5fd8f4cf46-r8plx\" (UID: \"7e92972f-9a8b-45b1-9a65-fb36b7467a66\") " pod="metallb-system/metallb-operator-webhook-server-5fd8f4cf46-r8plx" Jan 23 17:04:06 crc kubenswrapper[4606]: I0123 17:04:06.252513 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5fd8f4cf46-r8plx" Jan 23 17:04:06 crc kubenswrapper[4606]: I0123 17:04:06.447845 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-779944bfcf-5nq4l"] Jan 23 17:04:06 crc kubenswrapper[4606]: W0123 17:04:06.475255 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96e5558f_180a_4f7a_aee6_213fb576e016.slice/crio-892450083c193d081355014526d1a482add5c947e549ea403e1baa31f46947c9 WatchSource:0}: Error finding container 892450083c193d081355014526d1a482add5c947e549ea403e1baa31f46947c9: Status 404 returned error can't find the container with id 892450083c193d081355014526d1a482add5c947e549ea403e1baa31f46947c9 Jan 23 17:04:06 crc kubenswrapper[4606]: I0123 17:04:06.610551 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5fd8f4cf46-r8plx"] Jan 23 17:04:06 crc kubenswrapper[4606]: W0123 17:04:06.614321 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e92972f_9a8b_45b1_9a65_fb36b7467a66.slice/crio-7dab63c5d89a22a44be31326456d23230c0f7218d7a8bb6922cf837054701b09 WatchSource:0}: Error finding container 7dab63c5d89a22a44be31326456d23230c0f7218d7a8bb6922cf837054701b09: Status 404 returned error can't find the container with id 7dab63c5d89a22a44be31326456d23230c0f7218d7a8bb6922cf837054701b09 Jan 23 17:04:06 crc kubenswrapper[4606]: I0123 17:04:06.898735 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5fd8f4cf46-r8plx" event={"ID":"7e92972f-9a8b-45b1-9a65-fb36b7467a66","Type":"ContainerStarted","Data":"7dab63c5d89a22a44be31326456d23230c0f7218d7a8bb6922cf837054701b09"} Jan 23 17:04:06 crc kubenswrapper[4606]: I0123 17:04:06.900152 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-779944bfcf-5nq4l" 
event={"ID":"96e5558f-180a-4f7a-aee6-213fb576e016","Type":"ContainerStarted","Data":"892450083c193d081355014526d1a482add5c947e549ea403e1baa31f46947c9"} Jan 23 17:04:10 crc kubenswrapper[4606]: I0123 17:04:10.925309 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-779944bfcf-5nq4l" event={"ID":"96e5558f-180a-4f7a-aee6-213fb576e016","Type":"ContainerStarted","Data":"be18ecd4ee2a147eaf3c98e1de22e02afd4d2c51af55b7eb5b320622c5bd659b"} Jan 23 17:04:10 crc kubenswrapper[4606]: I0123 17:04:10.926870 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-779944bfcf-5nq4l" Jan 23 17:04:10 crc kubenswrapper[4606]: I0123 17:04:10.950666 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-779944bfcf-5nq4l" podStartSLOduration=1.9967046069999999 podStartE2EDuration="5.950638024s" podCreationTimestamp="2026-01-23 17:04:05 +0000 UTC" firstStartedPulling="2026-01-23 17:04:06.481755052 +0000 UTC m=+680.481947458" lastFinishedPulling="2026-01-23 17:04:10.435688469 +0000 UTC m=+684.435880875" observedRunningTime="2026-01-23 17:04:10.949009753 +0000 UTC m=+684.949202179" watchObservedRunningTime="2026-01-23 17:04:10.950638024 +0000 UTC m=+684.950830430" Jan 23 17:04:12 crc kubenswrapper[4606]: I0123 17:04:12.942409 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5fd8f4cf46-r8plx" event={"ID":"7e92972f-9a8b-45b1-9a65-fb36b7467a66","Type":"ContainerStarted","Data":"8524e0f2565ce7267724b9f1db16a49ab079a2d583763defb397c2c68dc17956"} Jan 23 17:04:12 crc kubenswrapper[4606]: I0123 17:04:12.964174 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5fd8f4cf46-r8plx" podStartSLOduration=2.323781722 podStartE2EDuration="7.964159138s" podCreationTimestamp="2026-01-23 17:04:05 +0000 UTC" firstStartedPulling="2026-01-23 17:04:06.617140129 +0000 UTC m=+680.617332525" lastFinishedPulling="2026-01-23 17:04:12.257517535 +0000 UTC m=+686.257709941" observedRunningTime="2026-01-23 17:04:12.961833299 +0000 UTC m=+686.962025705" watchObservedRunningTime="2026-01-23 17:04:12.964159138 +0000 UTC m=+686.964351544" Jan 23 17:04:13 crc kubenswrapper[4606]: I0123 17:04:13.947764 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5fd8f4cf46-r8plx" Jan 23 17:04:26 crc kubenswrapper[4606]: I0123 17:04:26.256933 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5fd8f4cf46-r8plx" Jan 23 17:04:45 crc kubenswrapper[4606]: I0123 17:04:45.858198 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-779944bfcf-5nq4l" Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.744644 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-v8pdx"] Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.780268 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-wd7jn"] Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.780719 4606 util.go:30] "No sandbox for pod can be found. 
Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.780719 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-v8pdx"
Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.780911 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wd7jn"
Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.783780 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-wd7jn"]
Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.792774 4606 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.792833 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.793046 4606 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-5dlz9"
Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.793092 4606 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.860868 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-dw82b"]
Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.861864 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-dw82b"
Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.865944 4606 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.866127 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.866233 4606 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.866341 4606 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-v88zl"
Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.881288 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5a200ea3-85d2-4098-b6b9-33def10aaee1-reloader\") pod \"frr-k8s-v8pdx\" (UID: \"5a200ea3-85d2-4098-b6b9-33def10aaee1\") " pod="metallb-system/frr-k8s-v8pdx"
Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.881331 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a200ea3-85d2-4098-b6b9-33def10aaee1-metrics-certs\") pod \"frr-k8s-v8pdx\" (UID: \"5a200ea3-85d2-4098-b6b9-33def10aaee1\") " pod="metallb-system/frr-k8s-v8pdx"
Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.881360 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5a200ea3-85d2-4098-b6b9-33def10aaee1-frr-sockets\") pod \"frr-k8s-v8pdx\" (UID: \"5a200ea3-85d2-4098-b6b9-33def10aaee1\") " pod="metallb-system/frr-k8s-v8pdx"
\"frr-k8s-webhook-server-7df86c4f6c-wd7jn\" (UID: \"00160898-cf32-4d05-8690-3dc95b50ca0e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wd7jn" Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.881416 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-742dl\" (UniqueName: \"kubernetes.io/projected/00160898-cf32-4d05-8690-3dc95b50ca0e-kube-api-access-742dl\") pod \"frr-k8s-webhook-server-7df86c4f6c-wd7jn\" (UID: \"00160898-cf32-4d05-8690-3dc95b50ca0e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wd7jn" Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.881449 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5a200ea3-85d2-4098-b6b9-33def10aaee1-metrics\") pod \"frr-k8s-v8pdx\" (UID: \"5a200ea3-85d2-4098-b6b9-33def10aaee1\") " pod="metallb-system/frr-k8s-v8pdx" Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.881496 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5a200ea3-85d2-4098-b6b9-33def10aaee1-frr-conf\") pod \"frr-k8s-v8pdx\" (UID: \"5a200ea3-85d2-4098-b6b9-33def10aaee1\") " pod="metallb-system/frr-k8s-v8pdx" Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.881514 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5a200ea3-85d2-4098-b6b9-33def10aaee1-frr-startup\") pod \"frr-k8s-v8pdx\" (UID: \"5a200ea3-85d2-4098-b6b9-33def10aaee1\") " pod="metallb-system/frr-k8s-v8pdx" Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.881539 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgfrj\" (UniqueName: \"kubernetes.io/projected/5a200ea3-85d2-4098-b6b9-33def10aaee1-kube-api-access-bgfrj\") pod \"frr-k8s-v8pdx\" (UID: \"5a200ea3-85d2-4098-b6b9-33def10aaee1\") " pod="metallb-system/frr-k8s-v8pdx" Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.892474 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-p2mvr"] Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.893874 4606 util.go:30] "No sandbox for pod can be found. 
Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.893874 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-p2mvr"
Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.899052 4606 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.908891 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-p2mvr"]
Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.982340 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5a200ea3-85d2-4098-b6b9-33def10aaee1-reloader\") pod \"frr-k8s-v8pdx\" (UID: \"5a200ea3-85d2-4098-b6b9-33def10aaee1\") " pod="metallb-system/frr-k8s-v8pdx"
Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.982378 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a200ea3-85d2-4098-b6b9-33def10aaee1-metrics-certs\") pod \"frr-k8s-v8pdx\" (UID: \"5a200ea3-85d2-4098-b6b9-33def10aaee1\") " pod="metallb-system/frr-k8s-v8pdx"
Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.982400 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5a200ea3-85d2-4098-b6b9-33def10aaee1-frr-sockets\") pod \"frr-k8s-v8pdx\" (UID: \"5a200ea3-85d2-4098-b6b9-33def10aaee1\") " pod="metallb-system/frr-k8s-v8pdx"
Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.982424 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5kzj\" (UniqueName: \"kubernetes.io/projected/568d0f37-d19e-472e-bc6c-9f6471b5edd6-kube-api-access-z5kzj\") pod \"speaker-dw82b\" (UID: \"568d0f37-d19e-472e-bc6c-9f6471b5edd6\") " pod="metallb-system/speaker-dw82b"
Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.982444 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/568d0f37-d19e-472e-bc6c-9f6471b5edd6-metrics-certs\") pod \"speaker-dw82b\" (UID: \"568d0f37-d19e-472e-bc6c-9f6471b5edd6\") " pod="metallb-system/speaker-dw82b"
Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.982466 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/00160898-cf32-4d05-8690-3dc95b50ca0e-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-wd7jn\" (UID: \"00160898-cf32-4d05-8690-3dc95b50ca0e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wd7jn"
Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.982483 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-742dl\" (UniqueName: \"kubernetes.io/projected/00160898-cf32-4d05-8690-3dc95b50ca0e-kube-api-access-742dl\") pod \"frr-k8s-webhook-server-7df86c4f6c-wd7jn\" (UID: \"00160898-cf32-4d05-8690-3dc95b50ca0e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wd7jn"
Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.982501 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/568d0f37-d19e-472e-bc6c-9f6471b5edd6-memberlist\") pod \"speaker-dw82b\" (UID: \"568d0f37-d19e-472e-bc6c-9f6471b5edd6\") " pod="metallb-system/speaker-dw82b"
"operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5a200ea3-85d2-4098-b6b9-33def10aaee1-metrics\") pod \"frr-k8s-v8pdx\" (UID: \"5a200ea3-85d2-4098-b6b9-33def10aaee1\") " pod="metallb-system/frr-k8s-v8pdx" Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.982562 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5a200ea3-85d2-4098-b6b9-33def10aaee1-frr-conf\") pod \"frr-k8s-v8pdx\" (UID: \"5a200ea3-85d2-4098-b6b9-33def10aaee1\") " pod="metallb-system/frr-k8s-v8pdx" Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.982576 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5a200ea3-85d2-4098-b6b9-33def10aaee1-frr-startup\") pod \"frr-k8s-v8pdx\" (UID: \"5a200ea3-85d2-4098-b6b9-33def10aaee1\") " pod="metallb-system/frr-k8s-v8pdx" Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.982620 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/568d0f37-d19e-472e-bc6c-9f6471b5edd6-metallb-excludel2\") pod \"speaker-dw82b\" (UID: \"568d0f37-d19e-472e-bc6c-9f6471b5edd6\") " pod="metallb-system/speaker-dw82b" Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.982640 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgfrj\" (UniqueName: \"kubernetes.io/projected/5a200ea3-85d2-4098-b6b9-33def10aaee1-kube-api-access-bgfrj\") pod \"frr-k8s-v8pdx\" (UID: \"5a200ea3-85d2-4098-b6b9-33def10aaee1\") " pod="metallb-system/frr-k8s-v8pdx" Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.982875 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5a200ea3-85d2-4098-b6b9-33def10aaee1-reloader\") pod \"frr-k8s-v8pdx\" (UID: \"5a200ea3-85d2-4098-b6b9-33def10aaee1\") " pod="metallb-system/frr-k8s-v8pdx" Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.983648 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5a200ea3-85d2-4098-b6b9-33def10aaee1-frr-conf\") pod \"frr-k8s-v8pdx\" (UID: \"5a200ea3-85d2-4098-b6b9-33def10aaee1\") " pod="metallb-system/frr-k8s-v8pdx" Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.983384 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5a200ea3-85d2-4098-b6b9-33def10aaee1-frr-sockets\") pod \"frr-k8s-v8pdx\" (UID: \"5a200ea3-85d2-4098-b6b9-33def10aaee1\") " pod="metallb-system/frr-k8s-v8pdx" Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.983948 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5a200ea3-85d2-4098-b6b9-33def10aaee1-metrics\") pod \"frr-k8s-v8pdx\" (UID: \"5a200ea3-85d2-4098-b6b9-33def10aaee1\") " pod="metallb-system/frr-k8s-v8pdx" Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.984547 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5a200ea3-85d2-4098-b6b9-33def10aaee1-frr-startup\") pod \"frr-k8s-v8pdx\" (UID: \"5a200ea3-85d2-4098-b6b9-33def10aaee1\") " pod="metallb-system/frr-k8s-v8pdx" Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.997220 4606 
Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.997220 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/00160898-cf32-4d05-8690-3dc95b50ca0e-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-wd7jn\" (UID: \"00160898-cf32-4d05-8690-3dc95b50ca0e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wd7jn"
Jan 23 17:04:46 crc kubenswrapper[4606]: I0123 17:04:46.997221 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a200ea3-85d2-4098-b6b9-33def10aaee1-metrics-certs\") pod \"frr-k8s-v8pdx\" (UID: \"5a200ea3-85d2-4098-b6b9-33def10aaee1\") " pod="metallb-system/frr-k8s-v8pdx"
Jan 23 17:04:47 crc kubenswrapper[4606]: I0123 17:04:47.000901 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgfrj\" (UniqueName: \"kubernetes.io/projected/5a200ea3-85d2-4098-b6b9-33def10aaee1-kube-api-access-bgfrj\") pod \"frr-k8s-v8pdx\" (UID: \"5a200ea3-85d2-4098-b6b9-33def10aaee1\") " pod="metallb-system/frr-k8s-v8pdx"
Jan 23 17:04:47 crc kubenswrapper[4606]: I0123 17:04:47.001415 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-742dl\" (UniqueName: \"kubernetes.io/projected/00160898-cf32-4d05-8690-3dc95b50ca0e-kube-api-access-742dl\") pod \"frr-k8s-webhook-server-7df86c4f6c-wd7jn\" (UID: \"00160898-cf32-4d05-8690-3dc95b50ca0e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wd7jn"
Jan 23 17:04:47 crc kubenswrapper[4606]: I0123 17:04:47.083545 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5kzj\" (UniqueName: \"kubernetes.io/projected/568d0f37-d19e-472e-bc6c-9f6471b5edd6-kube-api-access-z5kzj\") pod \"speaker-dw82b\" (UID: \"568d0f37-d19e-472e-bc6c-9f6471b5edd6\") " pod="metallb-system/speaker-dw82b"
Jan 23 17:04:47 crc kubenswrapper[4606]: I0123 17:04:47.083598 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/568d0f37-d19e-472e-bc6c-9f6471b5edd6-metrics-certs\") pod \"speaker-dw82b\" (UID: \"568d0f37-d19e-472e-bc6c-9f6471b5edd6\") " pod="metallb-system/speaker-dw82b"
Jan 23 17:04:47 crc kubenswrapper[4606]: I0123 17:04:47.083634 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrprd\" (UniqueName: \"kubernetes.io/projected/23f67545-2df3-4dd2-8a6f-6de84ddf3a20-kube-api-access-mrprd\") pod \"controller-6968d8fdc4-p2mvr\" (UID: \"23f67545-2df3-4dd2-8a6f-6de84ddf3a20\") " pod="metallb-system/controller-6968d8fdc4-p2mvr"
Jan 23 17:04:47 crc kubenswrapper[4606]: I0123 17:04:47.083656 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/23f67545-2df3-4dd2-8a6f-6de84ddf3a20-cert\") pod \"controller-6968d8fdc4-p2mvr\" (UID: \"23f67545-2df3-4dd2-8a6f-6de84ddf3a20\") " pod="metallb-system/controller-6968d8fdc4-p2mvr"
Jan 23 17:04:47 crc kubenswrapper[4606]: I0123 17:04:47.083670 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23f67545-2df3-4dd2-8a6f-6de84ddf3a20-metrics-certs\") pod \"controller-6968d8fdc4-p2mvr\" (UID: \"23f67545-2df3-4dd2-8a6f-6de84ddf3a20\") " pod="metallb-system/controller-6968d8fdc4-p2mvr"
volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/568d0f37-d19e-472e-bc6c-9f6471b5edd6-memberlist\") pod \"speaker-dw82b\" (UID: \"568d0f37-d19e-472e-bc6c-9f6471b5edd6\") " pod="metallb-system/speaker-dw82b" Jan 23 17:04:47 crc kubenswrapper[4606]: I0123 17:04:47.083742 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/568d0f37-d19e-472e-bc6c-9f6471b5edd6-metallb-excludel2\") pod \"speaker-dw82b\" (UID: \"568d0f37-d19e-472e-bc6c-9f6471b5edd6\") " pod="metallb-system/speaker-dw82b" Jan 23 17:04:47 crc kubenswrapper[4606]: I0123 17:04:47.084362 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/568d0f37-d19e-472e-bc6c-9f6471b5edd6-metallb-excludel2\") pod \"speaker-dw82b\" (UID: \"568d0f37-d19e-472e-bc6c-9f6471b5edd6\") " pod="metallb-system/speaker-dw82b" Jan 23 17:04:47 crc kubenswrapper[4606]: E0123 17:04:47.084863 4606 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 23 17:04:47 crc kubenswrapper[4606]: E0123 17:04:47.084939 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/568d0f37-d19e-472e-bc6c-9f6471b5edd6-memberlist podName:568d0f37-d19e-472e-bc6c-9f6471b5edd6 nodeName:}" failed. No retries permitted until 2026-01-23 17:04:47.584920414 +0000 UTC m=+721.585112820 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/568d0f37-d19e-472e-bc6c-9f6471b5edd6-memberlist") pod "speaker-dw82b" (UID: "568d0f37-d19e-472e-bc6c-9f6471b5edd6") : secret "metallb-memberlist" not found Jan 23 17:04:47 crc kubenswrapper[4606]: I0123 17:04:47.087492 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/568d0f37-d19e-472e-bc6c-9f6471b5edd6-metrics-certs\") pod \"speaker-dw82b\" (UID: \"568d0f37-d19e-472e-bc6c-9f6471b5edd6\") " pod="metallb-system/speaker-dw82b" Jan 23 17:04:47 crc kubenswrapper[4606]: I0123 17:04:47.102730 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-v8pdx" Jan 23 17:04:47 crc kubenswrapper[4606]: I0123 17:04:47.105273 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5kzj\" (UniqueName: \"kubernetes.io/projected/568d0f37-d19e-472e-bc6c-9f6471b5edd6-kube-api-access-z5kzj\") pod \"speaker-dw82b\" (UID: \"568d0f37-d19e-472e-bc6c-9f6471b5edd6\") " pod="metallb-system/speaker-dw82b" Jan 23 17:04:47 crc kubenswrapper[4606]: I0123 17:04:47.112158 4606 util.go:30] "No sandbox for pod can be found. 
Jan 23 17:04:47 crc kubenswrapper[4606]: I0123 17:04:47.112158 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wd7jn"
Jan 23 17:04:47 crc kubenswrapper[4606]: I0123 17:04:47.185257 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrprd\" (UniqueName: \"kubernetes.io/projected/23f67545-2df3-4dd2-8a6f-6de84ddf3a20-kube-api-access-mrprd\") pod \"controller-6968d8fdc4-p2mvr\" (UID: \"23f67545-2df3-4dd2-8a6f-6de84ddf3a20\") " pod="metallb-system/controller-6968d8fdc4-p2mvr"
Jan 23 17:04:47 crc kubenswrapper[4606]: I0123 17:04:47.185291 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/23f67545-2df3-4dd2-8a6f-6de84ddf3a20-cert\") pod \"controller-6968d8fdc4-p2mvr\" (UID: \"23f67545-2df3-4dd2-8a6f-6de84ddf3a20\") " pod="metallb-system/controller-6968d8fdc4-p2mvr"
Jan 23 17:04:47 crc kubenswrapper[4606]: I0123 17:04:47.185309 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23f67545-2df3-4dd2-8a6f-6de84ddf3a20-metrics-certs\") pod \"controller-6968d8fdc4-p2mvr\" (UID: \"23f67545-2df3-4dd2-8a6f-6de84ddf3a20\") " pod="metallb-system/controller-6968d8fdc4-p2mvr"
Jan 23 17:04:47 crc kubenswrapper[4606]: I0123 17:04:47.188980 4606 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 23 17:04:47 crc kubenswrapper[4606]: I0123 17:04:47.190294 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23f67545-2df3-4dd2-8a6f-6de84ddf3a20-metrics-certs\") pod \"controller-6968d8fdc4-p2mvr\" (UID: \"23f67545-2df3-4dd2-8a6f-6de84ddf3a20\") " pod="metallb-system/controller-6968d8fdc4-p2mvr"
Jan 23 17:04:47 crc kubenswrapper[4606]: I0123 17:04:47.199500 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/23f67545-2df3-4dd2-8a6f-6de84ddf3a20-cert\") pod \"controller-6968d8fdc4-p2mvr\" (UID: \"23f67545-2df3-4dd2-8a6f-6de84ddf3a20\") " pod="metallb-system/controller-6968d8fdc4-p2mvr"
Jan 23 17:04:47 crc kubenswrapper[4606]: I0123 17:04:47.206143 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrprd\" (UniqueName: \"kubernetes.io/projected/23f67545-2df3-4dd2-8a6f-6de84ddf3a20-kube-api-access-mrprd\") pod \"controller-6968d8fdc4-p2mvr\" (UID: \"23f67545-2df3-4dd2-8a6f-6de84ddf3a20\") " pod="metallb-system/controller-6968d8fdc4-p2mvr"
Jan 23 17:04:47 crc kubenswrapper[4606]: I0123 17:04:47.209188 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-p2mvr"
Jan 23 17:04:47 crc kubenswrapper[4606]: I0123 17:04:47.355455 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-wd7jn"]
Jan 23 17:04:47 crc kubenswrapper[4606]: W0123 17:04:47.363815 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00160898_cf32_4d05_8690_3dc95b50ca0e.slice/crio-562845510c11e6f0937b6e05931fbd503cfa0a07140759c23af12a2139082b93 WatchSource:0}: Error finding container 562845510c11e6f0937b6e05931fbd503cfa0a07140759c23af12a2139082b93: Status 404 returned error can't find the container with id 562845510c11e6f0937b6e05931fbd503cfa0a07140759c23af12a2139082b93
Jan 23 17:04:47 crc kubenswrapper[4606]: I0123 17:04:47.589493 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/568d0f37-d19e-472e-bc6c-9f6471b5edd6-memberlist\") pod \"speaker-dw82b\" (UID: \"568d0f37-d19e-472e-bc6c-9f6471b5edd6\") " pod="metallb-system/speaker-dw82b"
Jan 23 17:04:47 crc kubenswrapper[4606]: E0123 17:04:47.589684 4606 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 23 17:04:47 crc kubenswrapper[4606]: E0123 17:04:47.590133 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/568d0f37-d19e-472e-bc6c-9f6471b5edd6-memberlist podName:568d0f37-d19e-472e-bc6c-9f6471b5edd6 nodeName:}" failed. No retries permitted until 2026-01-23 17:04:48.590115684 +0000 UTC m=+722.590308090 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/568d0f37-d19e-472e-bc6c-9f6471b5edd6-memberlist") pod "speaker-dw82b" (UID: "568d0f37-d19e-472e-bc6c-9f6471b5edd6") : secret "metallb-memberlist" not found
Jan 23 17:04:47 crc kubenswrapper[4606]: I0123 17:04:47.630959 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-p2mvr"]
Jan 23 17:04:47 crc kubenswrapper[4606]: W0123 17:04:47.636476 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23f67545_2df3_4dd2_8a6f_6de84ddf3a20.slice/crio-66f07c2946caabda43e5c1e9394f2dc1cb5cdedaf37a04b6e4c8a527e1a82e2f WatchSource:0}: Error finding container 66f07c2946caabda43e5c1e9394f2dc1cb5cdedaf37a04b6e4c8a527e1a82e2f: Status 404 returned error can't find the container with id 66f07c2946caabda43e5c1e9394f2dc1cb5cdedaf37a04b6e4c8a527e1a82e2f
Jan 23 17:04:48 crc kubenswrapper[4606]: I0123 17:04:48.139149 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-v8pdx" event={"ID":"5a200ea3-85d2-4098-b6b9-33def10aaee1","Type":"ContainerStarted","Data":"85ee4162dfe308a2333ae292e19a081c7cf64bfed0d66a68ea3a77de4ad8c925"}
Jan 23 17:04:48 crc kubenswrapper[4606]: I0123 17:04:48.141216 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-p2mvr" event={"ID":"23f67545-2df3-4dd2-8a6f-6de84ddf3a20","Type":"ContainerStarted","Data":"7834cea13c24c25d97cf3111676c5d871eb4956c82d2a6e5213f90a1b4049df9"}
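
Note the retry spacing for the failed memberlist mount: the first failure (17:04:47.084) schedules durationBeforeRetry 500ms, the second (17:04:47.589) schedules 1s. That is the exponential backoff the kubelet's volume manager applies to repeatedly failing operations. A sketch of the doubling schedule; the 500ms initial delay and factor of 2 are read straight off the two log entries, while the ~2m2s cap is our assumption about the upstream default rather than something this log shows:

package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond          // durationBeforeRetry on the first failure
	const factor = 2                         // second failure logs durationBeforeRetry 1s
	maxDelay := 2*time.Minute + 2*time.Second // assumed cap, not visible in this log

	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d: wait %v before retrying MountVolume.SetUp\n", attempt, delay)
		delay *= factor
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
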
event={"ID":"23f67545-2df3-4dd2-8a6f-6de84ddf3a20","Type":"ContainerStarted","Data":"6c80c2e1d6f08108eab7ca79d03fbbef435057e72662b11ee30add9948b0c5a1"} Jan 23 17:04:48 crc kubenswrapper[4606]: I0123 17:04:48.141282 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-p2mvr" event={"ID":"23f67545-2df3-4dd2-8a6f-6de84ddf3a20","Type":"ContainerStarted","Data":"66f07c2946caabda43e5c1e9394f2dc1cb5cdedaf37a04b6e4c8a527e1a82e2f"} Jan 23 17:04:48 crc kubenswrapper[4606]: I0123 17:04:48.141764 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-p2mvr" Jan 23 17:04:48 crc kubenswrapper[4606]: I0123 17:04:48.143005 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wd7jn" event={"ID":"00160898-cf32-4d05-8690-3dc95b50ca0e","Type":"ContainerStarted","Data":"562845510c11e6f0937b6e05931fbd503cfa0a07140759c23af12a2139082b93"} Jan 23 17:04:48 crc kubenswrapper[4606]: I0123 17:04:48.168721 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-p2mvr" podStartSLOduration=2.168699123 podStartE2EDuration="2.168699123s" podCreationTimestamp="2026-01-23 17:04:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:04:48.166561089 +0000 UTC m=+722.166753525" watchObservedRunningTime="2026-01-23 17:04:48.168699123 +0000 UTC m=+722.168891569" Jan 23 17:04:48 crc kubenswrapper[4606]: I0123 17:04:48.603115 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/568d0f37-d19e-472e-bc6c-9f6471b5edd6-memberlist\") pod \"speaker-dw82b\" (UID: \"568d0f37-d19e-472e-bc6c-9f6471b5edd6\") " pod="metallb-system/speaker-dw82b" Jan 23 17:04:48 crc kubenswrapper[4606]: I0123 17:04:48.623278 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/568d0f37-d19e-472e-bc6c-9f6471b5edd6-memberlist\") pod \"speaker-dw82b\" (UID: \"568d0f37-d19e-472e-bc6c-9f6471b5edd6\") " pod="metallb-system/speaker-dw82b" Jan 23 17:04:48 crc kubenswrapper[4606]: I0123 17:04:48.679940 4606 util.go:30] "No sandbox for pod can be found. 
Jan 23 17:04:48 crc kubenswrapper[4606]: I0123 17:04:48.679940 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-dw82b"
Jan 23 17:04:49 crc kubenswrapper[4606]: I0123 17:04:49.153800 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-dw82b" event={"ID":"568d0f37-d19e-472e-bc6c-9f6471b5edd6","Type":"ContainerStarted","Data":"ae6059e2d1617b165bdb3f169281488242bf3f56c7eb8504573e38428e8df4bb"}
Jan 23 17:04:49 crc kubenswrapper[4606]: I0123 17:04:49.154124 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-dw82b" event={"ID":"568d0f37-d19e-472e-bc6c-9f6471b5edd6","Type":"ContainerStarted","Data":"4b3ca1cd4ad2777e4a3c8e128242c68355da8a72460358caddd1a649de7af037"}
Jan 23 17:04:50 crc kubenswrapper[4606]: I0123 17:04:50.167349 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-dw82b" event={"ID":"568d0f37-d19e-472e-bc6c-9f6471b5edd6","Type":"ContainerStarted","Data":"47dd858b3fe35fc92ad28a35f8e61769d5fbe4db29b60d43743451fd4bc4674b"}
Jan 23 17:04:50 crc kubenswrapper[4606]: I0123 17:04:50.167596 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-dw82b"
Jan 23 17:04:56 crc kubenswrapper[4606]: I0123 17:04:56.208449 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wd7jn" event={"ID":"00160898-cf32-4d05-8690-3dc95b50ca0e","Type":"ContainerStarted","Data":"8dacb99337a54d2a4c76f8badf9423ca0ae850bc366600cca4ffdacafad762ab"}
Jan 23 17:04:56 crc kubenswrapper[4606]: I0123 17:04:56.208966 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wd7jn"
Jan 23 17:04:56 crc kubenswrapper[4606]: I0123 17:04:56.209974 4606 generic.go:334] "Generic (PLEG): container finished" podID="5a200ea3-85d2-4098-b6b9-33def10aaee1" containerID="cc14a9f10a5dac80de658065ab426f81b56ebda1f1c85aeab2198b5e45abec32" exitCode=0
Jan 23 17:04:56 crc kubenswrapper[4606]: I0123 17:04:56.210003 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-v8pdx" event={"ID":"5a200ea3-85d2-4098-b6b9-33def10aaee1","Type":"ContainerDied","Data":"cc14a9f10a5dac80de658065ab426f81b56ebda1f1c85aeab2198b5e45abec32"}
Jan 23 17:04:56 crc kubenswrapper[4606]: I0123 17:04:56.228647 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-dw82b" podStartSLOduration=10.228602115 podStartE2EDuration="10.228602115s" podCreationTimestamp="2026-01-23 17:04:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:04:50.222806672 +0000 UTC m=+724.222999088" watchObservedRunningTime="2026-01-23 17:04:56.228602115 +0000 UTC m=+730.228794531"
Jan 23 17:04:56 crc kubenswrapper[4606]: I0123 17:04:56.229020 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wd7jn" podStartSLOduration=2.541619351 podStartE2EDuration="10.229012833s" podCreationTimestamp="2026-01-23 17:04:46 +0000 UTC" firstStartedPulling="2026-01-23 17:04:47.365642744 +0000 UTC m=+721.365835150" lastFinishedPulling="2026-01-23 17:04:55.053036206 +0000 UTC m=+729.053228632" observedRunningTime="2026-01-23 17:04:56.225494002 +0000 UTC m=+730.225686418" watchObservedRunningTime="2026-01-23 17:04:56.229012833 +0000 UTC m=+730.229205239"
podID="5a200ea3-85d2-4098-b6b9-33def10aaee1" containerID="f7cfa998b21f6f093f195e2e8fb6fdeec82d1a1f5174c5f6b899a247148cdaca" exitCode=0 Jan 23 17:04:57 crc kubenswrapper[4606]: I0123 17:04:57.219037 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-v8pdx" event={"ID":"5a200ea3-85d2-4098-b6b9-33def10aaee1","Type":"ContainerDied","Data":"f7cfa998b21f6f093f195e2e8fb6fdeec82d1a1f5174c5f6b899a247148cdaca"} Jan 23 17:04:57 crc kubenswrapper[4606]: I0123 17:04:57.225028 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-p2mvr" Jan 23 17:04:58 crc kubenswrapper[4606]: I0123 17:04:58.227149 4606 generic.go:334] "Generic (PLEG): container finished" podID="5a200ea3-85d2-4098-b6b9-33def10aaee1" containerID="d93db5a738728db7f277976e25e935c616bd42a9bd0fbd14f33cd1c9b3dbf81b" exitCode=0 Jan 23 17:04:58 crc kubenswrapper[4606]: I0123 17:04:58.227209 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-v8pdx" event={"ID":"5a200ea3-85d2-4098-b6b9-33def10aaee1","Type":"ContainerDied","Data":"d93db5a738728db7f277976e25e935c616bd42a9bd0fbd14f33cd1c9b3dbf81b"} Jan 23 17:04:58 crc kubenswrapper[4606]: I0123 17:04:58.686219 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-dw82b" Jan 23 17:04:59 crc kubenswrapper[4606]: I0123 17:04:59.235942 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-v8pdx" event={"ID":"5a200ea3-85d2-4098-b6b9-33def10aaee1","Type":"ContainerStarted","Data":"199882dc4e38de649f734b3a87fd274f0a1ae216e69469e09fc44a5f39384e5c"} Jan 23 17:04:59 crc kubenswrapper[4606]: I0123 17:04:59.235980 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-v8pdx" event={"ID":"5a200ea3-85d2-4098-b6b9-33def10aaee1","Type":"ContainerStarted","Data":"a3b69a3c81b4d3f722871f70eba1faf877c3216f18528cbdc30161476d2a193a"} Jan 23 17:04:59 crc kubenswrapper[4606]: I0123 17:04:59.235992 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-v8pdx" event={"ID":"5a200ea3-85d2-4098-b6b9-33def10aaee1","Type":"ContainerStarted","Data":"af06d06032d1b22a6291151510c27525a566f65889f20cee6457d3c9fb7e9717"} Jan 23 17:04:59 crc kubenswrapper[4606]: I0123 17:04:59.236001 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-v8pdx" event={"ID":"5a200ea3-85d2-4098-b6b9-33def10aaee1","Type":"ContainerStarted","Data":"1630d3b45efbb893286256ee927f7c886372aa3bbc5697c18564a3f22d78e7f3"} Jan 23 17:04:59 crc kubenswrapper[4606]: I0123 17:04:59.236010 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-v8pdx" event={"ID":"5a200ea3-85d2-4098-b6b9-33def10aaee1","Type":"ContainerStarted","Data":"ce1e1a60c70e0397e229e7c48043838435fa603a0416e10eac7e36c528a49d2a"} Jan 23 17:05:00 crc kubenswrapper[4606]: I0123 17:05:00.244787 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-v8pdx" event={"ID":"5a200ea3-85d2-4098-b6b9-33def10aaee1","Type":"ContainerStarted","Data":"10b712ed3824f72714be1f00604703137b4100fe4348e6ce91659ea330fa82b5"} Jan 23 17:05:00 crc kubenswrapper[4606]: I0123 17:05:00.245128 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-v8pdx" Jan 23 17:05:00 crc kubenswrapper[4606]: I0123 17:05:00.271368 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-v8pdx" podStartSLOduration=6.45943923 
Jan 23 17:05:00 crc kubenswrapper[4606]: I0123 17:05:00.271368 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-v8pdx" podStartSLOduration=6.45943923 podStartE2EDuration="14.271344796s" podCreationTimestamp="2026-01-23 17:04:46 +0000 UTC" firstStartedPulling="2026-01-23 17:04:47.25938556 +0000 UTC m=+721.259577956" lastFinishedPulling="2026-01-23 17:04:55.071291106 +0000 UTC m=+729.071483522" observedRunningTime="2026-01-23 17:05:00.26512851 +0000 UTC m=+734.265320956" watchObservedRunningTime="2026-01-23 17:05:00.271344796 +0000 UTC m=+734.271537222"
Jan 23 17:05:02 crc kubenswrapper[4606]: I0123 17:05:02.103507 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-v8pdx"
Jan 23 17:05:02 crc kubenswrapper[4606]: I0123 17:05:02.137859 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-f8j4h"]
Jan 23 17:05:02 crc kubenswrapper[4606]: I0123 17:05:02.138554 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-f8j4h"
Jan 23 17:05:02 crc kubenswrapper[4606]: I0123 17:05:02.140447 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-wfjfp"
Jan 23 17:05:02 crc kubenswrapper[4606]: I0123 17:05:02.140461 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Jan 23 17:05:02 crc kubenswrapper[4606]: I0123 17:05:02.145140 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Jan 23 17:05:02 crc kubenswrapper[4606]: I0123 17:05:02.157398 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-f8j4h"]
Jan 23 17:05:02 crc kubenswrapper[4606]: I0123 17:05:02.197797 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-v8pdx"
Jan 23 17:05:02 crc kubenswrapper[4606]: I0123 17:05:02.287050 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2mqt\" (UniqueName: \"kubernetes.io/projected/6868bbb9-5be0-4c25-8da6-33ba8d06eb34-kube-api-access-k2mqt\") pod \"openstack-operator-index-f8j4h\" (UID: \"6868bbb9-5be0-4c25-8da6-33ba8d06eb34\") " pod="openstack-operators/openstack-operator-index-f8j4h"
Jan 23 17:05:02 crc kubenswrapper[4606]: I0123 17:05:02.387943 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2mqt\" (UniqueName: \"kubernetes.io/projected/6868bbb9-5be0-4c25-8da6-33ba8d06eb34-kube-api-access-k2mqt\") pod \"openstack-operator-index-f8j4h\" (UID: \"6868bbb9-5be0-4c25-8da6-33ba8d06eb34\") " pod="openstack-operators/openstack-operator-index-f8j4h"
Jan 23 17:05:02 crc kubenswrapper[4606]: I0123 17:05:02.406790 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2mqt\" (UniqueName: \"kubernetes.io/projected/6868bbb9-5be0-4c25-8da6-33ba8d06eb34-kube-api-access-k2mqt\") pod \"openstack-operator-index-f8j4h\" (UID: \"6868bbb9-5be0-4c25-8da6-33ba8d06eb34\") " pod="openstack-operators/openstack-operator-index-f8j4h"
Jan 23 17:05:02 crc kubenswrapper[4606]: I0123 17:05:02.458823 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-f8j4h"
Jan 23 17:05:02 crc kubenswrapper[4606]: I0123 17:05:02.874655 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-f8j4h"]
Jan 23 17:05:03 crc kubenswrapper[4606]: I0123 17:05:03.267266 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-f8j4h" event={"ID":"6868bbb9-5be0-4c25-8da6-33ba8d06eb34","Type":"ContainerStarted","Data":"e0aa7ed0cd8c24af3e1eb0de17ea9ce409cd4e7bcec1e5d06f15604982f33857"}
Jan 23 17:05:05 crc kubenswrapper[4606]: I0123 17:05:05.281189 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-f8j4h" event={"ID":"6868bbb9-5be0-4c25-8da6-33ba8d06eb34","Type":"ContainerStarted","Data":"5a30009adb9148227e31360d32773a2b268f92f7b6f2e550827b73acce5bdd19"}
Jan 23 17:05:05 crc kubenswrapper[4606]: I0123 17:05:05.303819 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-f8j4h" podStartSLOduration=1.340214236 podStartE2EDuration="3.303798719s" podCreationTimestamp="2026-01-23 17:05:02 +0000 UTC" firstStartedPulling="2026-01-23 17:05:02.885302324 +0000 UTC m=+736.885494730" lastFinishedPulling="2026-01-23 17:05:04.848886807 +0000 UTC m=+738.849079213" observedRunningTime="2026-01-23 17:05:05.300288137 +0000 UTC m=+739.300480553" watchObservedRunningTime="2026-01-23 17:05:05.303798719 +0000 UTC m=+739.303991135"
Jan 23 17:05:05 crc kubenswrapper[4606]: I0123 17:05:05.508228 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-f8j4h"]
Jan 23 17:05:06 crc kubenswrapper[4606]: I0123 17:05:06.109653 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-tmgzc"]
Jan 23 17:05:06 crc kubenswrapper[4606]: I0123 17:05:06.110404 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-tmgzc"
Jan 23 17:05:06 crc kubenswrapper[4606]: I0123 17:05:06.134364 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-tmgzc"]
Jan 23 17:05:06 crc kubenswrapper[4606]: I0123 17:05:06.239792 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz4f4\" (UniqueName: \"kubernetes.io/projected/9ebd0588-56a8-4c9b-a902-5e1cdbac74c1-kube-api-access-wz4f4\") pod \"openstack-operator-index-tmgzc\" (UID: \"9ebd0588-56a8-4c9b-a902-5e1cdbac74c1\") " pod="openstack-operators/openstack-operator-index-tmgzc"
Jan 23 17:05:06 crc kubenswrapper[4606]: I0123 17:05:06.341274 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz4f4\" (UniqueName: \"kubernetes.io/projected/9ebd0588-56a8-4c9b-a902-5e1cdbac74c1-kube-api-access-wz4f4\") pod \"openstack-operator-index-tmgzc\" (UID: \"9ebd0588-56a8-4c9b-a902-5e1cdbac74c1\") " pod="openstack-operators/openstack-operator-index-tmgzc"
Jan 23 17:05:06 crc kubenswrapper[4606]: I0123 17:05:06.358841 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz4f4\" (UniqueName: \"kubernetes.io/projected/9ebd0588-56a8-4c9b-a902-5e1cdbac74c1-kube-api-access-wz4f4\") pod \"openstack-operator-index-tmgzc\" (UID: \"9ebd0588-56a8-4c9b-a902-5e1cdbac74c1\") " pod="openstack-operators/openstack-operator-index-tmgzc"
Jan 23 17:05:06 crc kubenswrapper[4606]: I0123 17:05:06.430114 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-tmgzc"
Jan 23 17:05:06 crc kubenswrapper[4606]: I0123 17:05:06.857020 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-tmgzc"]
Jan 23 17:05:06 crc kubenswrapper[4606]: W0123 17:05:06.872210 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ebd0588_56a8_4c9b_a902_5e1cdbac74c1.slice/crio-b6de41d6103fd6ab3480c35d9609cd8eeefebee40441a82d0b2d32686bd902ab WatchSource:0}: Error finding container b6de41d6103fd6ab3480c35d9609cd8eeefebee40441a82d0b2d32686bd902ab: Status 404 returned error can't find the container with id b6de41d6103fd6ab3480c35d9609cd8eeefebee40441a82d0b2d32686bd902ab
Jan 23 17:05:07 crc kubenswrapper[4606]: I0123 17:05:07.122010 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wd7jn"
Jan 23 17:05:07 crc kubenswrapper[4606]: I0123 17:05:07.292329 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tmgzc" event={"ID":"9ebd0588-56a8-4c9b-a902-5e1cdbac74c1","Type":"ContainerStarted","Data":"64163625c7300c11982cca2ef87cc625dca1e9d70de9c888701a2010bbbbcbdc"}
Jan 23 17:05:07 crc kubenswrapper[4606]: I0123 17:05:07.292404 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tmgzc" event={"ID":"9ebd0588-56a8-4c9b-a902-5e1cdbac74c1","Type":"ContainerStarted","Data":"b6de41d6103fd6ab3480c35d9609cd8eeefebee40441a82d0b2d32686bd902ab"}
containerName="registry-server" containerID="cri-o://5a30009adb9148227e31360d32773a2b268f92f7b6f2e550827b73acce5bdd19" gracePeriod=2 Jan 23 17:05:07 crc kubenswrapper[4606]: I0123 17:05:07.340936 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-tmgzc" podStartSLOduration=1.167785944 podStartE2EDuration="1.340916133s" podCreationTimestamp="2026-01-23 17:05:06 +0000 UTC" firstStartedPulling="2026-01-23 17:05:06.87851326 +0000 UTC m=+740.878705666" lastFinishedPulling="2026-01-23 17:05:07.051643449 +0000 UTC m=+741.051835855" observedRunningTime="2026-01-23 17:05:07.326781857 +0000 UTC m=+741.326974263" watchObservedRunningTime="2026-01-23 17:05:07.340916133 +0000 UTC m=+741.341108539" Jan 23 17:05:07 crc kubenswrapper[4606]: I0123 17:05:07.657858 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-f8j4h" Jan 23 17:05:07 crc kubenswrapper[4606]: I0123 17:05:07.762412 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2mqt\" (UniqueName: \"kubernetes.io/projected/6868bbb9-5be0-4c25-8da6-33ba8d06eb34-kube-api-access-k2mqt\") pod \"6868bbb9-5be0-4c25-8da6-33ba8d06eb34\" (UID: \"6868bbb9-5be0-4c25-8da6-33ba8d06eb34\") " Jan 23 17:05:07 crc kubenswrapper[4606]: I0123 17:05:07.766764 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6868bbb9-5be0-4c25-8da6-33ba8d06eb34-kube-api-access-k2mqt" (OuterVolumeSpecName: "kube-api-access-k2mqt") pod "6868bbb9-5be0-4c25-8da6-33ba8d06eb34" (UID: "6868bbb9-5be0-4c25-8da6-33ba8d06eb34"). InnerVolumeSpecName "kube-api-access-k2mqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:05:07 crc kubenswrapper[4606]: I0123 17:05:07.864859 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2mqt\" (UniqueName: \"kubernetes.io/projected/6868bbb9-5be0-4c25-8da6-33ba8d06eb34-kube-api-access-k2mqt\") on node \"crc\" DevicePath \"\"" Jan 23 17:05:08 crc kubenswrapper[4606]: I0123 17:05:08.299323 4606 generic.go:334] "Generic (PLEG): container finished" podID="6868bbb9-5be0-4c25-8da6-33ba8d06eb34" containerID="5a30009adb9148227e31360d32773a2b268f92f7b6f2e550827b73acce5bdd19" exitCode=0 Jan 23 17:05:08 crc kubenswrapper[4606]: I0123 17:05:08.299970 4606 util.go:48] "No ready sandbox for pod can be found. 
Jan 23 17:05:08 crc kubenswrapper[4606]: I0123 17:05:08.299970 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-f8j4h"
Jan 23 17:05:08 crc kubenswrapper[4606]: I0123 17:05:08.302698 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-f8j4h" event={"ID":"6868bbb9-5be0-4c25-8da6-33ba8d06eb34","Type":"ContainerDied","Data":"5a30009adb9148227e31360d32773a2b268f92f7b6f2e550827b73acce5bdd19"}
Jan 23 17:05:08 crc kubenswrapper[4606]: I0123 17:05:08.302740 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-f8j4h" event={"ID":"6868bbb9-5be0-4c25-8da6-33ba8d06eb34","Type":"ContainerDied","Data":"e0aa7ed0cd8c24af3e1eb0de17ea9ce409cd4e7bcec1e5d06f15604982f33857"}
Jan 23 17:05:08 crc kubenswrapper[4606]: I0123 17:05:08.302760 4606 scope.go:117] "RemoveContainer" containerID="5a30009adb9148227e31360d32773a2b268f92f7b6f2e550827b73acce5bdd19"
Jan 23 17:05:08 crc kubenswrapper[4606]: I0123 17:05:08.322168 4606 scope.go:117] "RemoveContainer" containerID="5a30009adb9148227e31360d32773a2b268f92f7b6f2e550827b73acce5bdd19"
Jan 23 17:05:08 crc kubenswrapper[4606]: E0123 17:05:08.322723 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a30009adb9148227e31360d32773a2b268f92f7b6f2e550827b73acce5bdd19\": container with ID starting with 5a30009adb9148227e31360d32773a2b268f92f7b6f2e550827b73acce5bdd19 not found: ID does not exist" containerID="5a30009adb9148227e31360d32773a2b268f92f7b6f2e550827b73acce5bdd19"
Jan 23 17:05:08 crc kubenswrapper[4606]: I0123 17:05:08.322752 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a30009adb9148227e31360d32773a2b268f92f7b6f2e550827b73acce5bdd19"} err="failed to get container status \"5a30009adb9148227e31360d32773a2b268f92f7b6f2e550827b73acce5bdd19\": rpc error: code = NotFound desc = could not find container \"5a30009adb9148227e31360d32773a2b268f92f7b6f2e550827b73acce5bdd19\": container with ID starting with 5a30009adb9148227e31360d32773a2b268f92f7b6f2e550827b73acce5bdd19 not found: ID does not exist"
Jan 23 17:05:08 crc kubenswrapper[4606]: I0123 17:05:08.334549 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-f8j4h"]
Jan 23 17:05:08 crc kubenswrapper[4606]: I0123 17:05:08.339405 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-f8j4h"]
Jan 23 17:05:08 crc kubenswrapper[4606]: I0123 17:05:08.376696 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6868bbb9-5be0-4c25-8da6-33ba8d06eb34" path="/var/lib/kubelet/pods/6868bbb9-5be0-4c25-8da6-33ba8d06eb34/volumes"
Jan 23 17:05:16 crc kubenswrapper[4606]: I0123 17:05:16.431039 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-tmgzc"
Jan 23 17:05:16 crc kubenswrapper[4606]: I0123 17:05:16.431769 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-tmgzc"
Jan 23 17:05:16 crc kubenswrapper[4606]: I0123 17:05:16.464818 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-tmgzc"
Jan 23 17:05:17 crc kubenswrapper[4606]: I0123 17:05:17.106673 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-v8pdx"
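
The E-level pair above is harmless: the registry-server container was already gone by the time the second RemoveContainer ran, so the CRI returned gRPC NotFound and the kubelet logged it and moved on to "SyncLoop REMOVE". A sketch of that cleanup idiom against any gRPC API (not kubelet's actual code; removeContainer here is a hypothetical stand-in for a CRI call):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer stands in for a CRI RemoveContainer call that
// races with the runtime's own cleanup of the container.
func removeContainer(id string) error {
	return status.Error(codes.NotFound, "could not find container "+id)
}

func main() {
	err := removeContainer("5a30009adb91")
	if status.Code(err) == codes.NotFound {
		// Already deleted: treat as success, which is why the kubelet proceeds
		// despite the "DeleteContainer returned error" entry above.
		err = nil
	}
	fmt.Println("cleanup finished, err =", err)
}
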
Jan 23 17:05:17 crc kubenswrapper[4606]: I0123 17:05:17.387502 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-tmgzc"
Jan 23 17:05:23 crc kubenswrapper[4606]: I0123 17:05:23.180335 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/286c74fe9c8d1acee3d9fc7ac874072f3ec7a351832f684c42db5d0a89qwhv4"]
Jan 23 17:05:23 crc kubenswrapper[4606]: E0123 17:05:23.180841 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6868bbb9-5be0-4c25-8da6-33ba8d06eb34" containerName="registry-server"
Jan 23 17:05:23 crc kubenswrapper[4606]: I0123 17:05:23.180856 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="6868bbb9-5be0-4c25-8da6-33ba8d06eb34" containerName="registry-server"
Jan 23 17:05:23 crc kubenswrapper[4606]: I0123 17:05:23.180969 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="6868bbb9-5be0-4c25-8da6-33ba8d06eb34" containerName="registry-server"
Jan 23 17:05:23 crc kubenswrapper[4606]: I0123 17:05:23.181756 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/286c74fe9c8d1acee3d9fc7ac874072f3ec7a351832f684c42db5d0a89qwhv4"
Jan 23 17:05:23 crc kubenswrapper[4606]: I0123 17:05:23.185787 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-6tfqg"
Jan 23 17:05:23 crc kubenswrapper[4606]: I0123 17:05:23.196910 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/286c74fe9c8d1acee3d9fc7ac874072f3ec7a351832f684c42db5d0a89qwhv4"]
Jan 23 17:05:23 crc kubenswrapper[4606]: I0123 17:05:23.296412 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a98a86a4-3b6c-4d97-813e-bedfb701042d-util\") pod \"286c74fe9c8d1acee3d9fc7ac874072f3ec7a351832f684c42db5d0a89qwhv4\" (UID: \"a98a86a4-3b6c-4d97-813e-bedfb701042d\") " pod="openstack-operators/286c74fe9c8d1acee3d9fc7ac874072f3ec7a351832f684c42db5d0a89qwhv4"
Jan 23 17:05:23 crc kubenswrapper[4606]: I0123 17:05:23.296664 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn95w\" (UniqueName: \"kubernetes.io/projected/a98a86a4-3b6c-4d97-813e-bedfb701042d-kube-api-access-dn95w\") pod \"286c74fe9c8d1acee3d9fc7ac874072f3ec7a351832f684c42db5d0a89qwhv4\" (UID: \"a98a86a4-3b6c-4d97-813e-bedfb701042d\") " pod="openstack-operators/286c74fe9c8d1acee3d9fc7ac874072f3ec7a351832f684c42db5d0a89qwhv4"
Jan 23 17:05:23 crc kubenswrapper[4606]: I0123 17:05:23.296703 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a98a86a4-3b6c-4d97-813e-bedfb701042d-bundle\") pod \"286c74fe9c8d1acee3d9fc7ac874072f3ec7a351832f684c42db5d0a89qwhv4\" (UID: \"a98a86a4-3b6c-4d97-813e-bedfb701042d\") " pod="openstack-operators/286c74fe9c8d1acee3d9fc7ac874072f3ec7a351832f684c42db5d0a89qwhv4"
Jan 23 17:05:23 crc kubenswrapper[4606]: I0123 17:05:23.398444 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a98a86a4-3b6c-4d97-813e-bedfb701042d-util\") pod \"286c74fe9c8d1acee3d9fc7ac874072f3ec7a351832f684c42db5d0a89qwhv4\" (UID: \"a98a86a4-3b6c-4d97-813e-bedfb701042d\") " pod="openstack-operators/286c74fe9c8d1acee3d9fc7ac874072f3ec7a351832f684c42db5d0a89qwhv4"
"operationExecutor.MountVolume started for volume \"kube-api-access-dn95w\" (UniqueName: \"kubernetes.io/projected/a98a86a4-3b6c-4d97-813e-bedfb701042d-kube-api-access-dn95w\") pod \"286c74fe9c8d1acee3d9fc7ac874072f3ec7a351832f684c42db5d0a89qwhv4\" (UID: \"a98a86a4-3b6c-4d97-813e-bedfb701042d\") " pod="openstack-operators/286c74fe9c8d1acee3d9fc7ac874072f3ec7a351832f684c42db5d0a89qwhv4" Jan 23 17:05:23 crc kubenswrapper[4606]: I0123 17:05:23.398912 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a98a86a4-3b6c-4d97-813e-bedfb701042d-bundle\") pod \"286c74fe9c8d1acee3d9fc7ac874072f3ec7a351832f684c42db5d0a89qwhv4\" (UID: \"a98a86a4-3b6c-4d97-813e-bedfb701042d\") " pod="openstack-operators/286c74fe9c8d1acee3d9fc7ac874072f3ec7a351832f684c42db5d0a89qwhv4" Jan 23 17:05:23 crc kubenswrapper[4606]: I0123 17:05:23.399273 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a98a86a4-3b6c-4d97-813e-bedfb701042d-util\") pod \"286c74fe9c8d1acee3d9fc7ac874072f3ec7a351832f684c42db5d0a89qwhv4\" (UID: \"a98a86a4-3b6c-4d97-813e-bedfb701042d\") " pod="openstack-operators/286c74fe9c8d1acee3d9fc7ac874072f3ec7a351832f684c42db5d0a89qwhv4" Jan 23 17:05:23 crc kubenswrapper[4606]: I0123 17:05:23.399580 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a98a86a4-3b6c-4d97-813e-bedfb701042d-bundle\") pod \"286c74fe9c8d1acee3d9fc7ac874072f3ec7a351832f684c42db5d0a89qwhv4\" (UID: \"a98a86a4-3b6c-4d97-813e-bedfb701042d\") " pod="openstack-operators/286c74fe9c8d1acee3d9fc7ac874072f3ec7a351832f684c42db5d0a89qwhv4" Jan 23 17:05:23 crc kubenswrapper[4606]: I0123 17:05:23.432270 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn95w\" (UniqueName: \"kubernetes.io/projected/a98a86a4-3b6c-4d97-813e-bedfb701042d-kube-api-access-dn95w\") pod \"286c74fe9c8d1acee3d9fc7ac874072f3ec7a351832f684c42db5d0a89qwhv4\" (UID: \"a98a86a4-3b6c-4d97-813e-bedfb701042d\") " pod="openstack-operators/286c74fe9c8d1acee3d9fc7ac874072f3ec7a351832f684c42db5d0a89qwhv4" Jan 23 17:05:23 crc kubenswrapper[4606]: I0123 17:05:23.498686 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/286c74fe9c8d1acee3d9fc7ac874072f3ec7a351832f684c42db5d0a89qwhv4" Jan 23 17:05:23 crc kubenswrapper[4606]: I0123 17:05:23.745298 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/286c74fe9c8d1acee3d9fc7ac874072f3ec7a351832f684c42db5d0a89qwhv4"] Jan 23 17:05:24 crc kubenswrapper[4606]: I0123 17:05:24.425022 4606 generic.go:334] "Generic (PLEG): container finished" podID="a98a86a4-3b6c-4d97-813e-bedfb701042d" containerID="c9468e6ea0019713281f6b707cb98dffe92d955155eb03579e5d6f56a94d6296" exitCode=0 Jan 23 17:05:24 crc kubenswrapper[4606]: I0123 17:05:24.425138 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/286c74fe9c8d1acee3d9fc7ac874072f3ec7a351832f684c42db5d0a89qwhv4" event={"ID":"a98a86a4-3b6c-4d97-813e-bedfb701042d","Type":"ContainerDied","Data":"c9468e6ea0019713281f6b707cb98dffe92d955155eb03579e5d6f56a94d6296"} Jan 23 17:05:24 crc kubenswrapper[4606]: I0123 17:05:24.425762 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/286c74fe9c8d1acee3d9fc7ac874072f3ec7a351832f684c42db5d0a89qwhv4" event={"ID":"a98a86a4-3b6c-4d97-813e-bedfb701042d","Type":"ContainerStarted","Data":"b5adeb4543e02df95a37e23f0bc6cd78adf29545ab51274489e5c2e83fab3f32"} Jan 23 17:05:25 crc kubenswrapper[4606]: I0123 17:05:25.433741 4606 generic.go:334] "Generic (PLEG): container finished" podID="a98a86a4-3b6c-4d97-813e-bedfb701042d" containerID="a5bafe1fae3f32817a44f1a728e2340615c22766813bc44e43343cb28b19dff4" exitCode=0 Jan 23 17:05:25 crc kubenswrapper[4606]: I0123 17:05:25.434203 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/286c74fe9c8d1acee3d9fc7ac874072f3ec7a351832f684c42db5d0a89qwhv4" event={"ID":"a98a86a4-3b6c-4d97-813e-bedfb701042d","Type":"ContainerDied","Data":"a5bafe1fae3f32817a44f1a728e2340615c22766813bc44e43343cb28b19dff4"} Jan 23 17:05:26 crc kubenswrapper[4606]: I0123 17:05:26.444448 4606 generic.go:334] "Generic (PLEG): container finished" podID="a98a86a4-3b6c-4d97-813e-bedfb701042d" containerID="392798514b25abe98f5c9c0a4551c975d2c2012804233345e1fc82ff30ab1217" exitCode=0 Jan 23 17:05:26 crc kubenswrapper[4606]: I0123 17:05:26.444541 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/286c74fe9c8d1acee3d9fc7ac874072f3ec7a351832f684c42db5d0a89qwhv4" event={"ID":"a98a86a4-3b6c-4d97-813e-bedfb701042d","Type":"ContainerDied","Data":"392798514b25abe98f5c9c0a4551c975d2c2012804233345e1fc82ff30ab1217"} Jan 23 17:05:27 crc kubenswrapper[4606]: I0123 17:05:27.790960 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/286c74fe9c8d1acee3d9fc7ac874072f3ec7a351832f684c42db5d0a89qwhv4" Jan 23 17:05:27 crc kubenswrapper[4606]: I0123 17:05:27.961336 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a98a86a4-3b6c-4d97-813e-bedfb701042d-util\") pod \"a98a86a4-3b6c-4d97-813e-bedfb701042d\" (UID: \"a98a86a4-3b6c-4d97-813e-bedfb701042d\") " Jan 23 17:05:27 crc kubenswrapper[4606]: I0123 17:05:27.961381 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a98a86a4-3b6c-4d97-813e-bedfb701042d-bundle\") pod \"a98a86a4-3b6c-4d97-813e-bedfb701042d\" (UID: \"a98a86a4-3b6c-4d97-813e-bedfb701042d\") " Jan 23 17:05:27 crc kubenswrapper[4606]: I0123 17:05:27.962120 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a98a86a4-3b6c-4d97-813e-bedfb701042d-bundle" (OuterVolumeSpecName: "bundle") pod "a98a86a4-3b6c-4d97-813e-bedfb701042d" (UID: "a98a86a4-3b6c-4d97-813e-bedfb701042d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:05:27 crc kubenswrapper[4606]: I0123 17:05:27.962244 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dn95w\" (UniqueName: \"kubernetes.io/projected/a98a86a4-3b6c-4d97-813e-bedfb701042d-kube-api-access-dn95w\") pod \"a98a86a4-3b6c-4d97-813e-bedfb701042d\" (UID: \"a98a86a4-3b6c-4d97-813e-bedfb701042d\") " Jan 23 17:05:27 crc kubenswrapper[4606]: I0123 17:05:27.963234 4606 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a98a86a4-3b6c-4d97-813e-bedfb701042d-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:05:27 crc kubenswrapper[4606]: I0123 17:05:27.969397 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a98a86a4-3b6c-4d97-813e-bedfb701042d-kube-api-access-dn95w" (OuterVolumeSpecName: "kube-api-access-dn95w") pod "a98a86a4-3b6c-4d97-813e-bedfb701042d" (UID: "a98a86a4-3b6c-4d97-813e-bedfb701042d"). InnerVolumeSpecName "kube-api-access-dn95w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:05:27 crc kubenswrapper[4606]: I0123 17:05:27.974667 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a98a86a4-3b6c-4d97-813e-bedfb701042d-util" (OuterVolumeSpecName: "util") pod "a98a86a4-3b6c-4d97-813e-bedfb701042d" (UID: "a98a86a4-3b6c-4d97-813e-bedfb701042d"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:05:28 crc kubenswrapper[4606]: I0123 17:05:28.065046 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dn95w\" (UniqueName: \"kubernetes.io/projected/a98a86a4-3b6c-4d97-813e-bedfb701042d-kube-api-access-dn95w\") on node \"crc\" DevicePath \"\"" Jan 23 17:05:28 crc kubenswrapper[4606]: I0123 17:05:28.065092 4606 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a98a86a4-3b6c-4d97-813e-bedfb701042d-util\") on node \"crc\" DevicePath \"\"" Jan 23 17:05:28 crc kubenswrapper[4606]: I0123 17:05:28.462967 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/286c74fe9c8d1acee3d9fc7ac874072f3ec7a351832f684c42db5d0a89qwhv4" event={"ID":"a98a86a4-3b6c-4d97-813e-bedfb701042d","Type":"ContainerDied","Data":"b5adeb4543e02df95a37e23f0bc6cd78adf29545ab51274489e5c2e83fab3f32"} Jan 23 17:05:28 crc kubenswrapper[4606]: I0123 17:05:28.463015 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5adeb4543e02df95a37e23f0bc6cd78adf29545ab51274489e5c2e83fab3f32" Jan 23 17:05:28 crc kubenswrapper[4606]: I0123 17:05:28.463059 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/286c74fe9c8d1acee3d9fc7ac874072f3ec7a351832f684c42db5d0a89qwhv4" Jan 23 17:05:28 crc kubenswrapper[4606]: I0123 17:05:28.525566 4606 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 23 17:05:30 crc kubenswrapper[4606]: I0123 17:05:30.234187 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-8f4d99f8c-6x2sb"] Jan 23 17:05:30 crc kubenswrapper[4606]: E0123 17:05:30.235366 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a98a86a4-3b6c-4d97-813e-bedfb701042d" containerName="util" Jan 23 17:05:30 crc kubenswrapper[4606]: I0123 17:05:30.235451 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="a98a86a4-3b6c-4d97-813e-bedfb701042d" containerName="util" Jan 23 17:05:30 crc kubenswrapper[4606]: E0123 17:05:30.235526 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a98a86a4-3b6c-4d97-813e-bedfb701042d" containerName="extract" Jan 23 17:05:30 crc kubenswrapper[4606]: I0123 17:05:30.235613 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="a98a86a4-3b6c-4d97-813e-bedfb701042d" containerName="extract" Jan 23 17:05:30 crc kubenswrapper[4606]: E0123 17:05:30.235699 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a98a86a4-3b6c-4d97-813e-bedfb701042d" containerName="pull" Jan 23 17:05:30 crc kubenswrapper[4606]: I0123 17:05:30.235864 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="a98a86a4-3b6c-4d97-813e-bedfb701042d" containerName="pull" Jan 23 17:05:30 crc kubenswrapper[4606]: I0123 17:05:30.236065 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="a98a86a4-3b6c-4d97-813e-bedfb701042d" containerName="extract" Jan 23 17:05:30 crc kubenswrapper[4606]: I0123 17:05:30.236678 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-8f4d99f8c-6x2sb" Jan 23 17:05:30 crc kubenswrapper[4606]: I0123 17:05:30.239467 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-q4j7f" Jan 23 17:05:30 crc kubenswrapper[4606]: I0123 17:05:30.299712 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-8f4d99f8c-6x2sb"] Jan 23 17:05:30 crc kubenswrapper[4606]: I0123 17:05:30.397130 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gsfw\" (UniqueName: \"kubernetes.io/projected/1f333c20-09e6-410a-88d1-b2e6bfc73768-kube-api-access-4gsfw\") pod \"openstack-operator-controller-init-8f4d99f8c-6x2sb\" (UID: \"1f333c20-09e6-410a-88d1-b2e6bfc73768\") " pod="openstack-operators/openstack-operator-controller-init-8f4d99f8c-6x2sb" Jan 23 17:05:30 crc kubenswrapper[4606]: I0123 17:05:30.498417 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gsfw\" (UniqueName: \"kubernetes.io/projected/1f333c20-09e6-410a-88d1-b2e6bfc73768-kube-api-access-4gsfw\") pod \"openstack-operator-controller-init-8f4d99f8c-6x2sb\" (UID: \"1f333c20-09e6-410a-88d1-b2e6bfc73768\") " pod="openstack-operators/openstack-operator-controller-init-8f4d99f8c-6x2sb" Jan 23 17:05:30 crc kubenswrapper[4606]: I0123 17:05:30.515493 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gsfw\" (UniqueName: \"kubernetes.io/projected/1f333c20-09e6-410a-88d1-b2e6bfc73768-kube-api-access-4gsfw\") pod \"openstack-operator-controller-init-8f4d99f8c-6x2sb\" (UID: \"1f333c20-09e6-410a-88d1-b2e6bfc73768\") " pod="openstack-operators/openstack-operator-controller-init-8f4d99f8c-6x2sb" Jan 23 17:05:30 crc kubenswrapper[4606]: I0123 17:05:30.553546 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-8f4d99f8c-6x2sb" Jan 23 17:05:30 crc kubenswrapper[4606]: I0123 17:05:30.995493 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-8f4d99f8c-6x2sb"] Jan 23 17:05:31 crc kubenswrapper[4606]: I0123 17:05:31.483057 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-8f4d99f8c-6x2sb" event={"ID":"1f333c20-09e6-410a-88d1-b2e6bfc73768","Type":"ContainerStarted","Data":"ebabc7a473d4fc2de5f5f0c0e7cce8543375b784e8679e35683b1c946e7f4429"} Jan 23 17:05:35 crc kubenswrapper[4606]: I0123 17:05:35.525288 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-8f4d99f8c-6x2sb" event={"ID":"1f333c20-09e6-410a-88d1-b2e6bfc73768","Type":"ContainerStarted","Data":"5c4b463f064388c72e876d748c97ee07566fee10b767c2a4200d372eec4af13a"} Jan 23 17:05:35 crc kubenswrapper[4606]: I0123 17:05:35.526664 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-8f4d99f8c-6x2sb" Jan 23 17:05:35 crc kubenswrapper[4606]: I0123 17:05:35.568882 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-8f4d99f8c-6x2sb" podStartSLOduration=1.489579262 podStartE2EDuration="5.568864173s" podCreationTimestamp="2026-01-23 17:05:30 +0000 UTC" firstStartedPulling="2026-01-23 17:05:31.0162412 +0000 UTC m=+765.016433606" lastFinishedPulling="2026-01-23 17:05:35.095526111 +0000 UTC m=+769.095718517" observedRunningTime="2026-01-23 17:05:35.563090297 +0000 UTC m=+769.563282703" watchObservedRunningTime="2026-01-23 17:05:35.568864173 +0000 UTC m=+769.569056579" Jan 23 17:05:40 crc kubenswrapper[4606]: I0123 17:05:40.556018 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-8f4d99f8c-6x2sb" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.293922 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-f6w2s"] Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.295294 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-f6w2s" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.297478 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-65vwn" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.306231 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-7c96f"] Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.307327 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-7c96f" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.309832 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-pnqkc" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.312360 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-f6w2s"] Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.325298 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-7c96f"] Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.330623 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-wbg8x"] Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.331438 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wbg8x" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.334132 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-5lnmm" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.361847 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-qxt8l"] Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.377763 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-qxt8l" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.386076 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-xwkxj" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.421879 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-wbg8x"] Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.421921 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-qxt8l"] Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.473443 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6hgqs"] Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.474251 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6hgqs" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.478285 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-gzdtf"] Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.479272 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gzdtf" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.481244 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s482\" (UniqueName: \"kubernetes.io/projected/c58688d9-32d0-4a57-a216-1110af3dd7dc-kube-api-access-8s482\") pod \"glance-operator-controller-manager-78fdd796fd-qxt8l\" (UID: \"c58688d9-32d0-4a57-a216-1110af3dd7dc\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-qxt8l" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.481330 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw965\" (UniqueName: \"kubernetes.io/projected/9ca388c9-7c19-4efb-93da-2d889c972fa7-kube-api-access-rw965\") pod \"designate-operator-controller-manager-b45d7bf98-wbg8x\" (UID: \"9ca388c9-7c19-4efb-93da-2d889c972fa7\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wbg8x" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.481356 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm4lq\" (UniqueName: \"kubernetes.io/projected/6ab09cf8-68a3-4fac-aa7b-824a2626b3db-kube-api-access-hm4lq\") pod \"cinder-operator-controller-manager-69cf5d4557-f6w2s\" (UID: \"6ab09cf8-68a3-4fac-aa7b-824a2626b3db\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-f6w2s" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.481377 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdc7d\" (UniqueName: \"kubernetes.io/projected/db44bfcf-35b0-4cd1-baef-f9e1879e863d-kube-api-access-gdc7d\") pod \"barbican-operator-controller-manager-7f86f8796f-7c96f\" (UID: \"db44bfcf-35b0-4cd1-baef-f9e1879e863d\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-7c96f" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.483869 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-cv87c" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.486865 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-6lkpf"] Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.487785 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-6lkpf" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.493324 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-j4bws" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.493653 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-wzgq4" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.513663 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6hgqs"] Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.538669 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-58749ffdfb-jmspw"] Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.539549 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jmspw" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.549842 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.550044 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-vjw5l" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.572246 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-gzdtf"] Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.583391 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhrsz\" (UniqueName: \"kubernetes.io/projected/9b69bbaa-8277-48b5-b0aa-f8d7b7fec647-kube-api-access-rhrsz\") pod \"heat-operator-controller-manager-594c8c9d5d-gzdtf\" (UID: \"9b69bbaa-8277-48b5-b0aa-f8d7b7fec647\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gzdtf" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.583501 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmd7r\" (UniqueName: \"kubernetes.io/projected/7671c11a-cd18-4409-addc-caeb99b22cae-kube-api-access-kmd7r\") pod \"ironic-operator-controller-manager-598f7747c9-6lkpf\" (UID: \"7671c11a-cd18-4409-addc-caeb99b22cae\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-6lkpf" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.583564 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rw965\" (UniqueName: \"kubernetes.io/projected/9ca388c9-7c19-4efb-93da-2d889c972fa7-kube-api-access-rw965\") pod \"designate-operator-controller-manager-b45d7bf98-wbg8x\" (UID: \"9ca388c9-7c19-4efb-93da-2d889c972fa7\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wbg8x" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.583622 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm4lq\" (UniqueName: \"kubernetes.io/projected/6ab09cf8-68a3-4fac-aa7b-824a2626b3db-kube-api-access-hm4lq\") pod \"cinder-operator-controller-manager-69cf5d4557-f6w2s\" (UID: \"6ab09cf8-68a3-4fac-aa7b-824a2626b3db\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-f6w2s" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.583656 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdc7d\" (UniqueName: \"kubernetes.io/projected/db44bfcf-35b0-4cd1-baef-f9e1879e863d-kube-api-access-gdc7d\") pod \"barbican-operator-controller-manager-7f86f8796f-7c96f\" (UID: \"db44bfcf-35b0-4cd1-baef-f9e1879e863d\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-7c96f" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.583701 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82j85\" (UniqueName: \"kubernetes.io/projected/7e33ab4c-7c48-478a-ae66-119b3ba39b19-kube-api-access-82j85\") pod \"horizon-operator-controller-manager-77d5c5b54f-6hgqs\" (UID: \"7e33ab4c-7c48-478a-ae66-119b3ba39b19\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6hgqs" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 
17:05:58.583730 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s482\" (UniqueName: \"kubernetes.io/projected/c58688d9-32d0-4a57-a216-1110af3dd7dc-kube-api-access-8s482\") pod \"glance-operator-controller-manager-78fdd796fd-qxt8l\" (UID: \"c58688d9-32d0-4a57-a216-1110af3dd7dc\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-qxt8l" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.591756 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-58749ffdfb-jmspw"] Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.624113 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-lc2n5"] Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.624914 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lc2n5" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.627411 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdc7d\" (UniqueName: \"kubernetes.io/projected/db44bfcf-35b0-4cd1-baef-f9e1879e863d-kube-api-access-gdc7d\") pod \"barbican-operator-controller-manager-7f86f8796f-7c96f\" (UID: \"db44bfcf-35b0-4cd1-baef-f9e1879e863d\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-7c96f" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.634309 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-6lkpf"] Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.634405 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-lc2n5"] Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.634853 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-7c96f" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.636746 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-zhmfb" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.668079 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-qn8hz"] Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.669042 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qn8hz" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.682435 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw965\" (UniqueName: \"kubernetes.io/projected/9ca388c9-7c19-4efb-93da-2d889c972fa7-kube-api-access-rw965\") pod \"designate-operator-controller-manager-b45d7bf98-wbg8x\" (UID: \"9ca388c9-7c19-4efb-93da-2d889c972fa7\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wbg8x" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.684225 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-lz99v"] Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.684462 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s482\" (UniqueName: \"kubernetes.io/projected/c58688d9-32d0-4a57-a216-1110af3dd7dc-kube-api-access-8s482\") pod \"glance-operator-controller-manager-78fdd796fd-qxt8l\" (UID: \"c58688d9-32d0-4a57-a216-1110af3dd7dc\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-qxt8l" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.685133 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhrsz\" (UniqueName: \"kubernetes.io/projected/9b69bbaa-8277-48b5-b0aa-f8d7b7fec647-kube-api-access-rhrsz\") pod \"heat-operator-controller-manager-594c8c9d5d-gzdtf\" (UID: \"9b69bbaa-8277-48b5-b0aa-f8d7b7fec647\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gzdtf" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.685180 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/194942ca-b627-42a1-9491-9e00b44d51d7-cert\") pod \"infra-operator-controller-manager-58749ffdfb-jmspw\" (UID: \"194942ca-b627-42a1-9491-9e00b44d51d7\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jmspw" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.685231 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmd7r\" (UniqueName: \"kubernetes.io/projected/7671c11a-cd18-4409-addc-caeb99b22cae-kube-api-access-kmd7r\") pod \"ironic-operator-controller-manager-598f7747c9-6lkpf\" (UID: \"7671c11a-cd18-4409-addc-caeb99b22cae\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-6lkpf" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.685289 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fp5t\" (UniqueName: \"kubernetes.io/projected/194942ca-b627-42a1-9491-9e00b44d51d7-kube-api-access-7fp5t\") pod \"infra-operator-controller-manager-58749ffdfb-jmspw\" (UID: \"194942ca-b627-42a1-9491-9e00b44d51d7\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jmspw" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.685334 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82j85\" (UniqueName: \"kubernetes.io/projected/7e33ab4c-7c48-478a-ae66-119b3ba39b19-kube-api-access-82j85\") pod \"horizon-operator-controller-manager-77d5c5b54f-6hgqs\" (UID: \"7e33ab4c-7c48-478a-ae66-119b3ba39b19\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6hgqs" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 
17:05:58.685646 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-lz99v" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.686186 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-qt8n2" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.690870 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-xzzh6" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.725400 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-qxt8l" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.731112 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm4lq\" (UniqueName: \"kubernetes.io/projected/6ab09cf8-68a3-4fac-aa7b-824a2626b3db-kube-api-access-hm4lq\") pod \"cinder-operator-controller-manager-69cf5d4557-f6w2s\" (UID: \"6ab09cf8-68a3-4fac-aa7b-824a2626b3db\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-f6w2s" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.768340 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-qn8hz"] Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.770604 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhrsz\" (UniqueName: \"kubernetes.io/projected/9b69bbaa-8277-48b5-b0aa-f8d7b7fec647-kube-api-access-rhrsz\") pod \"heat-operator-controller-manager-594c8c9d5d-gzdtf\" (UID: \"9b69bbaa-8277-48b5-b0aa-f8d7b7fec647\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gzdtf" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.771367 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82j85\" (UniqueName: \"kubernetes.io/projected/7e33ab4c-7c48-478a-ae66-119b3ba39b19-kube-api-access-82j85\") pod \"horizon-operator-controller-manager-77d5c5b54f-6hgqs\" (UID: \"7e33ab4c-7c48-478a-ae66-119b3ba39b19\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6hgqs" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.772152 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmd7r\" (UniqueName: \"kubernetes.io/projected/7671c11a-cd18-4409-addc-caeb99b22cae-kube-api-access-kmd7r\") pod \"ironic-operator-controller-manager-598f7747c9-6lkpf\" (UID: \"7671c11a-cd18-4409-addc-caeb99b22cae\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-6lkpf" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.828975 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjx4c\" (UniqueName: \"kubernetes.io/projected/5cfbb45f-ca05-4e78-8392-b905640374fd-kube-api-access-gjx4c\") pod \"keystone-operator-controller-manager-b8b6d4659-lc2n5\" (UID: \"5cfbb45f-ca05-4e78-8392-b905640374fd\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lc2n5" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.829035 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckrlb\" (UniqueName: 
\"kubernetes.io/projected/29bd6d8a-dcc4-46b9-b735-d38e9d7a75a8-kube-api-access-ckrlb\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-lz99v\" (UID: \"29bd6d8a-dcc4-46b9-b735-d38e9d7a75a8\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-lz99v" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.829114 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fp5t\" (UniqueName: \"kubernetes.io/projected/194942ca-b627-42a1-9491-9e00b44d51d7-kube-api-access-7fp5t\") pod \"infra-operator-controller-manager-58749ffdfb-jmspw\" (UID: \"194942ca-b627-42a1-9491-9e00b44d51d7\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jmspw" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.829156 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7t7m\" (UniqueName: \"kubernetes.io/projected/7215e9f7-67ae-40ad-befc-e411c5acb586-kube-api-access-m7t7m\") pod \"manila-operator-controller-manager-78c6999f6f-qn8hz\" (UID: \"7215e9f7-67ae-40ad-befc-e411c5acb586\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qn8hz" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.829280 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/194942ca-b627-42a1-9491-9e00b44d51d7-cert\") pod \"infra-operator-controller-manager-58749ffdfb-jmspw\" (UID: \"194942ca-b627-42a1-9491-9e00b44d51d7\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jmspw" Jan 23 17:05:58 crc kubenswrapper[4606]: E0123 17:05:58.829497 4606 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 17:05:58 crc kubenswrapper[4606]: E0123 17:05:58.829559 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/194942ca-b627-42a1-9491-9e00b44d51d7-cert podName:194942ca-b627-42a1-9491-9e00b44d51d7 nodeName:}" failed. No retries permitted until 2026-01-23 17:05:59.329539137 +0000 UTC m=+793.329731553 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/194942ca-b627-42a1-9491-9e00b44d51d7-cert") pod "infra-operator-controller-manager-58749ffdfb-jmspw" (UID: "194942ca-b627-42a1-9491-9e00b44d51d7") : secret "infra-operator-webhook-server-cert" not found Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.830358 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6hgqs" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.835808 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gzdtf" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.853284 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-sf5dg"] Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.866574 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sf5dg" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.857923 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-6lkpf" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.882903 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-bq96s" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.887579 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-lz99v"] Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.900843 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fp5t\" (UniqueName: \"kubernetes.io/projected/194942ca-b627-42a1-9491-9e00b44d51d7-kube-api-access-7fp5t\") pod \"infra-operator-controller-manager-58749ffdfb-jmspw\" (UID: \"194942ca-b627-42a1-9491-9e00b44d51d7\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jmspw" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.900900 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-dv22m"] Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.901656 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-dv22m" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.902461 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-sf5dg"] Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.916366 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-f6w2s" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.922883 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-rptcj" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.943122 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjx4c\" (UniqueName: \"kubernetes.io/projected/5cfbb45f-ca05-4e78-8392-b905640374fd-kube-api-access-gjx4c\") pod \"keystone-operator-controller-manager-b8b6d4659-lc2n5\" (UID: \"5cfbb45f-ca05-4e78-8392-b905640374fd\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lc2n5" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.943157 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckrlb\" (UniqueName: \"kubernetes.io/projected/29bd6d8a-dcc4-46b9-b735-d38e9d7a75a8-kube-api-access-ckrlb\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-lz99v\" (UID: \"29bd6d8a-dcc4-46b9-b735-d38e9d7a75a8\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-lz99v" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.943193 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7t7m\" (UniqueName: \"kubernetes.io/projected/7215e9f7-67ae-40ad-befc-e411c5acb586-kube-api-access-m7t7m\") pod \"manila-operator-controller-manager-78c6999f6f-qn8hz\" (UID: \"7215e9f7-67ae-40ad-befc-e411c5acb586\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qn8hz" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.957982 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wbg8x" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.973628 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-dv22m"] Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.986866 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-kngd9"] Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.989948 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-kngd9" Jan 23 17:05:58 crc kubenswrapper[4606]: I0123 17:05:58.993194 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-xr9l2" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.001648 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7t7m\" (UniqueName: \"kubernetes.io/projected/7215e9f7-67ae-40ad-befc-e411c5acb586-kube-api-access-m7t7m\") pod \"manila-operator-controller-manager-78c6999f6f-qn8hz\" (UID: \"7215e9f7-67ae-40ad-befc-e411c5acb586\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qn8hz" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.040416 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx"] Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.041254 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.042314 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjx4c\" (UniqueName: \"kubernetes.io/projected/5cfbb45f-ca05-4e78-8392-b905640374fd-kube-api-access-gjx4c\") pod \"keystone-operator-controller-manager-b8b6d4659-lc2n5\" (UID: \"5cfbb45f-ca05-4e78-8392-b905640374fd\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lc2n5" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.045689 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmbt6\" (UniqueName: \"kubernetes.io/projected/23ed80ea-dfab-4552-86d1-972563d8d4b6-kube-api-access-jmbt6\") pod \"neutron-operator-controller-manager-78d58447c5-sf5dg\" (UID: \"23ed80ea-dfab-4552-86d1-972563d8d4b6\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sf5dg" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.045721 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbcg5\" (UniqueName: \"kubernetes.io/projected/9ae13310-86a4-4a3f-9967-cb88d1bfa536-kube-api-access-qbcg5\") pod \"nova-operator-controller-manager-6b8bc8d87d-dv22m\" (UID: \"9ae13310-86a4-4a3f-9967-cb88d1bfa536\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-dv22m" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.045781 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wjtf\" (UniqueName: \"kubernetes.io/projected/62e61905-d16f-41cc-b972-f788ced5c022-kube-api-access-9wjtf\") pod 
\"openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx\" (UID: \"62e61905-d16f-41cc-b972-f788ced5c022\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.045805 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/62e61905-d16f-41cc-b972-f788ced5c022-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx\" (UID: \"62e61905-d16f-41cc-b972-f788ced5c022\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.045841 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg8rw\" (UniqueName: \"kubernetes.io/projected/3112e8c9-b0cb-4d5e-bba9-1caf429b45dc-kube-api-access-zg8rw\") pod \"octavia-operator-controller-manager-7bd9774b6-kngd9\" (UID: \"3112e8c9-b0cb-4d5e-bba9-1caf429b45dc\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-kngd9" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.056472 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.056728 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-sknqq" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.061883 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-kngd9"] Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.081210 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckrlb\" (UniqueName: \"kubernetes.io/projected/29bd6d8a-dcc4-46b9-b735-d38e9d7a75a8-kube-api-access-ckrlb\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-lz99v\" (UID: \"29bd6d8a-dcc4-46b9-b735-d38e9d7a75a8\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-lz99v" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.084672 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx"] Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.095370 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-ftzlw"] Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.096033 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ftzlw" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.104117 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-njw5g" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.118650 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-ftzlw"] Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.129639 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-5bmr6"] Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.130705 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-5bmr6" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.135001 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-fgd6m" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.135985 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-ttmg4"] Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.136756 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-ttmg4" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.141189 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-d265k" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.146807 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wjtf\" (UniqueName: \"kubernetes.io/projected/62e61905-d16f-41cc-b972-f788ced5c022-kube-api-access-9wjtf\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx\" (UID: \"62e61905-d16f-41cc-b972-f788ced5c022\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.146866 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjlrl\" (UniqueName: \"kubernetes.io/projected/0ba239ef-03fb-4744-95cb-5aea8d9b0663-kube-api-access-wjlrl\") pod \"ovn-operator-controller-manager-55db956ddc-ftzlw\" (UID: \"0ba239ef-03fb-4744-95cb-5aea8d9b0663\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ftzlw" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.146901 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/62e61905-d16f-41cc-b972-f788ced5c022-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx\" (UID: \"62e61905-d16f-41cc-b972-f788ced5c022\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.146954 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg8rw\" (UniqueName: \"kubernetes.io/projected/3112e8c9-b0cb-4d5e-bba9-1caf429b45dc-kube-api-access-zg8rw\") pod \"octavia-operator-controller-manager-7bd9774b6-kngd9\" (UID: \"3112e8c9-b0cb-4d5e-bba9-1caf429b45dc\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-kngd9" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.147017 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmbt6\" (UniqueName: \"kubernetes.io/projected/23ed80ea-dfab-4552-86d1-972563d8d4b6-kube-api-access-jmbt6\") pod \"neutron-operator-controller-manager-78d58447c5-sf5dg\" (UID: \"23ed80ea-dfab-4552-86d1-972563d8d4b6\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sf5dg" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.147043 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbcg5\" (UniqueName: \"kubernetes.io/projected/9ae13310-86a4-4a3f-9967-cb88d1bfa536-kube-api-access-qbcg5\") pod 
\"nova-operator-controller-manager-6b8bc8d87d-dv22m\" (UID: \"9ae13310-86a4-4a3f-9967-cb88d1bfa536\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-dv22m" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.147068 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hqs5\" (UniqueName: \"kubernetes.io/projected/d040f5d8-e2a8-4171-9f70-797e87f8af96-kube-api-access-8hqs5\") pod \"swift-operator-controller-manager-547cbdb99f-ttmg4\" (UID: \"d040f5d8-e2a8-4171-9f70-797e87f8af96\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-ttmg4" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.147093 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8z4g\" (UniqueName: \"kubernetes.io/projected/566e35f9-5c83-4267-bcf6-898458e0a722-kube-api-access-g8z4g\") pod \"placement-operator-controller-manager-5d646b7d76-5bmr6\" (UID: \"566e35f9-5c83-4267-bcf6-898458e0a722\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-5bmr6" Jan 23 17:05:59 crc kubenswrapper[4606]: E0123 17:05:59.147235 4606 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 17:05:59 crc kubenswrapper[4606]: E0123 17:05:59.147287 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62e61905-d16f-41cc-b972-f788ced5c022-cert podName:62e61905-d16f-41cc-b972-f788ced5c022 nodeName:}" failed. No retries permitted until 2026-01-23 17:05:59.647268523 +0000 UTC m=+793.647460929 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/62e61905-d16f-41cc-b972-f788ced5c022-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx" (UID: "62e61905-d16f-41cc-b972-f788ced5c022") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.151688 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-5bmr6"] Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.157287 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lc2n5" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.170073 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-ttmg4"] Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.176710 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qn8hz" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.178879 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wjtf\" (UniqueName: \"kubernetes.io/projected/62e61905-d16f-41cc-b972-f788ced5c022-kube-api-access-9wjtf\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx\" (UID: \"62e61905-d16f-41cc-b972-f788ced5c022\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.210377 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-lz99v" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.220187 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbcg5\" (UniqueName: \"kubernetes.io/projected/9ae13310-86a4-4a3f-9967-cb88d1bfa536-kube-api-access-qbcg5\") pod \"nova-operator-controller-manager-6b8bc8d87d-dv22m\" (UID: \"9ae13310-86a4-4a3f-9967-cb88d1bfa536\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-dv22m" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.223246 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmbt6\" (UniqueName: \"kubernetes.io/projected/23ed80ea-dfab-4552-86d1-972563d8d4b6-kube-api-access-jmbt6\") pod \"neutron-operator-controller-manager-78d58447c5-sf5dg\" (UID: \"23ed80ea-dfab-4552-86d1-972563d8d4b6\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sf5dg" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.226498 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg8rw\" (UniqueName: \"kubernetes.io/projected/3112e8c9-b0cb-4d5e-bba9-1caf429b45dc-kube-api-access-zg8rw\") pod \"octavia-operator-controller-manager-7bd9774b6-kngd9\" (UID: \"3112e8c9-b0cb-4d5e-bba9-1caf429b45dc\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-kngd9" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.233038 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-h97mh"] Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.235505 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-h97mh" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.252025 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-p7nqh" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.255222 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hqs5\" (UniqueName: \"kubernetes.io/projected/d040f5d8-e2a8-4171-9f70-797e87f8af96-kube-api-access-8hqs5\") pod \"swift-operator-controller-manager-547cbdb99f-ttmg4\" (UID: \"d040f5d8-e2a8-4171-9f70-797e87f8af96\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-ttmg4" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.255253 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8z4g\" (UniqueName: \"kubernetes.io/projected/566e35f9-5c83-4267-bcf6-898458e0a722-kube-api-access-g8z4g\") pod \"placement-operator-controller-manager-5d646b7d76-5bmr6\" (UID: \"566e35f9-5c83-4267-bcf6-898458e0a722\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-5bmr6" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.255319 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjlrl\" (UniqueName: \"kubernetes.io/projected/0ba239ef-03fb-4744-95cb-5aea8d9b0663-kube-api-access-wjlrl\") pod \"ovn-operator-controller-manager-55db956ddc-ftzlw\" (UID: \"0ba239ef-03fb-4744-95cb-5aea8d9b0663\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ftzlw" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.256565 4606 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-dv22m" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.266389 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-h97mh"] Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.278823 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-2z4jl"] Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.280088 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2z4jl" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.287285 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjlrl\" (UniqueName: \"kubernetes.io/projected/0ba239ef-03fb-4744-95cb-5aea8d9b0663-kube-api-access-wjlrl\") pod \"ovn-operator-controller-manager-55db956ddc-ftzlw\" (UID: \"0ba239ef-03fb-4744-95cb-5aea8d9b0663\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ftzlw" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.304177 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-f68z4" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.310153 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hqs5\" (UniqueName: \"kubernetes.io/projected/d040f5d8-e2a8-4171-9f70-797e87f8af96-kube-api-access-8hqs5\") pod \"swift-operator-controller-manager-547cbdb99f-ttmg4\" (UID: \"d040f5d8-e2a8-4171-9f70-797e87f8af96\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-ttmg4" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.317021 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6d9458688d-dnlq6"] Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.321662 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6d9458688d-dnlq6"] Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.330989 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-dnlq6" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.342208 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-2m76w" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.356535 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/194942ca-b627-42a1-9491-9e00b44d51d7-cert\") pod \"infra-operator-controller-manager-58749ffdfb-jmspw\" (UID: \"194942ca-b627-42a1-9491-9e00b44d51d7\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jmspw" Jan 23 17:05:59 crc kubenswrapper[4606]: E0123 17:05:59.359756 4606 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 17:05:59 crc kubenswrapper[4606]: E0123 17:05:59.359833 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/194942ca-b627-42a1-9491-9e00b44d51d7-cert podName:194942ca-b627-42a1-9491-9e00b44d51d7 nodeName:}" failed. 
No retries permitted until 2026-01-23 17:06:00.359806652 +0000 UTC m=+794.359999058 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/194942ca-b627-42a1-9491-9e00b44d51d7-cert") pod "infra-operator-controller-manager-58749ffdfb-jmspw" (UID: "194942ca-b627-42a1-9491-9e00b44d51d7") : secret "infra-operator-webhook-server-cert" not found Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.361229 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8z4g\" (UniqueName: \"kubernetes.io/projected/566e35f9-5c83-4267-bcf6-898458e0a722-kube-api-access-g8z4g\") pod \"placement-operator-controller-manager-5d646b7d76-5bmr6\" (UID: \"566e35f9-5c83-4267-bcf6-898458e0a722\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-5bmr6" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.368442 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkrp8\" (UniqueName: \"kubernetes.io/projected/cb8c2204-661e-4daf-b748-c329e81e04a8-kube-api-access-rkrp8\") pod \"telemetry-operator-controller-manager-85cd9769bb-h97mh\" (UID: \"cb8c2204-661e-4daf-b748-c329e81e04a8\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-h97mh" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.390665 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-2z4jl"] Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.423038 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-kngd9" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.457569 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ftzlw" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.469504 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz2ch\" (UniqueName: \"kubernetes.io/projected/14094ea9-8be6-405b-a990-9bdbda2a50cb-kube-api-access-lz2ch\") pod \"test-operator-controller-manager-69797bbcbd-2z4jl\" (UID: \"14094ea9-8be6-405b-a990-9bdbda2a50cb\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2z4jl" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.469547 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44xtf\" (UniqueName: \"kubernetes.io/projected/ae465942-87ad-4a76-8ead-5367413aa9a1-kube-api-access-44xtf\") pod \"watcher-operator-controller-manager-6d9458688d-dnlq6\" (UID: \"ae465942-87ad-4a76-8ead-5367413aa9a1\") " pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-dnlq6" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.469655 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkrp8\" (UniqueName: \"kubernetes.io/projected/cb8c2204-661e-4daf-b748-c329e81e04a8-kube-api-access-rkrp8\") pod \"telemetry-operator-controller-manager-85cd9769bb-h97mh\" (UID: \"cb8c2204-661e-4daf-b748-c329e81e04a8\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-h97mh" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.481107 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-5bmr6" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.501572 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkrp8\" (UniqueName: \"kubernetes.io/projected/cb8c2204-661e-4daf-b748-c329e81e04a8-kube-api-access-rkrp8\") pod \"telemetry-operator-controller-manager-85cd9769bb-h97mh\" (UID: \"cb8c2204-661e-4daf-b748-c329e81e04a8\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-h97mh" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.533952 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sf5dg" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.571983 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-ttmg4" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.577233 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz2ch\" (UniqueName: \"kubernetes.io/projected/14094ea9-8be6-405b-a990-9bdbda2a50cb-kube-api-access-lz2ch\") pod \"test-operator-controller-manager-69797bbcbd-2z4jl\" (UID: \"14094ea9-8be6-405b-a990-9bdbda2a50cb\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2z4jl" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.584930 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44xtf\" (UniqueName: \"kubernetes.io/projected/ae465942-87ad-4a76-8ead-5367413aa9a1-kube-api-access-44xtf\") pod \"watcher-operator-controller-manager-6d9458688d-dnlq6\" (UID: \"ae465942-87ad-4a76-8ead-5367413aa9a1\") " pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-dnlq6" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.602488 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-86756cb559-lfhdj"] Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.605574 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-86756cb559-lfhdj" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.622555 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-h5glf" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.624380 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.624955 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.663275 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-h97mh" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.667081 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-86756cb559-lfhdj"] Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.702019 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tbffj"] Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.676197 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44xtf\" (UniqueName: \"kubernetes.io/projected/ae465942-87ad-4a76-8ead-5367413aa9a1-kube-api-access-44xtf\") pod \"watcher-operator-controller-manager-6d9458688d-dnlq6\" (UID: \"ae465942-87ad-4a76-8ead-5367413aa9a1\") " pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-dnlq6" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.686809 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/62e61905-d16f-41cc-b972-f788ced5c022-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx\" (UID: \"62e61905-d16f-41cc-b972-f788ced5c022\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx" Jan 23 17:05:59 crc kubenswrapper[4606]: E0123 17:05:59.687732 4606 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 17:05:59 crc kubenswrapper[4606]: E0123 17:05:59.702301 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62e61905-d16f-41cc-b972-f788ced5c022-cert podName:62e61905-d16f-41cc-b972-f788ced5c022 nodeName:}" failed. No retries permitted until 2026-01-23 17:06:00.702269261 +0000 UTC m=+794.702461667 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/62e61905-d16f-41cc-b972-f788ced5c022-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx" (UID: "62e61905-d16f-41cc-b972-f788ced5c022") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.702522 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-dnlq6" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.677886 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz2ch\" (UniqueName: \"kubernetes.io/projected/14094ea9-8be6-405b-a990-9bdbda2a50cb-kube-api-access-lz2ch\") pod \"test-operator-controller-manager-69797bbcbd-2z4jl\" (UID: \"14094ea9-8be6-405b-a990-9bdbda2a50cb\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2z4jl" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.708760 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tbffj"] Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.708890 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tbffj" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.718452 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-kfww6" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.804253 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv6d8\" (UniqueName: \"kubernetes.io/projected/4d18bd93-7560-4405-9738-38876237699e-kube-api-access-nv6d8\") pod \"openstack-operator-controller-manager-86756cb559-lfhdj\" (UID: \"4d18bd93-7560-4405-9738-38876237699e\") " pod="openstack-operators/openstack-operator-controller-manager-86756cb559-lfhdj" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.804323 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4d18bd93-7560-4405-9738-38876237699e-metrics-certs\") pod \"openstack-operator-controller-manager-86756cb559-lfhdj\" (UID: \"4d18bd93-7560-4405-9738-38876237699e\") " pod="openstack-operators/openstack-operator-controller-manager-86756cb559-lfhdj" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.804342 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dx2h\" (UniqueName: \"kubernetes.io/projected/f8394efb-8969-47c9-98ec-b9845c0c8583-kube-api-access-9dx2h\") pod \"rabbitmq-cluster-operator-manager-668c99d594-tbffj\" (UID: \"f8394efb-8969-47c9-98ec-b9845c0c8583\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tbffj" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.804359 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4d18bd93-7560-4405-9738-38876237699e-webhook-certs\") pod \"openstack-operator-controller-manager-86756cb559-lfhdj\" (UID: \"4d18bd93-7560-4405-9738-38876237699e\") " pod="openstack-operators/openstack-operator-controller-manager-86756cb559-lfhdj" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.873521 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-7c96f"] Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.906853 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4d18bd93-7560-4405-9738-38876237699e-metrics-certs\") pod \"openstack-operator-controller-manager-86756cb559-lfhdj\" (UID: \"4d18bd93-7560-4405-9738-38876237699e\") " pod="openstack-operators/openstack-operator-controller-manager-86756cb559-lfhdj" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.906892 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dx2h\" (UniqueName: \"kubernetes.io/projected/f8394efb-8969-47c9-98ec-b9845c0c8583-kube-api-access-9dx2h\") pod \"rabbitmq-cluster-operator-manager-668c99d594-tbffj\" (UID: \"f8394efb-8969-47c9-98ec-b9845c0c8583\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tbffj" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.906945 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/4d18bd93-7560-4405-9738-38876237699e-webhook-certs\") pod \"openstack-operator-controller-manager-86756cb559-lfhdj\" (UID: \"4d18bd93-7560-4405-9738-38876237699e\") " pod="openstack-operators/openstack-operator-controller-manager-86756cb559-lfhdj" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.907047 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv6d8\" (UniqueName: \"kubernetes.io/projected/4d18bd93-7560-4405-9738-38876237699e-kube-api-access-nv6d8\") pod \"openstack-operator-controller-manager-86756cb559-lfhdj\" (UID: \"4d18bd93-7560-4405-9738-38876237699e\") " pod="openstack-operators/openstack-operator-controller-manager-86756cb559-lfhdj" Jan 23 17:05:59 crc kubenswrapper[4606]: E0123 17:05:59.907456 4606 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 17:05:59 crc kubenswrapper[4606]: E0123 17:05:59.907546 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d18bd93-7560-4405-9738-38876237699e-webhook-certs podName:4d18bd93-7560-4405-9738-38876237699e nodeName:}" failed. No retries permitted until 2026-01-23 17:06:00.407527037 +0000 UTC m=+794.407719443 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4d18bd93-7560-4405-9738-38876237699e-webhook-certs") pod "openstack-operator-controller-manager-86756cb559-lfhdj" (UID: "4d18bd93-7560-4405-9738-38876237699e") : secret "webhook-server-cert" not found Jan 23 17:05:59 crc kubenswrapper[4606]: E0123 17:05:59.907753 4606 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 17:05:59 crc kubenswrapper[4606]: E0123 17:05:59.909417 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d18bd93-7560-4405-9738-38876237699e-metrics-certs podName:4d18bd93-7560-4405-9738-38876237699e nodeName:}" failed. No retries permitted until 2026-01-23 17:06:00.407827364 +0000 UTC m=+794.408019770 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4d18bd93-7560-4405-9738-38876237699e-metrics-certs") pod "openstack-operator-controller-manager-86756cb559-lfhdj" (UID: "4d18bd93-7560-4405-9738-38876237699e") : secret "metrics-server-cert" not found Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.928300 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv6d8\" (UniqueName: \"kubernetes.io/projected/4d18bd93-7560-4405-9738-38876237699e-kube-api-access-nv6d8\") pod \"openstack-operator-controller-manager-86756cb559-lfhdj\" (UID: \"4d18bd93-7560-4405-9738-38876237699e\") " pod="openstack-operators/openstack-operator-controller-manager-86756cb559-lfhdj" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.929387 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dx2h\" (UniqueName: \"kubernetes.io/projected/f8394efb-8969-47c9-98ec-b9845c0c8583-kube-api-access-9dx2h\") pod \"rabbitmq-cluster-operator-manager-668c99d594-tbffj\" (UID: \"f8394efb-8969-47c9-98ec-b9845c0c8583\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tbffj" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.934748 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.934797 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:05:59 crc kubenswrapper[4606]: I0123 17:05:59.997922 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2z4jl" Jan 23 17:06:00 crc kubenswrapper[4606]: I0123 17:06:00.131140 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-f6w2s"] Jan 23 17:06:00 crc kubenswrapper[4606]: I0123 17:06:00.156703 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-qxt8l"] Jan 23 17:06:00 crc kubenswrapper[4606]: I0123 17:06:00.170920 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tbffj" Jan 23 17:06:00 crc kubenswrapper[4606]: I0123 17:06:00.183428 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-wbg8x"] Jan 23 17:06:00 crc kubenswrapper[4606]: W0123 17:06:00.204360 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc58688d9_32d0_4a57_a216_1110af3dd7dc.slice/crio-3fd4dc0f8f1cd8d2ada611aa7c56c608836c4b309fac46f33a834569c78ba73d WatchSource:0}: Error finding container 3fd4dc0f8f1cd8d2ada611aa7c56c608836c4b309fac46f33a834569c78ba73d: Status 404 returned error can't find the container with id 3fd4dc0f8f1cd8d2ada611aa7c56c608836c4b309fac46f33a834569c78ba73d Jan 23 17:06:00 crc kubenswrapper[4606]: W0123 17:06:00.292660 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b69bbaa_8277_48b5_b0aa_f8d7b7fec647.slice/crio-a82b3f9a917605a9408c49fa9ae23aee1951bc2c54f1ef61a3b1013689469b0d WatchSource:0}: Error finding container a82b3f9a917605a9408c49fa9ae23aee1951bc2c54f1ef61a3b1013689469b0d: Status 404 returned error can't find the container with id a82b3f9a917605a9408c49fa9ae23aee1951bc2c54f1ef61a3b1013689469b0d Jan 23 17:06:00 crc kubenswrapper[4606]: I0123 17:06:00.294282 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-6lkpf"] Jan 23 17:06:00 crc kubenswrapper[4606]: I0123 17:06:00.308139 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-gzdtf"] Jan 23 17:06:00 crc kubenswrapper[4606]: I0123 17:06:00.426203 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4d18bd93-7560-4405-9738-38876237699e-metrics-certs\") pod \"openstack-operator-controller-manager-86756cb559-lfhdj\" (UID: \"4d18bd93-7560-4405-9738-38876237699e\") " pod="openstack-operators/openstack-operator-controller-manager-86756cb559-lfhdj" Jan 23 17:06:00 crc kubenswrapper[4606]: I0123 17:06:00.426284 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4d18bd93-7560-4405-9738-38876237699e-webhook-certs\") pod \"openstack-operator-controller-manager-86756cb559-lfhdj\" (UID: \"4d18bd93-7560-4405-9738-38876237699e\") " pod="openstack-operators/openstack-operator-controller-manager-86756cb559-lfhdj" Jan 23 17:06:00 crc kubenswrapper[4606]: I0123 17:06:00.426380 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/194942ca-b627-42a1-9491-9e00b44d51d7-cert\") pod \"infra-operator-controller-manager-58749ffdfb-jmspw\" (UID: \"194942ca-b627-42a1-9491-9e00b44d51d7\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jmspw" Jan 23 17:06:00 crc kubenswrapper[4606]: E0123 17:06:00.426414 4606 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 17:06:00 crc kubenswrapper[4606]: E0123 17:06:00.426576 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d18bd93-7560-4405-9738-38876237699e-metrics-certs podName:4d18bd93-7560-4405-9738-38876237699e nodeName:}" failed. 
No retries permitted until 2026-01-23 17:06:01.426559319 +0000 UTC m=+795.426751725 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4d18bd93-7560-4405-9738-38876237699e-metrics-certs") pod "openstack-operator-controller-manager-86756cb559-lfhdj" (UID: "4d18bd93-7560-4405-9738-38876237699e") : secret "metrics-server-cert" not found Jan 23 17:06:00 crc kubenswrapper[4606]: E0123 17:06:00.426480 4606 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 17:06:00 crc kubenswrapper[4606]: E0123 17:06:00.426495 4606 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 17:06:00 crc kubenswrapper[4606]: E0123 17:06:00.426973 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d18bd93-7560-4405-9738-38876237699e-webhook-certs podName:4d18bd93-7560-4405-9738-38876237699e nodeName:}" failed. No retries permitted until 2026-01-23 17:06:01.426952859 +0000 UTC m=+795.427145375 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4d18bd93-7560-4405-9738-38876237699e-webhook-certs") pod "openstack-operator-controller-manager-86756cb559-lfhdj" (UID: "4d18bd93-7560-4405-9738-38876237699e") : secret "webhook-server-cert" not found Jan 23 17:06:00 crc kubenswrapper[4606]: E0123 17:06:00.426994 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/194942ca-b627-42a1-9491-9e00b44d51d7-cert podName:194942ca-b627-42a1-9491-9e00b44d51d7 nodeName:}" failed. No retries permitted until 2026-01-23 17:06:02.42698405 +0000 UTC m=+796.427176596 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/194942ca-b627-42a1-9491-9e00b44d51d7-cert") pod "infra-operator-controller-manager-58749ffdfb-jmspw" (UID: "194942ca-b627-42a1-9491-9e00b44d51d7") : secret "infra-operator-webhook-server-cert" not found Jan 23 17:06:00 crc kubenswrapper[4606]: I0123 17:06:00.616819 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-lz99v"] Jan 23 17:06:00 crc kubenswrapper[4606]: W0123 17:06:00.623380 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ae13310_86a4_4a3f_9967_cb88d1bfa536.slice/crio-f98c75e8eec749372af5147dd1df8da5cf8075802e35d2872f342bccda1b9b9f WatchSource:0}: Error finding container f98c75e8eec749372af5147dd1df8da5cf8075802e35d2872f342bccda1b9b9f: Status 404 returned error can't find the container with id f98c75e8eec749372af5147dd1df8da5cf8075802e35d2872f342bccda1b9b9f Jan 23 17:06:00 crc kubenswrapper[4606]: I0123 17:06:00.632446 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-lc2n5"] Jan 23 17:06:00 crc kubenswrapper[4606]: I0123 17:06:00.642086 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-dv22m"] Jan 23 17:06:00 crc kubenswrapper[4606]: I0123 17:06:00.648856 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6hgqs"] Jan 23 17:06:00 crc kubenswrapper[4606]: I0123 17:06:00.677756 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-ftzlw"] Jan 23 17:06:00 crc kubenswrapper[4606]: I0123 17:06:00.684535 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-qn8hz"] Jan 23 17:06:00 crc kubenswrapper[4606]: I0123 17:06:00.697218 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-sf5dg"] Jan 23 17:06:00 crc kubenswrapper[4606]: I0123 17:06:00.706843 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-kngd9"] Jan 23 17:06:00 crc kubenswrapper[4606]: I0123 17:06:00.710245 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-ttmg4"] Jan 23 17:06:00 crc kubenswrapper[4606]: I0123 17:06:00.752745 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/62e61905-d16f-41cc-b972-f788ced5c022-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx\" (UID: \"62e61905-d16f-41cc-b972-f788ced5c022\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx" Jan 23 17:06:00 crc kubenswrapper[4606]: W0123 17:06:00.752778 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3112e8c9_b0cb_4d5e_bba9_1caf429b45dc.slice/crio-8781a4b3d4a459231af9942b49a2f261eba89ffb5b3fd694656a8ba72b07326e WatchSource:0}: Error finding container 8781a4b3d4a459231af9942b49a2f261eba89ffb5b3fd694656a8ba72b07326e: Status 404 returned error can't find the container with id 
8781a4b3d4a459231af9942b49a2f261eba89ffb5b3fd694656a8ba72b07326e Jan 23 17:06:00 crc kubenswrapper[4606]: E0123 17:06:00.753045 4606 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 17:06:00 crc kubenswrapper[4606]: E0123 17:06:00.753094 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62e61905-d16f-41cc-b972-f788ced5c022-cert podName:62e61905-d16f-41cc-b972-f788ced5c022 nodeName:}" failed. No retries permitted until 2026-01-23 17:06:02.753080887 +0000 UTC m=+796.753273283 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/62e61905-d16f-41cc-b972-f788ced5c022-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx" (UID: "62e61905-d16f-41cc-b972-f788ced5c022") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 17:06:00 crc kubenswrapper[4606]: I0123 17:06:00.769653 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-h97mh"] Jan 23 17:06:00 crc kubenswrapper[4606]: I0123 17:06:00.774865 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-2z4jl"] Jan 23 17:06:00 crc kubenswrapper[4606]: I0123 17:06:00.778780 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6d9458688d-dnlq6"] Jan 23 17:06:00 crc kubenswrapper[4606]: I0123 17:06:00.804111 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-5bmr6"] Jan 23 17:06:00 crc kubenswrapper[4606]: E0123 17:06:00.815036 4606 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m7t7m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78c6999f6f-qn8hz_openstack-operators(7215e9f7-67ae-40ad-befc-e411c5acb586): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 17:06:00 crc kubenswrapper[4606]: I0123 17:06:00.816460 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ftzlw" event={"ID":"0ba239ef-03fb-4744-95cb-5aea8d9b0663","Type":"ContainerStarted","Data":"12fd26cc633f6d2bc22106d65e0fd2ee6a5496aab08d597d0c279a68e9f7aa34"} Jan 23 17:06:00 crc kubenswrapper[4606]: E0123 17:06:00.816530 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qn8hz" podUID="7215e9f7-67ae-40ad-befc-e411c5acb586" Jan 23 17:06:00 crc kubenswrapper[4606]: I0123 17:06:00.822876 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sf5dg" event={"ID":"23ed80ea-dfab-4552-86d1-972563d8d4b6","Type":"ContainerStarted","Data":"551bd9ea37605352e22d88b8891e5396ff3b1df037b6562d6c9a373640ecc372"} Jan 23 17:06:00 crc kubenswrapper[4606]: E0123 17:06:00.824000 4606 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:f2035a0d3a8cc9434ab118078297f08cb8f3df98d1c75005279ee7915a3c2551,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-44xtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-6d9458688d-dnlq6_openstack-operators(ae465942-87ad-4a76-8ead-5367413aa9a1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 17:06:00 crc kubenswrapper[4606]: E0123 17:06:00.824222 4606 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lz2ch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-2z4jl_openstack-operators(14094ea9-8be6-405b-a990-9bdbda2a50cb): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 17:06:00 crc kubenswrapper[4606]: E0123 17:06:00.824320 4606 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rkrp8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-h97mh_openstack-operators(cb8c2204-661e-4daf-b748-c329e81e04a8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 17:06:00 crc 
kubenswrapper[4606]: E0123 17:06:00.825143 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-dnlq6" podUID="ae465942-87ad-4a76-8ead-5367413aa9a1" Jan 23 17:06:00 crc kubenswrapper[4606]: E0123 17:06:00.825260 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2z4jl" podUID="14094ea9-8be6-405b-a990-9bdbda2a50cb" Jan 23 17:06:00 crc kubenswrapper[4606]: E0123 17:06:00.826361 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-h97mh" podUID="cb8c2204-661e-4daf-b748-c329e81e04a8" Jan 23 17:06:00 crc kubenswrapper[4606]: E0123 17:06:00.842122 4606 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g8z4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5d646b7d76-5bmr6_openstack-operators(566e35f9-5c83-4267-bcf6-898458e0a722): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 17:06:00 
crc kubenswrapper[4606]: E0123 17:06:00.843633 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-5bmr6" podUID="566e35f9-5c83-4267-bcf6-898458e0a722" Jan 23 17:06:00 crc kubenswrapper[4606]: I0123 17:06:00.847542 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lc2n5" event={"ID":"5cfbb45f-ca05-4e78-8392-b905640374fd","Type":"ContainerStarted","Data":"1ca2fb247b9fa42190ee8f318e418e38b793e86159b12a7bd5b3433888ff7de4"} Jan 23 17:06:00 crc kubenswrapper[4606]: I0123 17:06:00.851894 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gzdtf" event={"ID":"9b69bbaa-8277-48b5-b0aa-f8d7b7fec647","Type":"ContainerStarted","Data":"a82b3f9a917605a9408c49fa9ae23aee1951bc2c54f1ef61a3b1013689469b0d"} Jan 23 17:06:00 crc kubenswrapper[4606]: I0123 17:06:00.860855 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-dv22m" event={"ID":"9ae13310-86a4-4a3f-9967-cb88d1bfa536","Type":"ContainerStarted","Data":"f98c75e8eec749372af5147dd1df8da5cf8075802e35d2872f342bccda1b9b9f"} Jan 23 17:06:00 crc kubenswrapper[4606]: I0123 17:06:00.863075 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-kngd9" event={"ID":"3112e8c9-b0cb-4d5e-bba9-1caf429b45dc","Type":"ContainerStarted","Data":"8781a4b3d4a459231af9942b49a2f261eba89ffb5b3fd694656a8ba72b07326e"} Jan 23 17:06:00 crc kubenswrapper[4606]: I0123 17:06:00.864200 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-lz99v" event={"ID":"29bd6d8a-dcc4-46b9-b735-d38e9d7a75a8","Type":"ContainerStarted","Data":"3fd286d832d9c5f81f151ced346e34f3a7bdbd2e5085010846a3e45d86424d79"} Jan 23 17:06:00 crc kubenswrapper[4606]: I0123 17:06:00.865248 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-7c96f" event={"ID":"db44bfcf-35b0-4cd1-baef-f9e1879e863d","Type":"ContainerStarted","Data":"da3c220e21e147ce264a2e8ea70fe8918f91f8d717e43c5f6d84e811ccf3e37b"} Jan 23 17:06:00 crc kubenswrapper[4606]: I0123 17:06:00.866711 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-qxt8l" event={"ID":"c58688d9-32d0-4a57-a216-1110af3dd7dc","Type":"ContainerStarted","Data":"3fd4dc0f8f1cd8d2ada611aa7c56c608836c4b309fac46f33a834569c78ba73d"} Jan 23 17:06:00 crc kubenswrapper[4606]: I0123 17:06:00.868055 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6hgqs" event={"ID":"7e33ab4c-7c48-478a-ae66-119b3ba39b19","Type":"ContainerStarted","Data":"59e42b86cc85a36584054dde65c6534e688d5f524939bc0220607f074f5155d8"} Jan 23 17:06:00 crc kubenswrapper[4606]: I0123 17:06:00.871971 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wbg8x" event={"ID":"9ca388c9-7c19-4efb-93da-2d889c972fa7","Type":"ContainerStarted","Data":"c515053c7187abd4fdf76ec38f4d4816c9368d02bfc421581f752cba5c6b366f"} Jan 23 17:06:00 crc kubenswrapper[4606]: I0123 17:06:00.872871 4606 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-f6w2s" event={"ID":"6ab09cf8-68a3-4fac-aa7b-824a2626b3db","Type":"ContainerStarted","Data":"c8d5a20133dbcf8a35dd1d79a1caae1c63a845173fc944dd69a6e5835e67bd06"} Jan 23 17:06:00 crc kubenswrapper[4606]: I0123 17:06:00.874154 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-6lkpf" event={"ID":"7671c11a-cd18-4409-addc-caeb99b22cae","Type":"ContainerStarted","Data":"94650bdf35ec46aae4e192b8af8494cfa4b40ea4f3d03acfbd75eb238e11d606"} Jan 23 17:06:00 crc kubenswrapper[4606]: I0123 17:06:00.944313 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tbffj"] Jan 23 17:06:01 crc kubenswrapper[4606]: I0123 17:06:01.464936 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4d18bd93-7560-4405-9738-38876237699e-metrics-certs\") pod \"openstack-operator-controller-manager-86756cb559-lfhdj\" (UID: \"4d18bd93-7560-4405-9738-38876237699e\") " pod="openstack-operators/openstack-operator-controller-manager-86756cb559-lfhdj" Jan 23 17:06:01 crc kubenswrapper[4606]: I0123 17:06:01.464984 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4d18bd93-7560-4405-9738-38876237699e-webhook-certs\") pod \"openstack-operator-controller-manager-86756cb559-lfhdj\" (UID: \"4d18bd93-7560-4405-9738-38876237699e\") " pod="openstack-operators/openstack-operator-controller-manager-86756cb559-lfhdj" Jan 23 17:06:01 crc kubenswrapper[4606]: E0123 17:06:01.465338 4606 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 17:06:01 crc kubenswrapper[4606]: E0123 17:06:01.465385 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d18bd93-7560-4405-9738-38876237699e-webhook-certs podName:4d18bd93-7560-4405-9738-38876237699e nodeName:}" failed. No retries permitted until 2026-01-23 17:06:03.465371403 +0000 UTC m=+797.465563809 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4d18bd93-7560-4405-9738-38876237699e-webhook-certs") pod "openstack-operator-controller-manager-86756cb559-lfhdj" (UID: "4d18bd93-7560-4405-9738-38876237699e") : secret "webhook-server-cert" not found Jan 23 17:06:01 crc kubenswrapper[4606]: E0123 17:06:01.465817 4606 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 17:06:01 crc kubenswrapper[4606]: E0123 17:06:01.465994 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d18bd93-7560-4405-9738-38876237699e-metrics-certs podName:4d18bd93-7560-4405-9738-38876237699e nodeName:}" failed. No retries permitted until 2026-01-23 17:06:03.465966948 +0000 UTC m=+797.466159414 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4d18bd93-7560-4405-9738-38876237699e-metrics-certs") pod "openstack-operator-controller-manager-86756cb559-lfhdj" (UID: "4d18bd93-7560-4405-9738-38876237699e") : secret "metrics-server-cert" not found Jan 23 17:06:01 crc kubenswrapper[4606]: I0123 17:06:01.913895 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2z4jl" event={"ID":"14094ea9-8be6-405b-a990-9bdbda2a50cb","Type":"ContainerStarted","Data":"b9a248733ea14063c8603501863dc1fc900bb2efa4839487892840552a6b1855"} Jan 23 17:06:01 crc kubenswrapper[4606]: I0123 17:06:01.918820 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-ttmg4" event={"ID":"d040f5d8-e2a8-4171-9f70-797e87f8af96","Type":"ContainerStarted","Data":"76f50e7f3038a567fb79397253859746ed8c0ff64bf579273ace961a220724cc"} Jan 23 17:06:01 crc kubenswrapper[4606]: E0123 17:06:01.919700 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2z4jl" podUID="14094ea9-8be6-405b-a990-9bdbda2a50cb" Jan 23 17:06:01 crc kubenswrapper[4606]: I0123 17:06:01.920413 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qn8hz" event={"ID":"7215e9f7-67ae-40ad-befc-e411c5acb586","Type":"ContainerStarted","Data":"9c6f023331e41da394f8660cfd6fb6c6ccd7fab384689d96526e1836de751532"} Jan 23 17:06:01 crc kubenswrapper[4606]: I0123 17:06:01.924560 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-dnlq6" event={"ID":"ae465942-87ad-4a76-8ead-5367413aa9a1","Type":"ContainerStarted","Data":"dfd557ea494034892e8a3b8eca07acc21ae60e06aad6d918f0f1e14db5efd634"} Jan 23 17:06:01 crc kubenswrapper[4606]: I0123 17:06:01.926443 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-h97mh" event={"ID":"cb8c2204-661e-4daf-b748-c329e81e04a8","Type":"ContainerStarted","Data":"cf79f049dc1578a85fe54d0d926bce81913487241fe2973aa4258baad7e909f5"} Jan 23 17:06:01 crc kubenswrapper[4606]: E0123 17:06:01.926950 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:f2035a0d3a8cc9434ab118078297f08cb8f3df98d1c75005279ee7915a3c2551\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-dnlq6" podUID="ae465942-87ad-4a76-8ead-5367413aa9a1" Jan 23 17:06:01 crc kubenswrapper[4606]: E0123 17:06:01.927940 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-h97mh" podUID="cb8c2204-661e-4daf-b748-c329e81e04a8" Jan 23 17:06:01 crc kubenswrapper[4606]: E0123 17:06:01.927986 4606 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qn8hz" podUID="7215e9f7-67ae-40ad-befc-e411c5acb586" Jan 23 17:06:01 crc kubenswrapper[4606]: I0123 17:06:01.931411 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tbffj" event={"ID":"f8394efb-8969-47c9-98ec-b9845c0c8583","Type":"ContainerStarted","Data":"4e13e03f4dab4fbca8809dcb208a62c3ecba528cffd8f58cb51c2d39343dbef1"} Jan 23 17:06:01 crc kubenswrapper[4606]: I0123 17:06:01.961743 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-5bmr6" event={"ID":"566e35f9-5c83-4267-bcf6-898458e0a722","Type":"ContainerStarted","Data":"f1e95e35dc451938eb42ac8d7410cc4bbc5ec6d28e91d4a6856dac6514d687d4"} Jan 23 17:06:01 crc kubenswrapper[4606]: E0123 17:06:01.967354 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-5bmr6" podUID="566e35f9-5c83-4267-bcf6-898458e0a722" Jan 23 17:06:02 crc kubenswrapper[4606]: I0123 17:06:02.482871 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/194942ca-b627-42a1-9491-9e00b44d51d7-cert\") pod \"infra-operator-controller-manager-58749ffdfb-jmspw\" (UID: \"194942ca-b627-42a1-9491-9e00b44d51d7\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jmspw" Jan 23 17:06:02 crc kubenswrapper[4606]: E0123 17:06:02.484628 4606 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 17:06:02 crc kubenswrapper[4606]: E0123 17:06:02.485152 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/194942ca-b627-42a1-9491-9e00b44d51d7-cert podName:194942ca-b627-42a1-9491-9e00b44d51d7 nodeName:}" failed. No retries permitted until 2026-01-23 17:06:06.485138198 +0000 UTC m=+800.485330604 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/194942ca-b627-42a1-9491-9e00b44d51d7-cert") pod "infra-operator-controller-manager-58749ffdfb-jmspw" (UID: "194942ca-b627-42a1-9491-9e00b44d51d7") : secret "infra-operator-webhook-server-cert" not found Jan 23 17:06:02 crc kubenswrapper[4606]: I0123 17:06:02.786190 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/62e61905-d16f-41cc-b972-f788ced5c022-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx\" (UID: \"62e61905-d16f-41cc-b972-f788ced5c022\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx" Jan 23 17:06:02 crc kubenswrapper[4606]: E0123 17:06:02.786427 4606 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 17:06:02 crc kubenswrapper[4606]: E0123 17:06:02.786726 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62e61905-d16f-41cc-b972-f788ced5c022-cert podName:62e61905-d16f-41cc-b972-f788ced5c022 nodeName:}" failed. No retries permitted until 2026-01-23 17:06:06.786705947 +0000 UTC m=+800.786898353 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/62e61905-d16f-41cc-b972-f788ced5c022-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx" (UID: "62e61905-d16f-41cc-b972-f788ced5c022") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 17:06:02 crc kubenswrapper[4606]: E0123 17:06:02.976007 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-h97mh" podUID="cb8c2204-661e-4daf-b748-c329e81e04a8" Jan 23 17:06:02 crc kubenswrapper[4606]: E0123 17:06:02.976267 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-5bmr6" podUID="566e35f9-5c83-4267-bcf6-898458e0a722" Jan 23 17:06:02 crc kubenswrapper[4606]: E0123 17:06:02.976232 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:f2035a0d3a8cc9434ab118078297f08cb8f3df98d1c75005279ee7915a3c2551\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-dnlq6" podUID="ae465942-87ad-4a76-8ead-5367413aa9a1" Jan 23 17:06:02 crc kubenswrapper[4606]: E0123 17:06:02.976577 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qn8hz" 
podUID="7215e9f7-67ae-40ad-befc-e411c5acb586" Jan 23 17:06:02 crc kubenswrapper[4606]: E0123 17:06:02.977805 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2z4jl" podUID="14094ea9-8be6-405b-a990-9bdbda2a50cb" Jan 23 17:06:03 crc kubenswrapper[4606]: I0123 17:06:03.497427 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4d18bd93-7560-4405-9738-38876237699e-metrics-certs\") pod \"openstack-operator-controller-manager-86756cb559-lfhdj\" (UID: \"4d18bd93-7560-4405-9738-38876237699e\") " pod="openstack-operators/openstack-operator-controller-manager-86756cb559-lfhdj" Jan 23 17:06:03 crc kubenswrapper[4606]: I0123 17:06:03.497475 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4d18bd93-7560-4405-9738-38876237699e-webhook-certs\") pod \"openstack-operator-controller-manager-86756cb559-lfhdj\" (UID: \"4d18bd93-7560-4405-9738-38876237699e\") " pod="openstack-operators/openstack-operator-controller-manager-86756cb559-lfhdj" Jan 23 17:06:03 crc kubenswrapper[4606]: E0123 17:06:03.497656 4606 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 17:06:03 crc kubenswrapper[4606]: E0123 17:06:03.497750 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d18bd93-7560-4405-9738-38876237699e-metrics-certs podName:4d18bd93-7560-4405-9738-38876237699e nodeName:}" failed. No retries permitted until 2026-01-23 17:06:07.497732012 +0000 UTC m=+801.497924428 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4d18bd93-7560-4405-9738-38876237699e-metrics-certs") pod "openstack-operator-controller-manager-86756cb559-lfhdj" (UID: "4d18bd93-7560-4405-9738-38876237699e") : secret "metrics-server-cert" not found Jan 23 17:06:03 crc kubenswrapper[4606]: E0123 17:06:03.497656 4606 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 17:06:03 crc kubenswrapper[4606]: E0123 17:06:03.497875 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d18bd93-7560-4405-9738-38876237699e-webhook-certs podName:4d18bd93-7560-4405-9738-38876237699e nodeName:}" failed. No retries permitted until 2026-01-23 17:06:07.497863395 +0000 UTC m=+801.498055811 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4d18bd93-7560-4405-9738-38876237699e-webhook-certs") pod "openstack-operator-controller-manager-86756cb559-lfhdj" (UID: "4d18bd93-7560-4405-9738-38876237699e") : secret "webhook-server-cert" not found Jan 23 17:06:06 crc kubenswrapper[4606]: I0123 17:06:06.548376 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/194942ca-b627-42a1-9491-9e00b44d51d7-cert\") pod \"infra-operator-controller-manager-58749ffdfb-jmspw\" (UID: \"194942ca-b627-42a1-9491-9e00b44d51d7\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jmspw" Jan 23 17:06:06 crc kubenswrapper[4606]: E0123 17:06:06.548843 4606 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 17:06:06 crc kubenswrapper[4606]: E0123 17:06:06.548896 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/194942ca-b627-42a1-9491-9e00b44d51d7-cert podName:194942ca-b627-42a1-9491-9e00b44d51d7 nodeName:}" failed. No retries permitted until 2026-01-23 17:06:14.54888358 +0000 UTC m=+808.549075986 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/194942ca-b627-42a1-9491-9e00b44d51d7-cert") pod "infra-operator-controller-manager-58749ffdfb-jmspw" (UID: "194942ca-b627-42a1-9491-9e00b44d51d7") : secret "infra-operator-webhook-server-cert" not found Jan 23 17:06:06 crc kubenswrapper[4606]: I0123 17:06:06.852563 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/62e61905-d16f-41cc-b972-f788ced5c022-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx\" (UID: \"62e61905-d16f-41cc-b972-f788ced5c022\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx" Jan 23 17:06:06 crc kubenswrapper[4606]: E0123 17:06:06.852767 4606 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 17:06:06 crc kubenswrapper[4606]: E0123 17:06:06.853122 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62e61905-d16f-41cc-b972-f788ced5c022-cert podName:62e61905-d16f-41cc-b972-f788ced5c022 nodeName:}" failed. No retries permitted until 2026-01-23 17:06:14.853100736 +0000 UTC m=+808.853293142 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/62e61905-d16f-41cc-b972-f788ced5c022-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx" (UID: "62e61905-d16f-41cc-b972-f788ced5c022") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 17:06:07 crc kubenswrapper[4606]: I0123 17:06:07.562869 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4d18bd93-7560-4405-9738-38876237699e-metrics-certs\") pod \"openstack-operator-controller-manager-86756cb559-lfhdj\" (UID: \"4d18bd93-7560-4405-9738-38876237699e\") " pod="openstack-operators/openstack-operator-controller-manager-86756cb559-lfhdj" Jan 23 17:06:07 crc kubenswrapper[4606]: I0123 17:06:07.562953 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4d18bd93-7560-4405-9738-38876237699e-webhook-certs\") pod \"openstack-operator-controller-manager-86756cb559-lfhdj\" (UID: \"4d18bd93-7560-4405-9738-38876237699e\") " pod="openstack-operators/openstack-operator-controller-manager-86756cb559-lfhdj" Jan 23 17:06:07 crc kubenswrapper[4606]: E0123 17:06:07.563029 4606 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 17:06:07 crc kubenswrapper[4606]: E0123 17:06:07.563111 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d18bd93-7560-4405-9738-38876237699e-metrics-certs podName:4d18bd93-7560-4405-9738-38876237699e nodeName:}" failed. No retries permitted until 2026-01-23 17:06:15.563085934 +0000 UTC m=+809.563278360 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4d18bd93-7560-4405-9738-38876237699e-metrics-certs") pod "openstack-operator-controller-manager-86756cb559-lfhdj" (UID: "4d18bd93-7560-4405-9738-38876237699e") : secret "metrics-server-cert" not found Jan 23 17:06:07 crc kubenswrapper[4606]: E0123 17:06:07.564119 4606 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 17:06:07 crc kubenswrapper[4606]: E0123 17:06:07.564185 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d18bd93-7560-4405-9738-38876237699e-webhook-certs podName:4d18bd93-7560-4405-9738-38876237699e nodeName:}" failed. No retries permitted until 2026-01-23 17:06:15.564166712 +0000 UTC m=+809.564359138 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4d18bd93-7560-4405-9738-38876237699e-webhook-certs") pod "openstack-operator-controller-manager-86756cb559-lfhdj" (UID: "4d18bd93-7560-4405-9738-38876237699e") : secret "webhook-server-cert" not found Jan 23 17:06:14 crc kubenswrapper[4606]: I0123 17:06:14.572719 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/194942ca-b627-42a1-9491-9e00b44d51d7-cert\") pod \"infra-operator-controller-manager-58749ffdfb-jmspw\" (UID: \"194942ca-b627-42a1-9491-9e00b44d51d7\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jmspw" Jan 23 17:06:14 crc kubenswrapper[4606]: E0123 17:06:14.572922 4606 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 17:06:14 crc kubenswrapper[4606]: E0123 17:06:14.573285 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/194942ca-b627-42a1-9491-9e00b44d51d7-cert podName:194942ca-b627-42a1-9491-9e00b44d51d7 nodeName:}" failed. No retries permitted until 2026-01-23 17:06:30.573271401 +0000 UTC m=+824.573463807 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/194942ca-b627-42a1-9491-9e00b44d51d7-cert") pod "infra-operator-controller-manager-58749ffdfb-jmspw" (UID: "194942ca-b627-42a1-9491-9e00b44d51d7") : secret "infra-operator-webhook-server-cert" not found Jan 23 17:06:14 crc kubenswrapper[4606]: I0123 17:06:14.877839 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/62e61905-d16f-41cc-b972-f788ced5c022-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx\" (UID: \"62e61905-d16f-41cc-b972-f788ced5c022\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx" Jan 23 17:06:14 crc kubenswrapper[4606]: E0123 17:06:14.878037 4606 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 17:06:14 crc kubenswrapper[4606]: E0123 17:06:14.878139 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62e61905-d16f-41cc-b972-f788ced5c022-cert podName:62e61905-d16f-41cc-b972-f788ced5c022 nodeName:}" failed. No retries permitted until 2026-01-23 17:06:30.878116653 +0000 UTC m=+824.878309059 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/62e61905-d16f-41cc-b972-f788ced5c022-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx" (UID: "62e61905-d16f-41cc-b972-f788ced5c022") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 17:06:15 crc kubenswrapper[4606]: I0123 17:06:15.588637 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4d18bd93-7560-4405-9738-38876237699e-metrics-certs\") pod \"openstack-operator-controller-manager-86756cb559-lfhdj\" (UID: \"4d18bd93-7560-4405-9738-38876237699e\") " pod="openstack-operators/openstack-operator-controller-manager-86756cb559-lfhdj" Jan 23 17:06:15 crc kubenswrapper[4606]: I0123 17:06:15.588692 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4d18bd93-7560-4405-9738-38876237699e-webhook-certs\") pod \"openstack-operator-controller-manager-86756cb559-lfhdj\" (UID: \"4d18bd93-7560-4405-9738-38876237699e\") " pod="openstack-operators/openstack-operator-controller-manager-86756cb559-lfhdj" Jan 23 17:06:15 crc kubenswrapper[4606]: E0123 17:06:15.588903 4606 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 17:06:15 crc kubenswrapper[4606]: E0123 17:06:15.589004 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d18bd93-7560-4405-9738-38876237699e-metrics-certs podName:4d18bd93-7560-4405-9738-38876237699e nodeName:}" failed. No retries permitted until 2026-01-23 17:06:31.588979014 +0000 UTC m=+825.589171450 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4d18bd93-7560-4405-9738-38876237699e-metrics-certs") pod "openstack-operator-controller-manager-86756cb559-lfhdj" (UID: "4d18bd93-7560-4405-9738-38876237699e") : secret "metrics-server-cert" not found Jan 23 17:06:15 crc kubenswrapper[4606]: I0123 17:06:15.597744 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4d18bd93-7560-4405-9738-38876237699e-webhook-certs\") pod \"openstack-operator-controller-manager-86756cb559-lfhdj\" (UID: \"4d18bd93-7560-4405-9738-38876237699e\") " pod="openstack-operators/openstack-operator-controller-manager-86756cb559-lfhdj" Jan 23 17:06:19 crc kubenswrapper[4606]: E0123 17:06:19.901374 4606 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf" Jan 23 17:06:19 crc kubenswrapper[4606]: E0123 17:06:19.903026 4606 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wjlrl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-55db956ddc-ftzlw_openstack-operators(0ba239ef-03fb-4744-95cb-5aea8d9b0663): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 17:06:19 crc kubenswrapper[4606]: E0123 17:06:19.904282 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ftzlw" podUID="0ba239ef-03fb-4744-95cb-5aea8d9b0663" Jan 23 17:06:20 crc kubenswrapper[4606]: E0123 17:06:20.153508 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ftzlw" podUID="0ba239ef-03fb-4744-95cb-5aea8d9b0663" Jan 23 17:06:21 crc kubenswrapper[4606]: E0123 17:06:21.461135 4606 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492" Jan 23 17:06:21 crc kubenswrapper[4606]: E0123 17:06:21.461336 4606 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rhrsz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-594c8c9d5d-gzdtf_openstack-operators(9b69bbaa-8277-48b5-b0aa-f8d7b7fec647): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 17:06:21 crc kubenswrapper[4606]: E0123 17:06:21.462634 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gzdtf" podUID="9b69bbaa-8277-48b5-b0aa-f8d7b7fec647" Jan 23 17:06:22 crc kubenswrapper[4606]: E0123 17:06:22.183244 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gzdtf" podUID="9b69bbaa-8277-48b5-b0aa-f8d7b7fec647" Jan 23 17:06:25 crc kubenswrapper[4606]: E0123 17:06:25.092269 4606 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822" Jan 23 17:06:25 crc kubenswrapper[4606]: E0123 17:06:25.092837 4606 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-82j85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-77d5c5b54f-6hgqs_openstack-operators(7e33ab4c-7c48-478a-ae66-119b3ba39b19): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 17:06:25 crc kubenswrapper[4606]: E0123 17:06:25.094063 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6hgqs" podUID="7e33ab4c-7c48-478a-ae66-119b3ba39b19" Jan 23 17:06:25 crc kubenswrapper[4606]: E0123 17:06:25.180893 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6hgqs" 
podUID="7e33ab4c-7c48-478a-ae66-119b3ba39b19" Jan 23 17:06:26 crc kubenswrapper[4606]: E0123 17:06:26.668009 4606 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e" Jan 23 17:06:26 crc kubenswrapper[4606]: E0123 17:06:26.668200 4606 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jmbt6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-78d58447c5-sf5dg_openstack-operators(23ed80ea-dfab-4552-86d1-972563d8d4b6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 17:06:26 crc kubenswrapper[4606]: E0123 17:06:26.669281 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sf5dg" podUID="23ed80ea-dfab-4552-86d1-972563d8d4b6" Jan 23 17:06:27 crc kubenswrapper[4606]: E0123 17:06:27.197437 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sf5dg" podUID="23ed80ea-dfab-4552-86d1-972563d8d4b6" Jan 23 17:06:27 crc kubenswrapper[4606]: E0123 17:06:27.248842 4606 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84" Jan 23 17:06:27 crc kubenswrapper[4606]: E0123 17:06:27.249358 4606 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ckrlb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6b9fb5fdcb-lz99v_openstack-operators(29bd6d8a-dcc4-46b9-b735-d38e9d7a75a8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 17:06:27 crc kubenswrapper[4606]: E0123 17:06:27.251743 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-lz99v" 
podUID="29bd6d8a-dcc4-46b9-b735-d38e9d7a75a8" Jan 23 17:06:28 crc kubenswrapper[4606]: E0123 17:06:28.204621 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-lz99v" podUID="29bd6d8a-dcc4-46b9-b735-d38e9d7a75a8" Jan 23 17:06:29 crc kubenswrapper[4606]: I0123 17:06:29.933477 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:06:29 crc kubenswrapper[4606]: I0123 17:06:29.933542 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:06:30 crc kubenswrapper[4606]: I0123 17:06:30.583609 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/194942ca-b627-42a1-9491-9e00b44d51d7-cert\") pod \"infra-operator-controller-manager-58749ffdfb-jmspw\" (UID: \"194942ca-b627-42a1-9491-9e00b44d51d7\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jmspw" Jan 23 17:06:30 crc kubenswrapper[4606]: I0123 17:06:30.593452 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/194942ca-b627-42a1-9491-9e00b44d51d7-cert\") pod \"infra-operator-controller-manager-58749ffdfb-jmspw\" (UID: \"194942ca-b627-42a1-9491-9e00b44d51d7\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jmspw" Jan 23 17:06:30 crc kubenswrapper[4606]: I0123 17:06:30.668751 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jmspw" Jan 23 17:06:30 crc kubenswrapper[4606]: I0123 17:06:30.896135 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/62e61905-d16f-41cc-b972-f788ced5c022-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx\" (UID: \"62e61905-d16f-41cc-b972-f788ced5c022\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx" Jan 23 17:06:30 crc kubenswrapper[4606]: I0123 17:06:30.905226 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/62e61905-d16f-41cc-b972-f788ced5c022-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx\" (UID: \"62e61905-d16f-41cc-b972-f788ced5c022\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx" Jan 23 17:06:30 crc kubenswrapper[4606]: I0123 17:06:30.937149 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx" Jan 23 17:06:31 crc kubenswrapper[4606]: I0123 17:06:31.607531 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4d18bd93-7560-4405-9738-38876237699e-metrics-certs\") pod \"openstack-operator-controller-manager-86756cb559-lfhdj\" (UID: \"4d18bd93-7560-4405-9738-38876237699e\") " pod="openstack-operators/openstack-operator-controller-manager-86756cb559-lfhdj" Jan 23 17:06:31 crc kubenswrapper[4606]: I0123 17:06:31.621824 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4d18bd93-7560-4405-9738-38876237699e-metrics-certs\") pod \"openstack-operator-controller-manager-86756cb559-lfhdj\" (UID: \"4d18bd93-7560-4405-9738-38876237699e\") " pod="openstack-operators/openstack-operator-controller-manager-86756cb559-lfhdj" Jan 23 17:06:31 crc kubenswrapper[4606]: I0123 17:06:31.635001 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-86756cb559-lfhdj" Jan 23 17:06:33 crc kubenswrapper[4606]: E0123 17:06:33.110186 4606 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece" Jan 23 17:06:33 crc kubenswrapper[4606]: E0123 17:06:33.110434 4606 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rw965,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-b45d7bf98-wbg8x_openstack-operators(9ca388c9-7c19-4efb-93da-2d889c972fa7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 17:06:33 crc kubenswrapper[4606]: E0123 17:06:33.113008 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wbg8x" podUID="9ca388c9-7c19-4efb-93da-2d889c972fa7" Jan 23 17:06:33 crc kubenswrapper[4606]: E0123 17:06:33.238860 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece\\\"\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wbg8x" podUID="9ca388c9-7c19-4efb-93da-2d889c972fa7" Jan 23 17:06:36 crc kubenswrapper[4606]: E0123 17:06:36.529691 4606 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922" Jan 23 17:06:36 crc kubenswrapper[4606]: E0123 17:06:36.530231 4606 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8hqs5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-ttmg4_openstack-operators(d040f5d8-e2a8-4171-9f70-797e87f8af96): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 17:06:36 crc kubenswrapper[4606]: E0123 17:06:36.532036 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-ttmg4" podUID="d040f5d8-e2a8-4171-9f70-797e87f8af96" Jan 23 17:06:36 crc kubenswrapper[4606]: E0123 17:06:36.990612 4606 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 23 17:06:36 crc kubenswrapper[4606]: E0123 17:06:36.992716 4606 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9dx2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-tbffj_openstack-operators(f8394efb-8969-47c9-98ec-b9845c0c8583): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 17:06:36 crc kubenswrapper[4606]: E0123 17:06:36.994045 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tbffj" podUID="f8394efb-8969-47c9-98ec-b9845c0c8583" Jan 23 17:06:37 crc kubenswrapper[4606]: E0123 17:06:37.262758 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-ttmg4" podUID="d040f5d8-e2a8-4171-9f70-797e87f8af96" Jan 23 17:06:37 crc kubenswrapper[4606]: E0123 17:06:37.262902 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tbffj" podUID="f8394efb-8969-47c9-98ec-b9845c0c8583" Jan 23 17:06:37 crc kubenswrapper[4606]: E0123 17:06:37.575419 4606 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127" Jan 23 17:06:37 crc kubenswrapper[4606]: E0123 17:06:37.575664 4606 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m 
DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rkrp8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-h97mh_openstack-operators(cb8c2204-661e-4daf-b748-c329e81e04a8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 17:06:37 crc kubenswrapper[4606]: E0123 17:06:37.576850 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-h97mh" podUID="cb8c2204-661e-4daf-b748-c329e81e04a8" Jan 23 17:06:38 crc kubenswrapper[4606]: E0123 17:06:38.107615 4606 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:f2035a0d3a8cc9434ab118078297f08cb8f3df98d1c75005279ee7915a3c2551" Jan 23 17:06:38 crc kubenswrapper[4606]: E0123 17:06:38.107790 4606 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:f2035a0d3a8cc9434ab118078297f08cb8f3df98d1c75005279ee7915a3c2551,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-44xtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-6d9458688d-dnlq6_openstack-operators(ae465942-87ad-4a76-8ead-5367413aa9a1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 17:06:38 crc kubenswrapper[4606]: E0123 17:06:38.109007 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-dnlq6" podUID="ae465942-87ad-4a76-8ead-5367413aa9a1" Jan 23 17:06:38 crc kubenswrapper[4606]: E0123 17:06:38.610501 4606 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0" Jan 23 17:06:38 crc kubenswrapper[4606]: E0123 17:06:38.610786 4606 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g8z4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5d646b7d76-5bmr6_openstack-operators(566e35f9-5c83-4267-bcf6-898458e0a722): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 17:06:38 crc kubenswrapper[4606]: E0123 17:06:38.612612 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-5bmr6" podUID="566e35f9-5c83-4267-bcf6-898458e0a722" Jan 23 17:06:39 crc kubenswrapper[4606]: E0123 17:06:39.082617 4606 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d" Jan 23 17:06:39 crc kubenswrapper[4606]: E0123 17:06:39.082799 4606 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lz2ch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-2z4jl_openstack-operators(14094ea9-8be6-405b-a990-9bdbda2a50cb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 17:06:39 crc kubenswrapper[4606]: E0123 17:06:39.084168 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2z4jl" podUID="14094ea9-8be6-405b-a990-9bdbda2a50cb" Jan 23 17:06:39 crc kubenswrapper[4606]: E0123 17:06:39.581812 4606 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831" Jan 23 17:06:39 crc kubenswrapper[4606]: E0123 17:06:39.582017 4606 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qbcg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-6b8bc8d87d-dv22m_openstack-operators(9ae13310-86a4-4a3f-9967-cb88d1bfa536): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 17:06:39 crc kubenswrapper[4606]: E0123 17:06:39.583938 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-dv22m" podUID="9ae13310-86a4-4a3f-9967-cb88d1bfa536" Jan 23 17:06:40 crc kubenswrapper[4606]: E0123 17:06:40.110183 4606 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 23 17:06:40 crc kubenswrapper[4606]: E0123 17:06:40.110399 4606 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gjx4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-lc2n5_openstack-operators(5cfbb45f-ca05-4e78-8392-b905640374fd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 17:06:40 crc kubenswrapper[4606]: E0123 17:06:40.111731 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lc2n5" podUID="5cfbb45f-ca05-4e78-8392-b905640374fd" Jan 23 17:06:40 crc kubenswrapper[4606]: E0123 17:06:40.305573 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831\\\"\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-dv22m" podUID="9ae13310-86a4-4a3f-9967-cb88d1bfa536" Jan 23 17:06:40 crc kubenswrapper[4606]: E0123 17:06:40.306214 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lc2n5" podUID="5cfbb45f-ca05-4e78-8392-b905640374fd" Jan 23 17:06:40 crc kubenswrapper[4606]: I0123 17:06:40.654727 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx"] Jan 23 17:06:40 crc kubenswrapper[4606]: W0123 17:06:40.666748 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62e61905_d16f_41cc_b972_f788ced5c022.slice/crio-de47366abd4e994ce918ad59fea79d494cb0bbfbf118b5b8b13d074d6d06af86 WatchSource:0}: Error 
finding container de47366abd4e994ce918ad59fea79d494cb0bbfbf118b5b8b13d074d6d06af86: Status 404 returned error can't find the container with id de47366abd4e994ce918ad59fea79d494cb0bbfbf118b5b8b13d074d6d06af86 Jan 23 17:06:40 crc kubenswrapper[4606]: I0123 17:06:40.785123 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-58749ffdfb-jmspw"] Jan 23 17:06:40 crc kubenswrapper[4606]: I0123 17:06:40.817104 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-86756cb559-lfhdj"] Jan 23 17:06:40 crc kubenswrapper[4606]: W0123 17:06:40.825816 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d18bd93_7560_4405_9738_38876237699e.slice/crio-8ff4c3637d7383881e9097b4d077f2d45e3202b6fb2bf67e74ee554746b897d5 WatchSource:0}: Error finding container 8ff4c3637d7383881e9097b4d077f2d45e3202b6fb2bf67e74ee554746b897d5: Status 404 returned error can't find the container with id 8ff4c3637d7383881e9097b4d077f2d45e3202b6fb2bf67e74ee554746b897d5 Jan 23 17:06:41 crc kubenswrapper[4606]: I0123 17:06:41.301625 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-6lkpf" event={"ID":"7671c11a-cd18-4409-addc-caeb99b22cae","Type":"ContainerStarted","Data":"d31b3b01761301f3010c7aab892d128e8f582e7dbdc3451c6681ac02e0502dc4"} Jan 23 17:06:41 crc kubenswrapper[4606]: I0123 17:06:41.301772 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-6lkpf" Jan 23 17:06:41 crc kubenswrapper[4606]: I0123 17:06:41.302711 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-7c96f" event={"ID":"db44bfcf-35b0-4cd1-baef-f9e1879e863d","Type":"ContainerStarted","Data":"96029f98af57c98fd12f125cbbd373048ad0a5fe173644f77dd94ed526203696"} Jan 23 17:06:41 crc kubenswrapper[4606]: I0123 17:06:41.302810 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-7c96f" Jan 23 17:06:41 crc kubenswrapper[4606]: I0123 17:06:41.309429 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6hgqs" event={"ID":"7e33ab4c-7c48-478a-ae66-119b3ba39b19","Type":"ContainerStarted","Data":"19d38761cbee73fe606be2b1b89967a01f2c86c0a2d8c1d752e461d9f86dbcf8"} Jan 23 17:06:41 crc kubenswrapper[4606]: I0123 17:06:41.327276 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qn8hz" event={"ID":"7215e9f7-67ae-40ad-befc-e411c5acb586","Type":"ContainerStarted","Data":"b7f3835ba76ab5d60ac6d1f1c813eebc94f58fd393d383deceae75c76b36c1f7"} Jan 23 17:06:41 crc kubenswrapper[4606]: I0123 17:06:41.330158 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qn8hz" Jan 23 17:06:41 crc kubenswrapper[4606]: I0123 17:06:41.330687 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx" event={"ID":"62e61905-d16f-41cc-b972-f788ced5c022","Type":"ContainerStarted","Data":"de47366abd4e994ce918ad59fea79d494cb0bbfbf118b5b8b13d074d6d06af86"} Jan 23 17:06:41 crc 
kubenswrapper[4606]: I0123 17:06:41.331439 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-86756cb559-lfhdj" event={"ID":"4d18bd93-7560-4405-9738-38876237699e","Type":"ContainerStarted","Data":"8ff4c3637d7383881e9097b4d077f2d45e3202b6fb2bf67e74ee554746b897d5"} Jan 23 17:06:41 crc kubenswrapper[4606]: I0123 17:06:41.332347 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ftzlw" event={"ID":"0ba239ef-03fb-4744-95cb-5aea8d9b0663","Type":"ContainerStarted","Data":"ce552519b70cfa075c326d7d2f49e96964f32489546902f3b108e31504c466e4"} Jan 23 17:06:41 crc kubenswrapper[4606]: I0123 17:06:41.333101 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ftzlw" Jan 23 17:06:41 crc kubenswrapper[4606]: I0123 17:06:41.340168 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-qxt8l" event={"ID":"c58688d9-32d0-4a57-a216-1110af3dd7dc","Type":"ContainerStarted","Data":"bc3c107e33675eadf9c74bf6862e2ef921ddab8aa6a6790d1749b73eee75384f"} Jan 23 17:06:41 crc kubenswrapper[4606]: I0123 17:06:41.342057 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-qxt8l" Jan 23 17:06:41 crc kubenswrapper[4606]: I0123 17:06:41.351795 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jmspw" event={"ID":"194942ca-b627-42a1-9491-9e00b44d51d7","Type":"ContainerStarted","Data":"a17c5c3d5c47b5c774efe359c23b5845f9fcb5075ec6b991f0ab71d5bd9ee145"} Jan 23 17:06:41 crc kubenswrapper[4606]: I0123 17:06:41.365179 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-f6w2s" event={"ID":"6ab09cf8-68a3-4fac-aa7b-824a2626b3db","Type":"ContainerStarted","Data":"4028760cdb68286d3bddbb85c782432df58ce062008e35f1e686b3b5f7adc898"} Jan 23 17:06:41 crc kubenswrapper[4606]: I0123 17:06:41.366085 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-f6w2s" Jan 23 17:06:41 crc kubenswrapper[4606]: I0123 17:06:41.372621 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-kngd9" event={"ID":"3112e8c9-b0cb-4d5e-bba9-1caf429b45dc","Type":"ContainerStarted","Data":"d951b30c33403956bf3327f2c4667fdd0f820a95d1626cad4c04589b5542e7c1"} Jan 23 17:06:41 crc kubenswrapper[4606]: I0123 17:06:41.373432 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-kngd9" Jan 23 17:06:41 crc kubenswrapper[4606]: I0123 17:06:41.375482 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-6lkpf" podStartSLOduration=5.111362109 podStartE2EDuration="43.375469906s" podCreationTimestamp="2026-01-23 17:05:58 +0000 UTC" firstStartedPulling="2026-01-23 17:06:00.340005451 +0000 UTC m=+794.340197857" lastFinishedPulling="2026-01-23 17:06:38.604113248 +0000 UTC m=+832.604305654" observedRunningTime="2026-01-23 17:06:41.369427643 +0000 UTC m=+835.369620049" watchObservedRunningTime="2026-01-23 17:06:41.375469906 +0000 UTC 
m=+835.375662302" Jan 23 17:06:41 crc kubenswrapper[4606]: I0123 17:06:41.386125 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sf5dg" event={"ID":"23ed80ea-dfab-4552-86d1-972563d8d4b6","Type":"ContainerStarted","Data":"201ea2b9f34b84d8dc39d1249dbc35f1aba6f960dfcad7e65448a095f235de8c"} Jan 23 17:06:41 crc kubenswrapper[4606]: I0123 17:06:41.386316 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sf5dg" Jan 23 17:06:41 crc kubenswrapper[4606]: I0123 17:06:41.394822 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gzdtf" event={"ID":"9b69bbaa-8277-48b5-b0aa-f8d7b7fec647","Type":"ContainerStarted","Data":"e5b65d866fef26a36bc70bc57dc8c2496dd193544d734f3d05578ebe1d08dab8"} Jan 23 17:06:41 crc kubenswrapper[4606]: I0123 17:06:41.396416 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gzdtf" Jan 23 17:06:41 crc kubenswrapper[4606]: I0123 17:06:41.525551 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-qxt8l" podStartSLOduration=5.136972061 podStartE2EDuration="43.525534442s" podCreationTimestamp="2026-01-23 17:05:58 +0000 UTC" firstStartedPulling="2026-01-23 17:06:00.216510122 +0000 UTC m=+794.216702528" lastFinishedPulling="2026-01-23 17:06:38.605072503 +0000 UTC m=+832.605264909" observedRunningTime="2026-01-23 17:06:41.501310202 +0000 UTC m=+835.501502628" watchObservedRunningTime="2026-01-23 17:06:41.525534442 +0000 UTC m=+835.525726848" Jan 23 17:06:41 crc kubenswrapper[4606]: I0123 17:06:41.528763 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-7c96f" podStartSLOduration=4.826670754 podStartE2EDuration="43.528750343s" podCreationTimestamp="2026-01-23 17:05:58 +0000 UTC" firstStartedPulling="2026-01-23 17:05:59.901744132 +0000 UTC m=+793.901936538" lastFinishedPulling="2026-01-23 17:06:38.603823721 +0000 UTC m=+832.604016127" observedRunningTime="2026-01-23 17:06:41.524873845 +0000 UTC m=+835.525066251" watchObservedRunningTime="2026-01-23 17:06:41.528750343 +0000 UTC m=+835.528942749" Jan 23 17:06:41 crc kubenswrapper[4606]: I0123 17:06:41.552236 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ftzlw" podStartSLOduration=4.140838263 podStartE2EDuration="43.552218424s" podCreationTimestamp="2026-01-23 17:05:58 +0000 UTC" firstStartedPulling="2026-01-23 17:06:00.766160736 +0000 UTC m=+794.766353142" lastFinishedPulling="2026-01-23 17:06:40.177540897 +0000 UTC m=+834.177733303" observedRunningTime="2026-01-23 17:06:41.547663439 +0000 UTC m=+835.547855855" watchObservedRunningTime="2026-01-23 17:06:41.552218424 +0000 UTC m=+835.552410830" Jan 23 17:06:41 crc kubenswrapper[4606]: I0123 17:06:41.598054 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6hgqs" podStartSLOduration=4.110023718 podStartE2EDuration="43.598034347s" podCreationTimestamp="2026-01-23 17:05:58 +0000 UTC" firstStartedPulling="2026-01-23 17:06:00.692132593 +0000 UTC m=+794.692324999" lastFinishedPulling="2026-01-23 
17:06:40.180143232 +0000 UTC m=+834.180335628" observedRunningTime="2026-01-23 17:06:41.591546243 +0000 UTC m=+835.591738649" watchObservedRunningTime="2026-01-23 17:06:41.598034347 +0000 UTC m=+835.598226763" Jan 23 17:06:41 crc kubenswrapper[4606]: I0123 17:06:41.641881 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-f6w2s" podStartSLOduration=5.196013418 podStartE2EDuration="43.64186458s" podCreationTimestamp="2026-01-23 17:05:58 +0000 UTC" firstStartedPulling="2026-01-23 17:06:00.159077717 +0000 UTC m=+794.159270123" lastFinishedPulling="2026-01-23 17:06:38.604928869 +0000 UTC m=+832.605121285" observedRunningTime="2026-01-23 17:06:41.62041951 +0000 UTC m=+835.620611916" watchObservedRunningTime="2026-01-23 17:06:41.64186458 +0000 UTC m=+835.642056986" Jan 23 17:06:41 crc kubenswrapper[4606]: I0123 17:06:41.680303 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qn8hz" podStartSLOduration=4.356277825 podStartE2EDuration="43.680286366s" podCreationTimestamp="2026-01-23 17:05:58 +0000 UTC" firstStartedPulling="2026-01-23 17:06:00.814908563 +0000 UTC m=+794.815100969" lastFinishedPulling="2026-01-23 17:06:40.138917104 +0000 UTC m=+834.139109510" observedRunningTime="2026-01-23 17:06:41.67364595 +0000 UTC m=+835.673838356" watchObservedRunningTime="2026-01-23 17:06:41.680286366 +0000 UTC m=+835.680478772" Jan 23 17:06:41 crc kubenswrapper[4606]: I0123 17:06:41.773476 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gzdtf" podStartSLOduration=3.8910402360000003 podStartE2EDuration="43.773460082s" podCreationTimestamp="2026-01-23 17:05:58 +0000 UTC" firstStartedPulling="2026-01-23 17:06:00.297663295 +0000 UTC m=+794.297855691" lastFinishedPulling="2026-01-23 17:06:40.180083131 +0000 UTC m=+834.180275537" observedRunningTime="2026-01-23 17:06:41.768128757 +0000 UTC m=+835.768321163" watchObservedRunningTime="2026-01-23 17:06:41.773460082 +0000 UTC m=+835.773652728" Jan 23 17:06:41 crc kubenswrapper[4606]: I0123 17:06:41.811696 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sf5dg" podStartSLOduration=4.320443302 podStartE2EDuration="43.811681413s" podCreationTimestamp="2026-01-23 17:05:58 +0000 UTC" firstStartedPulling="2026-01-23 17:06:00.706551415 +0000 UTC m=+794.706743821" lastFinishedPulling="2026-01-23 17:06:40.197789526 +0000 UTC m=+834.197981932" observedRunningTime="2026-01-23 17:06:41.810715259 +0000 UTC m=+835.810907665" watchObservedRunningTime="2026-01-23 17:06:41.811681413 +0000 UTC m=+835.811873819" Jan 23 17:06:41 crc kubenswrapper[4606]: I0123 17:06:41.838854 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-kngd9" podStartSLOduration=6.000419034 podStartE2EDuration="43.838835587s" podCreationTimestamp="2026-01-23 17:05:58 +0000 UTC" firstStartedPulling="2026-01-23 17:06:00.765773697 +0000 UTC m=+794.765966103" lastFinishedPulling="2026-01-23 17:06:38.60419024 +0000 UTC m=+832.604382656" observedRunningTime="2026-01-23 17:06:41.838103678 +0000 UTC m=+835.838296084" watchObservedRunningTime="2026-01-23 17:06:41.838835587 +0000 UTC m=+835.839027993" Jan 23 17:06:42 crc kubenswrapper[4606]: I0123 17:06:42.402955 
4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-86756cb559-lfhdj" event={"ID":"4d18bd93-7560-4405-9738-38876237699e","Type":"ContainerStarted","Data":"9e69fef21e5bbfa926ad36eb7c635a46b59eb0f3dbc968d43b5451a61c263e50"} Jan 23 17:06:42 crc kubenswrapper[4606]: I0123 17:06:42.431857 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-86756cb559-lfhdj" podStartSLOduration=43.431834301 podStartE2EDuration="43.431834301s" podCreationTimestamp="2026-01-23 17:05:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:06:42.425542923 +0000 UTC m=+836.425735349" watchObservedRunningTime="2026-01-23 17:06:42.431834301 +0000 UTC m=+836.432026707" Jan 23 17:06:43 crc kubenswrapper[4606]: I0123 17:06:43.409496 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-86756cb559-lfhdj" Jan 23 17:06:48 crc kubenswrapper[4606]: I0123 17:06:48.638253 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-7c96f" Jan 23 17:06:48 crc kubenswrapper[4606]: I0123 17:06:48.734496 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-qxt8l" Jan 23 17:06:48 crc kubenswrapper[4606]: I0123 17:06:48.831864 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6hgqs" Jan 23 17:06:48 crc kubenswrapper[4606]: I0123 17:06:48.833787 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6hgqs" Jan 23 17:06:48 crc kubenswrapper[4606]: I0123 17:06:48.842232 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gzdtf" Jan 23 17:06:48 crc kubenswrapper[4606]: I0123 17:06:48.869988 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-6lkpf" Jan 23 17:06:48 crc kubenswrapper[4606]: I0123 17:06:48.921162 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-f6w2s" Jan 23 17:06:49 crc kubenswrapper[4606]: I0123 17:06:49.179729 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qn8hz" Jan 23 17:06:49 crc kubenswrapper[4606]: E0123 17:06:49.369334 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-h97mh" podUID="cb8c2204-661e-4daf-b748-c329e81e04a8" Jan 23 17:06:49 crc kubenswrapper[4606]: I0123 17:06:49.429454 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-kngd9" Jan 23 17:06:49 crc 
kubenswrapper[4606]: I0123 17:06:49.450776 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx" event={"ID":"62e61905-d16f-41cc-b972-f788ced5c022","Type":"ContainerStarted","Data":"2382f522199b4d6e4b9e5dd15acb943accc7e5df8076c8c32c30cca10ddb913c"} Jan 23 17:06:49 crc kubenswrapper[4606]: I0123 17:06:49.450914 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx" Jan 23 17:06:49 crc kubenswrapper[4606]: I0123 17:06:49.452021 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jmspw" event={"ID":"194942ca-b627-42a1-9491-9e00b44d51d7","Type":"ContainerStarted","Data":"98e6e7318bdf75a9d67939ec7da3d523f345b87462b82f158c1159ef41cf7013"} Jan 23 17:06:49 crc kubenswrapper[4606]: I0123 17:06:49.452123 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jmspw" Jan 23 17:06:49 crc kubenswrapper[4606]: I0123 17:06:49.453156 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-lz99v" event={"ID":"29bd6d8a-dcc4-46b9-b735-d38e9d7a75a8","Type":"ContainerStarted","Data":"550738b9d7e7c9b9802e2298b675134b434d186cffe338fd98704e835a9bd21c"} Jan 23 17:06:49 crc kubenswrapper[4606]: I0123 17:06:49.453312 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-lz99v" Jan 23 17:06:49 crc kubenswrapper[4606]: I0123 17:06:49.454187 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wbg8x" event={"ID":"9ca388c9-7c19-4efb-93da-2d889c972fa7","Type":"ContainerStarted","Data":"fb6bb4f275a5cf666f03c01b72d9164b4bc8e9ec4fe9086f6b23b06f2caaa70f"} Jan 23 17:06:49 crc kubenswrapper[4606]: I0123 17:06:49.454387 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wbg8x" Jan 23 17:06:49 crc kubenswrapper[4606]: I0123 17:06:49.463442 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ftzlw" Jan 23 17:06:49 crc kubenswrapper[4606]: I0123 17:06:49.503217 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx" podStartSLOduration=43.738323117 podStartE2EDuration="51.503198026s" podCreationTimestamp="2026-01-23 17:05:58 +0000 UTC" firstStartedPulling="2026-01-23 17:06:40.668874012 +0000 UTC m=+834.669066418" lastFinishedPulling="2026-01-23 17:06:48.433748921 +0000 UTC m=+842.433941327" observedRunningTime="2026-01-23 17:06:49.499742349 +0000 UTC m=+843.499934785" watchObservedRunningTime="2026-01-23 17:06:49.503198026 +0000 UTC m=+843.503390432" Jan 23 17:06:49 crc kubenswrapper[4606]: I0123 17:06:49.534892 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-lz99v" podStartSLOduration=3.721888179 podStartE2EDuration="51.534870413s" podCreationTimestamp="2026-01-23 17:05:58 +0000 UTC" firstStartedPulling="2026-01-23 17:06:00.611789841 +0000 UTC m=+794.611982247" 
lastFinishedPulling="2026-01-23 17:06:48.424772075 +0000 UTC m=+842.424964481" observedRunningTime="2026-01-23 17:06:49.528352668 +0000 UTC m=+843.528545084" watchObservedRunningTime="2026-01-23 17:06:49.534870413 +0000 UTC m=+843.535062819" Jan 23 17:06:49 crc kubenswrapper[4606]: I0123 17:06:49.538369 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sf5dg" Jan 23 17:06:49 crc kubenswrapper[4606]: I0123 17:06:49.604104 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jmspw" podStartSLOduration=43.947378388 podStartE2EDuration="51.604087105s" podCreationTimestamp="2026-01-23 17:05:58 +0000 UTC" firstStartedPulling="2026-01-23 17:06:40.801045328 +0000 UTC m=+834.801237734" lastFinishedPulling="2026-01-23 17:06:48.457754045 +0000 UTC m=+842.457946451" observedRunningTime="2026-01-23 17:06:49.597970991 +0000 UTC m=+843.598163417" watchObservedRunningTime="2026-01-23 17:06:49.604087105 +0000 UTC m=+843.604279511" Jan 23 17:06:49 crc kubenswrapper[4606]: I0123 17:06:49.620724 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wbg8x" podStartSLOduration=3.29479442 podStartE2EDuration="51.620705003s" podCreationTimestamp="2026-01-23 17:05:58 +0000 UTC" firstStartedPulling="2026-01-23 17:06:00.205791243 +0000 UTC m=+794.205983649" lastFinishedPulling="2026-01-23 17:06:48.531701826 +0000 UTC m=+842.531894232" observedRunningTime="2026-01-23 17:06:49.617377659 +0000 UTC m=+843.617570065" watchObservedRunningTime="2026-01-23 17:06:49.620705003 +0000 UTC m=+843.620897409" Jan 23 17:06:50 crc kubenswrapper[4606]: E0123 17:06:50.366684 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-5bmr6" podUID="566e35f9-5c83-4267-bcf6-898458e0a722" Jan 23 17:06:50 crc kubenswrapper[4606]: I0123 17:06:50.461511 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tbffj" event={"ID":"f8394efb-8969-47c9-98ec-b9845c0c8583","Type":"ContainerStarted","Data":"ac2db089da535252f89fd9751b17120303a08374a56deaa33f58203397b68345"} Jan 23 17:06:50 crc kubenswrapper[4606]: I0123 17:06:50.500302 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tbffj" podStartSLOduration=2.7152226539999997 podStartE2EDuration="51.500281773s" podCreationTimestamp="2026-01-23 17:05:59 +0000 UTC" firstStartedPulling="2026-01-23 17:06:00.966680452 +0000 UTC m=+794.966872858" lastFinishedPulling="2026-01-23 17:06:49.751739571 +0000 UTC m=+843.751931977" observedRunningTime="2026-01-23 17:06:50.493594146 +0000 UTC m=+844.493786552" watchObservedRunningTime="2026-01-23 17:06:50.500281773 +0000 UTC m=+844.500474179" Jan 23 17:06:51 crc kubenswrapper[4606]: E0123 17:06:51.366549 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2z4jl" podUID="14094ea9-8be6-405b-a990-9bdbda2a50cb" Jan 23 17:06:51 crc kubenswrapper[4606]: I0123 17:06:51.468748 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-ttmg4" event={"ID":"d040f5d8-e2a8-4171-9f70-797e87f8af96","Type":"ContainerStarted","Data":"97dde886bc9eae41c3e14ff9a1b3975bfc487e0a2c12ee5d3f8983e1c6d8edf6"} Jan 23 17:06:51 crc kubenswrapper[4606]: I0123 17:06:51.468965 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-ttmg4" Jan 23 17:06:51 crc kubenswrapper[4606]: I0123 17:06:51.482285 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-ttmg4" podStartSLOduration=3.41459151 podStartE2EDuration="53.482264399s" podCreationTimestamp="2026-01-23 17:05:58 +0000 UTC" firstStartedPulling="2026-01-23 17:06:00.814527943 +0000 UTC m=+794.814720349" lastFinishedPulling="2026-01-23 17:06:50.882200832 +0000 UTC m=+844.882393238" observedRunningTime="2026-01-23 17:06:51.480657882 +0000 UTC m=+845.480850308" watchObservedRunningTime="2026-01-23 17:06:51.482264399 +0000 UTC m=+845.482456805" Jan 23 17:06:51 crc kubenswrapper[4606]: I0123 17:06:51.640300 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-86756cb559-lfhdj" Jan 23 17:06:52 crc kubenswrapper[4606]: E0123 17:06:52.367273 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:f2035a0d3a8cc9434ab118078297f08cb8f3df98d1c75005279ee7915a3c2551\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-dnlq6" podUID="ae465942-87ad-4a76-8ead-5367413aa9a1" Jan 23 17:06:54 crc kubenswrapper[4606]: I0123 17:06:54.490323 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-dv22m" event={"ID":"9ae13310-86a4-4a3f-9967-cb88d1bfa536","Type":"ContainerStarted","Data":"4757c65b3ffd5c6be2c30c0e6b1e7dc3a563783965837e3f1fced5bb323d6ae1"} Jan 23 17:06:54 crc kubenswrapper[4606]: I0123 17:06:54.490857 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-dv22m" Jan 23 17:06:54 crc kubenswrapper[4606]: I0123 17:06:54.492119 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lc2n5" event={"ID":"5cfbb45f-ca05-4e78-8392-b905640374fd","Type":"ContainerStarted","Data":"7b2a4ea60e33d319d388f74e75810afbe6a8a7b99ab453e49f05212485e75dba"} Jan 23 17:06:54 crc kubenswrapper[4606]: I0123 17:06:54.492318 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lc2n5" Jan 23 17:06:54 crc kubenswrapper[4606]: I0123 17:06:54.530522 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-dv22m" podStartSLOduration=3.225191335 podStartE2EDuration="56.530506879s" 
podCreationTimestamp="2026-01-23 17:05:58 +0000 UTC" firstStartedPulling="2026-01-23 17:06:00.626629104 +0000 UTC m=+794.626821510" lastFinishedPulling="2026-01-23 17:06:53.931944648 +0000 UTC m=+847.932137054" observedRunningTime="2026-01-23 17:06:54.521480998 +0000 UTC m=+848.521673404" watchObservedRunningTime="2026-01-23 17:06:54.530506879 +0000 UTC m=+848.530699285" Jan 23 17:06:54 crc kubenswrapper[4606]: I0123 17:06:54.548581 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lc2n5" podStartSLOduration=3.231801416 podStartE2EDuration="56.548557383s" podCreationTimestamp="2026-01-23 17:05:58 +0000 UTC" firstStartedPulling="2026-01-23 17:06:00.611392261 +0000 UTC m=+794.611584667" lastFinishedPulling="2026-01-23 17:06:53.928148228 +0000 UTC m=+847.928340634" observedRunningTime="2026-01-23 17:06:54.541516888 +0000 UTC m=+848.541709294" watchObservedRunningTime="2026-01-23 17:06:54.548557383 +0000 UTC m=+848.548749839" Jan 23 17:06:58 crc kubenswrapper[4606]: I0123 17:06:58.961753 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wbg8x" Jan 23 17:06:59 crc kubenswrapper[4606]: I0123 17:06:59.165337 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lc2n5" Jan 23 17:06:59 crc kubenswrapper[4606]: I0123 17:06:59.216675 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-lz99v" Jan 23 17:06:59 crc kubenswrapper[4606]: I0123 17:06:59.262528 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-dv22m" Jan 23 17:06:59 crc kubenswrapper[4606]: I0123 17:06:59.577837 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-ttmg4" Jan 23 17:06:59 crc kubenswrapper[4606]: I0123 17:06:59.934223 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:06:59 crc kubenswrapper[4606]: I0123 17:06:59.934292 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:06:59 crc kubenswrapper[4606]: I0123 17:06:59.934372 4606 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" Jan 23 17:06:59 crc kubenswrapper[4606]: I0123 17:06:59.935131 4606 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8f3c9d6abf9f7db362dce7fd403f9c2cfbf053099332de8a97b1275b83560c1e"} pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 17:06:59 crc kubenswrapper[4606]: I0123 
17:06:59.935204 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" containerID="cri-o://8f3c9d6abf9f7db362dce7fd403f9c2cfbf053099332de8a97b1275b83560c1e" gracePeriod=600 Jan 23 17:07:00 crc kubenswrapper[4606]: I0123 17:07:00.534113 4606 generic.go:334] "Generic (PLEG): container finished" podID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerID="8f3c9d6abf9f7db362dce7fd403f9c2cfbf053099332de8a97b1275b83560c1e" exitCode=0 Jan 23 17:07:00 crc kubenswrapper[4606]: I0123 17:07:00.534187 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerDied","Data":"8f3c9d6abf9f7db362dce7fd403f9c2cfbf053099332de8a97b1275b83560c1e"} Jan 23 17:07:00 crc kubenswrapper[4606]: I0123 17:07:00.534674 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerStarted","Data":"3cef2b80453103e1f146f2cbf852884e416a5490d931dcd7771b99f87019e13d"} Jan 23 17:07:00 crc kubenswrapper[4606]: I0123 17:07:00.534702 4606 scope.go:117] "RemoveContainer" containerID="6bf772329cdfa9ebae83dfcb79df9cfd8a68a9a3c277497c6d856cec3d4e5cf8" Jan 23 17:07:00 crc kubenswrapper[4606]: I0123 17:07:00.674408 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jmspw" Jan 23 17:07:00 crc kubenswrapper[4606]: I0123 17:07:00.944274 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx" Jan 23 17:07:02 crc kubenswrapper[4606]: I0123 17:07:02.368706 4606 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 17:07:03 crc kubenswrapper[4606]: I0123 17:07:03.562575 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-5bmr6" event={"ID":"566e35f9-5c83-4267-bcf6-898458e0a722","Type":"ContainerStarted","Data":"936fd703289b400be2e820f661b3d1938292f4620df7e1c8f3d13cbaec3d0d31"} Jan 23 17:07:03 crc kubenswrapper[4606]: I0123 17:07:03.564035 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-5bmr6" Jan 23 17:07:03 crc kubenswrapper[4606]: I0123 17:07:03.583121 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-5bmr6" podStartSLOduration=3.509148386 podStartE2EDuration="1m5.583101137s" podCreationTimestamp="2026-01-23 17:05:58 +0000 UTC" firstStartedPulling="2026-01-23 17:06:00.842003375 +0000 UTC m=+794.842195781" lastFinishedPulling="2026-01-23 17:07:02.915956126 +0000 UTC m=+856.916148532" observedRunningTime="2026-01-23 17:07:03.576990513 +0000 UTC m=+857.577182919" watchObservedRunningTime="2026-01-23 17:07:03.583101137 +0000 UTC m=+857.583293543" Jan 23 17:07:04 crc kubenswrapper[4606]: I0123 17:07:04.571279 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-h97mh" 
event={"ID":"cb8c2204-661e-4daf-b748-c329e81e04a8","Type":"ContainerStarted","Data":"6cd0ceffa4048046ab1e26ebd96b93a49a312d6196aa13bc66a1ecd0834dafc3"} Jan 23 17:07:04 crc kubenswrapper[4606]: I0123 17:07:04.571913 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-h97mh" Jan 23 17:07:04 crc kubenswrapper[4606]: I0123 17:07:04.588302 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-h97mh" podStartSLOduration=3.5036621869999998 podStartE2EDuration="1m6.588281678s" podCreationTimestamp="2026-01-23 17:05:58 +0000 UTC" firstStartedPulling="2026-01-23 17:06:00.824263668 +0000 UTC m=+794.824456074" lastFinishedPulling="2026-01-23 17:07:03.908883159 +0000 UTC m=+857.909075565" observedRunningTime="2026-01-23 17:07:04.585204975 +0000 UTC m=+858.585397381" watchObservedRunningTime="2026-01-23 17:07:04.588281678 +0000 UTC m=+858.588474124" Jan 23 17:07:06 crc kubenswrapper[4606]: I0123 17:07:06.584544 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-dnlq6" event={"ID":"ae465942-87ad-4a76-8ead-5367413aa9a1","Type":"ContainerStarted","Data":"39a6130538689482a2cd5a074a06d172c646a93ad9ab688144748331b23b02ec"} Jan 23 17:07:06 crc kubenswrapper[4606]: I0123 17:07:06.585496 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-dnlq6" Jan 23 17:07:06 crc kubenswrapper[4606]: I0123 17:07:06.600775 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-dnlq6" podStartSLOduration=2.493795584 podStartE2EDuration="1m7.6007605s" podCreationTimestamp="2026-01-23 17:05:59 +0000 UTC" firstStartedPulling="2026-01-23 17:06:00.8231194 +0000 UTC m=+794.823311806" lastFinishedPulling="2026-01-23 17:07:05.930084316 +0000 UTC m=+859.930276722" observedRunningTime="2026-01-23 17:07:06.599038039 +0000 UTC m=+860.599230445" watchObservedRunningTime="2026-01-23 17:07:06.6007605 +0000 UTC m=+860.600952906" Jan 23 17:07:07 crc kubenswrapper[4606]: I0123 17:07:07.591683 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2z4jl" event={"ID":"14094ea9-8be6-405b-a990-9bdbda2a50cb","Type":"ContainerStarted","Data":"ca7cb9d83e6b712f445bea1bd445fd5730be210d6d99a6d1820cd7289746c110"} Jan 23 17:07:07 crc kubenswrapper[4606]: I0123 17:07:07.592174 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2z4jl" Jan 23 17:07:07 crc kubenswrapper[4606]: I0123 17:07:07.605717 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2z4jl" podStartSLOduration=2.412490218 podStartE2EDuration="1m8.605704155s" podCreationTimestamp="2026-01-23 17:05:59 +0000 UTC" firstStartedPulling="2026-01-23 17:06:00.824071533 +0000 UTC m=+794.824263939" lastFinishedPulling="2026-01-23 17:07:07.01728545 +0000 UTC m=+861.017477876" observedRunningTime="2026-01-23 17:07:07.605562001 +0000 UTC m=+861.605754407" watchObservedRunningTime="2026-01-23 17:07:07.605704155 +0000 UTC m=+861.605896551" Jan 23 17:07:09 crc kubenswrapper[4606]: I0123 17:07:09.487548 4606 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-5bmr6" Jan 23 17:07:09 crc kubenswrapper[4606]: I0123 17:07:09.669761 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-h97mh" Jan 23 17:07:19 crc kubenswrapper[4606]: I0123 17:07:19.706300 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-dnlq6" Jan 23 17:07:20 crc kubenswrapper[4606]: I0123 17:07:20.002679 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2z4jl" Jan 23 17:07:43 crc kubenswrapper[4606]: I0123 17:07:43.049569 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-njc6l"] Jan 23 17:07:43 crc kubenswrapper[4606]: I0123 17:07:43.051351 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-njc6l" Jan 23 17:07:43 crc kubenswrapper[4606]: I0123 17:07:43.053256 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 23 17:07:43 crc kubenswrapper[4606]: I0123 17:07:43.055364 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 23 17:07:43 crc kubenswrapper[4606]: I0123 17:07:43.055408 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-qs7v4" Jan 23 17:07:43 crc kubenswrapper[4606]: I0123 17:07:43.063685 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 23 17:07:43 crc kubenswrapper[4606]: I0123 17:07:43.069310 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-njc6l"] Jan 23 17:07:43 crc kubenswrapper[4606]: I0123 17:07:43.092129 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93c6db33-b9df-4456-a935-1798d0a9746a-config\") pod \"dnsmasq-dns-675f4bcbfc-njc6l\" (UID: \"93c6db33-b9df-4456-a935-1798d0a9746a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-njc6l" Jan 23 17:07:43 crc kubenswrapper[4606]: I0123 17:07:43.092216 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77tgw\" (UniqueName: \"kubernetes.io/projected/93c6db33-b9df-4456-a935-1798d0a9746a-kube-api-access-77tgw\") pod \"dnsmasq-dns-675f4bcbfc-njc6l\" (UID: \"93c6db33-b9df-4456-a935-1798d0a9746a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-njc6l" Jan 23 17:07:43 crc kubenswrapper[4606]: I0123 17:07:43.103709 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2729j"] Jan 23 17:07:43 crc kubenswrapper[4606]: I0123 17:07:43.104818 4606 util.go:30] "No sandbox for pod can be found. 
Jan 23 17:07:43 crc kubenswrapper[4606]: I0123 17:07:43.107435 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Jan 23 17:07:43 crc kubenswrapper[4606]: I0123 17:07:43.136123 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2729j"]
Jan 23 17:07:43 crc kubenswrapper[4606]: I0123 17:07:43.193710 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77tgw\" (UniqueName: \"kubernetes.io/projected/93c6db33-b9df-4456-a935-1798d0a9746a-kube-api-access-77tgw\") pod \"dnsmasq-dns-675f4bcbfc-njc6l\" (UID: \"93c6db33-b9df-4456-a935-1798d0a9746a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-njc6l"
Jan 23 17:07:43 crc kubenswrapper[4606]: I0123 17:07:43.194163 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93c6db33-b9df-4456-a935-1798d0a9746a-config\") pod \"dnsmasq-dns-675f4bcbfc-njc6l\" (UID: \"93c6db33-b9df-4456-a935-1798d0a9746a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-njc6l"
Jan 23 17:07:43 crc kubenswrapper[4606]: I0123 17:07:43.195645 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93c6db33-b9df-4456-a935-1798d0a9746a-config\") pod \"dnsmasq-dns-675f4bcbfc-njc6l\" (UID: \"93c6db33-b9df-4456-a935-1798d0a9746a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-njc6l"
Jan 23 17:07:43 crc kubenswrapper[4606]: I0123 17:07:43.212689 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77tgw\" (UniqueName: \"kubernetes.io/projected/93c6db33-b9df-4456-a935-1798d0a9746a-kube-api-access-77tgw\") pod \"dnsmasq-dns-675f4bcbfc-njc6l\" (UID: \"93c6db33-b9df-4456-a935-1798d0a9746a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-njc6l"
Jan 23 17:07:43 crc kubenswrapper[4606]: I0123 17:07:43.295110 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x9lf\" (UniqueName: \"kubernetes.io/projected/ba9414d7-a6fb-4615-8fa5-62a080f8d394-kube-api-access-4x9lf\") pod \"dnsmasq-dns-78dd6ddcc-2729j\" (UID: \"ba9414d7-a6fb-4615-8fa5-62a080f8d394\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2729j"
Jan 23 17:07:43 crc kubenswrapper[4606]: I0123 17:07:43.295162 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba9414d7-a6fb-4615-8fa5-62a080f8d394-config\") pod \"dnsmasq-dns-78dd6ddcc-2729j\" (UID: \"ba9414d7-a6fb-4615-8fa5-62a080f8d394\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2729j"
Jan 23 17:07:43 crc kubenswrapper[4606]: I0123 17:07:43.295199 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba9414d7-a6fb-4615-8fa5-62a080f8d394-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-2729j\" (UID: \"ba9414d7-a6fb-4615-8fa5-62a080f8d394\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2729j"
Jan 23 17:07:43 crc kubenswrapper[4606]: I0123 17:07:43.369412 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-njc6l"
Jan 23 17:07:43 crc kubenswrapper[4606]: I0123 17:07:43.397023 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba9414d7-a6fb-4615-8fa5-62a080f8d394-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-2729j\" (UID: \"ba9414d7-a6fb-4615-8fa5-62a080f8d394\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2729j"
Jan 23 17:07:43 crc kubenswrapper[4606]: I0123 17:07:43.397154 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x9lf\" (UniqueName: \"kubernetes.io/projected/ba9414d7-a6fb-4615-8fa5-62a080f8d394-kube-api-access-4x9lf\") pod \"dnsmasq-dns-78dd6ddcc-2729j\" (UID: \"ba9414d7-a6fb-4615-8fa5-62a080f8d394\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2729j"
Jan 23 17:07:43 crc kubenswrapper[4606]: I0123 17:07:43.397188 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba9414d7-a6fb-4615-8fa5-62a080f8d394-config\") pod \"dnsmasq-dns-78dd6ddcc-2729j\" (UID: \"ba9414d7-a6fb-4615-8fa5-62a080f8d394\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2729j"
Jan 23 17:07:43 crc kubenswrapper[4606]: I0123 17:07:43.398152 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba9414d7-a6fb-4615-8fa5-62a080f8d394-config\") pod \"dnsmasq-dns-78dd6ddcc-2729j\" (UID: \"ba9414d7-a6fb-4615-8fa5-62a080f8d394\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2729j"
Jan 23 17:07:43 crc kubenswrapper[4606]: I0123 17:07:43.399021 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba9414d7-a6fb-4615-8fa5-62a080f8d394-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-2729j\" (UID: \"ba9414d7-a6fb-4615-8fa5-62a080f8d394\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2729j"
Jan 23 17:07:43 crc kubenswrapper[4606]: I0123 17:07:43.423029 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x9lf\" (UniqueName: \"kubernetes.io/projected/ba9414d7-a6fb-4615-8fa5-62a080f8d394-kube-api-access-4x9lf\") pod \"dnsmasq-dns-78dd6ddcc-2729j\" (UID: \"ba9414d7-a6fb-4615-8fa5-62a080f8d394\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2729j"
Jan 23 17:07:43 crc kubenswrapper[4606]: I0123 17:07:43.719100 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-2729j"
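Each dnsmasq volume above walks through the same three reconciler phases in order: VerifyControllerAttachedVolume (reconciler_common.go:245), MountVolume started (reconciler_common.go:218), and MountVolume.SetUp succeeded (operation_generator.go:637). A small stdlib-Python sketch for pulling (phase, volume) pairs out of raw lines like these; the function name is illustrative, not part of kubelet, and note that quotes inside the structured message are backslash-escaped in the log:

    import re

    # Ordered to mirror the reconciler's verify -> mount -> done progression.
    PHASES = [
        ("verify", re.compile(r'VerifyControllerAttachedVolume started for volume \\"([^"\\]+)\\"')),
        ("mount",  re.compile(r'MountVolume started for volume \\"([^"\\]+)\\"')),
        ("done",   re.compile(r'MountVolume\.SetUp succeeded for volume \\"([^"\\]+)\\"')),
    ]

    def volume_phase(line):
        """Return (phase, volume_name) for a reconciler line, else None."""
        for phase, pattern in PHASES:
            m = pattern.search(line)
            if m:
                return phase, m.group(1)
        return None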
Jan 23 17:07:43 crc kubenswrapper[4606]: I0123 17:07:43.819109 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-njc6l"]
Jan 23 17:07:43 crc kubenswrapper[4606]: W0123 17:07:43.835616 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93c6db33_b9df_4456_a935_1798d0a9746a.slice/crio-bda187477ba8ef5267389977a1bc4e878814f88c922e208322fa0bc8b26bcd9e WatchSource:0}: Error finding container bda187477ba8ef5267389977a1bc4e878814f88c922e208322fa0bc8b26bcd9e: Status 404 returned error can't find the container with id bda187477ba8ef5267389977a1bc4e878814f88c922e208322fa0bc8b26bcd9e
Jan 23 17:07:44 crc kubenswrapper[4606]: I0123 17:07:44.145319 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2729j"]
Jan 23 17:07:44 crc kubenswrapper[4606]: W0123 17:07:44.146689 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba9414d7_a6fb_4615_8fa5_62a080f8d394.slice/crio-40fd3be30dfb5d8b255faaeff3937ca84d447ee8aac144738484d1e4a72a9784 WatchSource:0}: Error finding container 40fd3be30dfb5d8b255faaeff3937ca84d447ee8aac144738484d1e4a72a9784: Status 404 returned error can't find the container with id 40fd3be30dfb5d8b255faaeff3937ca84d447ee8aac144738484d1e4a72a9784
Jan 23 17:07:44 crc kubenswrapper[4606]: I0123 17:07:44.849370 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-njc6l" event={"ID":"93c6db33-b9df-4456-a935-1798d0a9746a","Type":"ContainerStarted","Data":"bda187477ba8ef5267389977a1bc4e878814f88c922e208322fa0bc8b26bcd9e"}
Jan 23 17:07:44 crc kubenswrapper[4606]: I0123 17:07:44.850890 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-2729j" event={"ID":"ba9414d7-a6fb-4615-8fa5-62a080f8d394","Type":"ContainerStarted","Data":"40fd3be30dfb5d8b255faaeff3937ca84d447ee8aac144738484d1e4a72a9784"}
Jan 23 17:07:46 crc kubenswrapper[4606]: I0123 17:07:46.052147 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-njc6l"]
Jan 23 17:07:46 crc kubenswrapper[4606]: I0123 17:07:46.086028 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2cgtz"]
Jan 23 17:07:46 crc kubenswrapper[4606]: I0123 17:07:46.087504 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-2cgtz"
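The two W-level manager.go:1169 warnings above are cAdvisor racing container creation: it notices the new crio-<id> cgroup before CRI-O has finished registering the container, queries for it, and gets a 404. The matching PLEG ContainerStarted events for the same ids (bda187..., 40fd3...) arrive within about a second, so these warnings are typically transient noise rather than failures. A correlation sketch (helper name is hypothetical, stdlib only):

    import re

    WATCH_ERR  = re.compile(r"crio-([0-9a-f]{64})\b.*Status 404")
    PLEG_START = re.compile(r'"Type":"ContainerStarted","Data":"([0-9a-f]{64})"')

    def transient_watch_errors(lines):
        """Container ids whose 404 watch warning was followed by ContainerStarted."""
        pending, resolved = set(), []
        for line in lines:
            if (m := WATCH_ERR.search(line)):
                pending.add(m.group(1))
            elif (m := PLEG_START.search(line)) and m.group(1) in pending:
                resolved.append(m.group(1))
        return resolved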
Jan 23 17:07:46 crc kubenswrapper[4606]: I0123 17:07:46.125990 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2cgtz"]
Jan 23 17:07:46 crc kubenswrapper[4606]: I0123 17:07:46.265834 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4139c6c4-3725-4dbc-931c-8d8ccab53974-dns-svc\") pod \"dnsmasq-dns-666b6646f7-2cgtz\" (UID: \"4139c6c4-3725-4dbc-931c-8d8ccab53974\") " pod="openstack/dnsmasq-dns-666b6646f7-2cgtz"
Jan 23 17:07:46 crc kubenswrapper[4606]: I0123 17:07:46.265872 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmtz9\" (UniqueName: \"kubernetes.io/projected/4139c6c4-3725-4dbc-931c-8d8ccab53974-kube-api-access-dmtz9\") pod \"dnsmasq-dns-666b6646f7-2cgtz\" (UID: \"4139c6c4-3725-4dbc-931c-8d8ccab53974\") " pod="openstack/dnsmasq-dns-666b6646f7-2cgtz"
Jan 23 17:07:46 crc kubenswrapper[4606]: I0123 17:07:46.265908 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4139c6c4-3725-4dbc-931c-8d8ccab53974-config\") pod \"dnsmasq-dns-666b6646f7-2cgtz\" (UID: \"4139c6c4-3725-4dbc-931c-8d8ccab53974\") " pod="openstack/dnsmasq-dns-666b6646f7-2cgtz"
Jan 23 17:07:46 crc kubenswrapper[4606]: I0123 17:07:46.366821 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4139c6c4-3725-4dbc-931c-8d8ccab53974-dns-svc\") pod \"dnsmasq-dns-666b6646f7-2cgtz\" (UID: \"4139c6c4-3725-4dbc-931c-8d8ccab53974\") " pod="openstack/dnsmasq-dns-666b6646f7-2cgtz"
Jan 23 17:07:46 crc kubenswrapper[4606]: I0123 17:07:46.366868 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmtz9\" (UniqueName: \"kubernetes.io/projected/4139c6c4-3725-4dbc-931c-8d8ccab53974-kube-api-access-dmtz9\") pod \"dnsmasq-dns-666b6646f7-2cgtz\" (UID: \"4139c6c4-3725-4dbc-931c-8d8ccab53974\") " pod="openstack/dnsmasq-dns-666b6646f7-2cgtz"
Jan 23 17:07:46 crc kubenswrapper[4606]: I0123 17:07:46.366913 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4139c6c4-3725-4dbc-931c-8d8ccab53974-config\") pod \"dnsmasq-dns-666b6646f7-2cgtz\" (UID: \"4139c6c4-3725-4dbc-931c-8d8ccab53974\") " pod="openstack/dnsmasq-dns-666b6646f7-2cgtz"
Jan 23 17:07:46 crc kubenswrapper[4606]: I0123 17:07:46.367834 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4139c6c4-3725-4dbc-931c-8d8ccab53974-config\") pod \"dnsmasq-dns-666b6646f7-2cgtz\" (UID: \"4139c6c4-3725-4dbc-931c-8d8ccab53974\") " pod="openstack/dnsmasq-dns-666b6646f7-2cgtz"
Jan 23 17:07:46 crc kubenswrapper[4606]: I0123 17:07:46.368359 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4139c6c4-3725-4dbc-931c-8d8ccab53974-dns-svc\") pod \"dnsmasq-dns-666b6646f7-2cgtz\" (UID: \"4139c6c4-3725-4dbc-931c-8d8ccab53974\") " pod="openstack/dnsmasq-dns-666b6646f7-2cgtz"
Jan 23 17:07:46 crc kubenswrapper[4606]: I0123 17:07:46.423691 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmtz9\" (UniqueName: \"kubernetes.io/projected/4139c6c4-3725-4dbc-931c-8d8ccab53974-kube-api-access-dmtz9\") pod \"dnsmasq-dns-666b6646f7-2cgtz\" (UID: \"4139c6c4-3725-4dbc-931c-8d8ccab53974\") " pod="openstack/dnsmasq-dns-666b6646f7-2cgtz"
\"kubernetes.io/projected/4139c6c4-3725-4dbc-931c-8d8ccab53974-kube-api-access-dmtz9\") pod \"dnsmasq-dns-666b6646f7-2cgtz\" (UID: \"4139c6c4-3725-4dbc-931c-8d8ccab53974\") " pod="openstack/dnsmasq-dns-666b6646f7-2cgtz" Jan 23 17:07:46 crc kubenswrapper[4606]: I0123 17:07:46.477680 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2729j"] Jan 23 17:07:46 crc kubenswrapper[4606]: I0123 17:07:46.504696 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-kfwtp"] Jan 23 17:07:46 crc kubenswrapper[4606]: I0123 17:07:46.506130 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-kfwtp" Jan 23 17:07:46 crc kubenswrapper[4606]: I0123 17:07:46.532457 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-kfwtp"] Jan 23 17:07:46 crc kubenswrapper[4606]: I0123 17:07:46.574275 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa0b5879-6896-403b-8565-7399b44e9ee2-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-kfwtp\" (UID: \"fa0b5879-6896-403b-8565-7399b44e9ee2\") " pod="openstack/dnsmasq-dns-57d769cc4f-kfwtp" Jan 23 17:07:46 crc kubenswrapper[4606]: I0123 17:07:46.574458 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa0b5879-6896-403b-8565-7399b44e9ee2-config\") pod \"dnsmasq-dns-57d769cc4f-kfwtp\" (UID: \"fa0b5879-6896-403b-8565-7399b44e9ee2\") " pod="openstack/dnsmasq-dns-57d769cc4f-kfwtp" Jan 23 17:07:46 crc kubenswrapper[4606]: I0123 17:07:46.574958 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8clx\" (UniqueName: \"kubernetes.io/projected/fa0b5879-6896-403b-8565-7399b44e9ee2-kube-api-access-f8clx\") pod \"dnsmasq-dns-57d769cc4f-kfwtp\" (UID: \"fa0b5879-6896-403b-8565-7399b44e9ee2\") " pod="openstack/dnsmasq-dns-57d769cc4f-kfwtp" Jan 23 17:07:46 crc kubenswrapper[4606]: I0123 17:07:46.676055 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8clx\" (UniqueName: \"kubernetes.io/projected/fa0b5879-6896-403b-8565-7399b44e9ee2-kube-api-access-f8clx\") pod \"dnsmasq-dns-57d769cc4f-kfwtp\" (UID: \"fa0b5879-6896-403b-8565-7399b44e9ee2\") " pod="openstack/dnsmasq-dns-57d769cc4f-kfwtp" Jan 23 17:07:46 crc kubenswrapper[4606]: I0123 17:07:46.676139 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa0b5879-6896-403b-8565-7399b44e9ee2-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-kfwtp\" (UID: \"fa0b5879-6896-403b-8565-7399b44e9ee2\") " pod="openstack/dnsmasq-dns-57d769cc4f-kfwtp" Jan 23 17:07:46 crc kubenswrapper[4606]: I0123 17:07:46.676206 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa0b5879-6896-403b-8565-7399b44e9ee2-config\") pod \"dnsmasq-dns-57d769cc4f-kfwtp\" (UID: \"fa0b5879-6896-403b-8565-7399b44e9ee2\") " pod="openstack/dnsmasq-dns-57d769cc4f-kfwtp" Jan 23 17:07:46 crc kubenswrapper[4606]: I0123 17:07:46.677297 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa0b5879-6896-403b-8565-7399b44e9ee2-config\") pod \"dnsmasq-dns-57d769cc4f-kfwtp\" (UID: 
\"fa0b5879-6896-403b-8565-7399b44e9ee2\") " pod="openstack/dnsmasq-dns-57d769cc4f-kfwtp" Jan 23 17:07:46 crc kubenswrapper[4606]: I0123 17:07:46.678242 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa0b5879-6896-403b-8565-7399b44e9ee2-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-kfwtp\" (UID: \"fa0b5879-6896-403b-8565-7399b44e9ee2\") " pod="openstack/dnsmasq-dns-57d769cc4f-kfwtp" Jan 23 17:07:46 crc kubenswrapper[4606]: I0123 17:07:46.697358 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8clx\" (UniqueName: \"kubernetes.io/projected/fa0b5879-6896-403b-8565-7399b44e9ee2-kube-api-access-f8clx\") pod \"dnsmasq-dns-57d769cc4f-kfwtp\" (UID: \"fa0b5879-6896-403b-8565-7399b44e9ee2\") " pod="openstack/dnsmasq-dns-57d769cc4f-kfwtp" Jan 23 17:07:46 crc kubenswrapper[4606]: I0123 17:07:46.709916 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-2cgtz" Jan 23 17:07:46 crc kubenswrapper[4606]: I0123 17:07:46.844914 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-kfwtp" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.379187 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2cgtz"] Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.412656 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.414252 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.424542 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.424618 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-bbn8s" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.424817 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.424914 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.425010 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.425128 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.425651 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 23 17:07:47 crc kubenswrapper[4606]: W0123 17:07:47.426398 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4139c6c4_3725_4dbc_931c_8d8ccab53974.slice/crio-6a8f2ac27b602918fd0c938272eb8cc8979e553f773ef5ca4c75b27db1901a7a WatchSource:0}: Error finding container 6a8f2ac27b602918fd0c938272eb8cc8979e553f773ef5ca4c75b27db1901a7a: Status 404 returned error can't find the container with id 6a8f2ac27b602918fd0c938272eb8cc8979e553f773ef5ca4c75b27db1901a7a Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.427809 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/rabbitmq-server-0"] Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.498485 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-kfwtp"] Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.599299 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-config-data\") pod \"rabbitmq-server-0\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " pod="openstack/rabbitmq-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.601706 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " pod="openstack/rabbitmq-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.601943 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzhb2\" (UniqueName: \"kubernetes.io/projected/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-kube-api-access-wzhb2\") pod \"rabbitmq-server-0\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " pod="openstack/rabbitmq-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.602110 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " pod="openstack/rabbitmq-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.602202 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " pod="openstack/rabbitmq-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.602294 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " pod="openstack/rabbitmq-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.602374 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " pod="openstack/rabbitmq-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.602458 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " pod="openstack/rabbitmq-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.602534 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-rabbitmq-plugins\") pod 
\"rabbitmq-server-0\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " pod="openstack/rabbitmq-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.602643 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " pod="openstack/rabbitmq-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.602715 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " pod="openstack/rabbitmq-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.622993 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.627322 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.634779 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.635025 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.635937 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-qpkp2" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.636174 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.636339 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.636363 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.636495 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.641314 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.704641 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " pod="openstack/rabbitmq-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.705032 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " pod="openstack/rabbitmq-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.705064 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " pod="openstack/rabbitmq-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.705205 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " pod="openstack/rabbitmq-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.705231 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " pod="openstack/rabbitmq-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.705261 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " pod="openstack/rabbitmq-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.705278 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " pod="openstack/rabbitmq-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.705301 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-config-data\") pod \"rabbitmq-server-0\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " pod="openstack/rabbitmq-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.705332 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " pod="openstack/rabbitmq-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.705349 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzhb2\" (UniqueName: \"kubernetes.io/projected/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-kube-api-access-wzhb2\") pod \"rabbitmq-server-0\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " pod="openstack/rabbitmq-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.705378 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " pod="openstack/rabbitmq-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.706747 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " pod="openstack/rabbitmq-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.707031 4606 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " pod="openstack/rabbitmq-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.709453 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-config-data\") pod \"rabbitmq-server-0\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " pod="openstack/rabbitmq-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.709889 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " pod="openstack/rabbitmq-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.710379 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " pod="openstack/rabbitmq-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.710635 4606 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.727258 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " pod="openstack/rabbitmq-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.727561 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzhb2\" (UniqueName: \"kubernetes.io/projected/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-kube-api-access-wzhb2\") pod \"rabbitmq-server-0\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " pod="openstack/rabbitmq-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.736382 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " pod="openstack/rabbitmq-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.736971 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " pod="openstack/rabbitmq-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.738965 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " 
pod="openstack/rabbitmq-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.759885 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " pod="openstack/rabbitmq-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.806838 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cf778ec6-6b7b-492e-948c-0314dc6b26ba-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.806890 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.806928 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cf778ec6-6b7b-492e-948c-0314dc6b26ba-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.806955 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf778ec6-6b7b-492e-948c-0314dc6b26ba-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.806991 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8hdn\" (UniqueName: \"kubernetes.io/projected/cf778ec6-6b7b-492e-948c-0314dc6b26ba-kube-api-access-n8hdn\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.807014 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cf778ec6-6b7b-492e-948c-0314dc6b26ba-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.807064 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cf778ec6-6b7b-492e-948c-0314dc6b26ba-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.807084 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cf778ec6-6b7b-492e-948c-0314dc6b26ba-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: 
Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.807151 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cf778ec6-6b7b-492e-948c-0314dc6b26ba-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.807174 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cf778ec6-6b7b-492e-948c-0314dc6b26ba-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.815023 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.908121 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cf778ec6-6b7b-492e-948c-0314dc6b26ba-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.908210 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cf778ec6-6b7b-492e-948c-0314dc6b26ba-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.908267 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cf778ec6-6b7b-492e-948c-0314dc6b26ba-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.908297 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cf778ec6-6b7b-492e-948c-0314dc6b26ba-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.908319 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cf778ec6-6b7b-492e-948c-0314dc6b26ba-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.908443 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cf778ec6-6b7b-492e-948c-0314dc6b26ba-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " pod="openstack/rabbitmq-cell1-server-0"
(UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.908472 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.908509 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cf778ec6-6b7b-492e-948c-0314dc6b26ba-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.908532 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf778ec6-6b7b-492e-948c-0314dc6b26ba-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.908565 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8hdn\" (UniqueName: \"kubernetes.io/projected/cf778ec6-6b7b-492e-948c-0314dc6b26ba-kube-api-access-n8hdn\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.909375 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cf778ec6-6b7b-492e-948c-0314dc6b26ba-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.910409 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cf778ec6-6b7b-492e-948c-0314dc6b26ba-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.910464 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cf778ec6-6b7b-492e-948c-0314dc6b26ba-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.910697 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cf778ec6-6b7b-492e-948c-0314dc6b26ba-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.910785 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cf778ec6-6b7b-492e-948c-0314dc6b26ba-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.911096 4606 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf778ec6-6b7b-492e-948c-0314dc6b26ba-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.912136 4606 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.914438 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cf778ec6-6b7b-492e-948c-0314dc6b26ba-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.915893 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cf778ec6-6b7b-492e-948c-0314dc6b26ba-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.917040 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cf778ec6-6b7b-492e-948c-0314dc6b26ba-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.917418 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cf778ec6-6b7b-492e-948c-0314dc6b26ba-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.924108 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-kfwtp" event={"ID":"fa0b5879-6896-403b-8565-7399b44e9ee2","Type":"ContainerStarted","Data":"9a246af0f77c2f090f68354b7d9dd96866858e7f6cbd092e0b46ef950a9f63e9"} Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.926446 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2cgtz" event={"ID":"4139c6c4-3725-4dbc-931c-8d8ccab53974","Type":"ContainerStarted","Data":"6a8f2ac27b602918fd0c938272eb8cc8979e553f773ef5ca4c75b27db1901a7a"} Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.929299 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8hdn\" (UniqueName: \"kubernetes.io/projected/cf778ec6-6b7b-492e-948c-0314dc6b26ba-kube-api-access-n8hdn\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 17:07:47.935304 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:07:47 crc kubenswrapper[4606]: I0123 
Jan 23 17:07:48 crc kubenswrapper[4606]: I0123 17:07:48.317846 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 23 17:07:48 crc kubenswrapper[4606]: I0123 17:07:48.449732 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 23 17:07:48 crc kubenswrapper[4606]: I0123 17:07:48.585947 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Jan 23 17:07:48 crc kubenswrapper[4606]: I0123 17:07:48.587495 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 23 17:07:48 crc kubenswrapper[4606]: I0123 17:07:48.593921 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Jan 23 17:07:48 crc kubenswrapper[4606]: I0123 17:07:48.593992 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Jan 23 17:07:48 crc kubenswrapper[4606]: I0123 17:07:48.594978 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Jan 23 17:07:48 crc kubenswrapper[4606]: I0123 17:07:48.600179 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-tmwt7"
Jan 23 17:07:48 crc kubenswrapper[4606]: I0123 17:07:48.627637 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Jan 23 17:07:48 crc kubenswrapper[4606]: I0123 17:07:48.658002 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Jan 23 17:07:48 crc kubenswrapper[4606]: I0123 17:07:48.729165 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/684acded-8efb-47ea-8bb4-0212874a6826-operator-scripts\") pod \"openstack-galera-0\" (UID: \"684acded-8efb-47ea-8bb4-0212874a6826\") " pod="openstack/openstack-galera-0"
Jan 23 17:07:48 crc kubenswrapper[4606]: I0123 17:07:48.729219 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/684acded-8efb-47ea-8bb4-0212874a6826-config-data-default\") pod \"openstack-galera-0\" (UID: \"684acded-8efb-47ea-8bb4-0212874a6826\") " pod="openstack/openstack-galera-0"
Jan 23 17:07:48 crc kubenswrapper[4606]: I0123 17:07:48.729243 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/684acded-8efb-47ea-8bb4-0212874a6826-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"684acded-8efb-47ea-8bb4-0212874a6826\") " pod="openstack/openstack-galera-0"
Jan 23 17:07:48 crc kubenswrapper[4606]: I0123 17:07:48.729313 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"684acded-8efb-47ea-8bb4-0212874a6826\") " pod="openstack/openstack-galera-0"
Jan 23 17:07:48 crc kubenswrapper[4606]: I0123 17:07:48.729332 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqsxs\" (UniqueName: \"kubernetes.io/projected/684acded-8efb-47ea-8bb4-0212874a6826-kube-api-access-mqsxs\") pod \"openstack-galera-0\" (UID: \"684acded-8efb-47ea-8bb4-0212874a6826\") " pod="openstack/openstack-galera-0"
\"kubernetes.io/projected/684acded-8efb-47ea-8bb4-0212874a6826-kube-api-access-mqsxs\") pod \"openstack-galera-0\" (UID: \"684acded-8efb-47ea-8bb4-0212874a6826\") " pod="openstack/openstack-galera-0" Jan 23 17:07:48 crc kubenswrapper[4606]: I0123 17:07:48.729354 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/684acded-8efb-47ea-8bb4-0212874a6826-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"684acded-8efb-47ea-8bb4-0212874a6826\") " pod="openstack/openstack-galera-0" Jan 23 17:07:48 crc kubenswrapper[4606]: I0123 17:07:48.729398 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/684acded-8efb-47ea-8bb4-0212874a6826-kolla-config\") pod \"openstack-galera-0\" (UID: \"684acded-8efb-47ea-8bb4-0212874a6826\") " pod="openstack/openstack-galera-0" Jan 23 17:07:48 crc kubenswrapper[4606]: I0123 17:07:48.729420 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/684acded-8efb-47ea-8bb4-0212874a6826-config-data-generated\") pod \"openstack-galera-0\" (UID: \"684acded-8efb-47ea-8bb4-0212874a6826\") " pod="openstack/openstack-galera-0" Jan 23 17:07:48 crc kubenswrapper[4606]: I0123 17:07:48.830394 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/684acded-8efb-47ea-8bb4-0212874a6826-operator-scripts\") pod \"openstack-galera-0\" (UID: \"684acded-8efb-47ea-8bb4-0212874a6826\") " pod="openstack/openstack-galera-0" Jan 23 17:07:48 crc kubenswrapper[4606]: I0123 17:07:48.830812 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/684acded-8efb-47ea-8bb4-0212874a6826-config-data-default\") pod \"openstack-galera-0\" (UID: \"684acded-8efb-47ea-8bb4-0212874a6826\") " pod="openstack/openstack-galera-0" Jan 23 17:07:48 crc kubenswrapper[4606]: I0123 17:07:48.830840 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/684acded-8efb-47ea-8bb4-0212874a6826-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"684acded-8efb-47ea-8bb4-0212874a6826\") " pod="openstack/openstack-galera-0" Jan 23 17:07:48 crc kubenswrapper[4606]: I0123 17:07:48.830948 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"684acded-8efb-47ea-8bb4-0212874a6826\") " pod="openstack/openstack-galera-0" Jan 23 17:07:48 crc kubenswrapper[4606]: I0123 17:07:48.830972 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqsxs\" (UniqueName: \"kubernetes.io/projected/684acded-8efb-47ea-8bb4-0212874a6826-kube-api-access-mqsxs\") pod \"openstack-galera-0\" (UID: \"684acded-8efb-47ea-8bb4-0212874a6826\") " pod="openstack/openstack-galera-0" Jan 23 17:07:48 crc kubenswrapper[4606]: I0123 17:07:48.830997 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/684acded-8efb-47ea-8bb4-0212874a6826-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: 
\"684acded-8efb-47ea-8bb4-0212874a6826\") " pod="openstack/openstack-galera-0" Jan 23 17:07:48 crc kubenswrapper[4606]: I0123 17:07:48.831053 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/684acded-8efb-47ea-8bb4-0212874a6826-kolla-config\") pod \"openstack-galera-0\" (UID: \"684acded-8efb-47ea-8bb4-0212874a6826\") " pod="openstack/openstack-galera-0" Jan 23 17:07:48 crc kubenswrapper[4606]: I0123 17:07:48.831074 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/684acded-8efb-47ea-8bb4-0212874a6826-config-data-generated\") pod \"openstack-galera-0\" (UID: \"684acded-8efb-47ea-8bb4-0212874a6826\") " pod="openstack/openstack-galera-0" Jan 23 17:07:48 crc kubenswrapper[4606]: I0123 17:07:48.831533 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/684acded-8efb-47ea-8bb4-0212874a6826-config-data-generated\") pod \"openstack-galera-0\" (UID: \"684acded-8efb-47ea-8bb4-0212874a6826\") " pod="openstack/openstack-galera-0" Jan 23 17:07:48 crc kubenswrapper[4606]: I0123 17:07:48.832359 4606 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"684acded-8efb-47ea-8bb4-0212874a6826\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-galera-0" Jan 23 17:07:48 crc kubenswrapper[4606]: I0123 17:07:48.835775 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/684acded-8efb-47ea-8bb4-0212874a6826-kolla-config\") pod \"openstack-galera-0\" (UID: \"684acded-8efb-47ea-8bb4-0212874a6826\") " pod="openstack/openstack-galera-0" Jan 23 17:07:48 crc kubenswrapper[4606]: I0123 17:07:48.837607 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/684acded-8efb-47ea-8bb4-0212874a6826-config-data-default\") pod \"openstack-galera-0\" (UID: \"684acded-8efb-47ea-8bb4-0212874a6826\") " pod="openstack/openstack-galera-0" Jan 23 17:07:48 crc kubenswrapper[4606]: I0123 17:07:48.839986 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/684acded-8efb-47ea-8bb4-0212874a6826-operator-scripts\") pod \"openstack-galera-0\" (UID: \"684acded-8efb-47ea-8bb4-0212874a6826\") " pod="openstack/openstack-galera-0" Jan 23 17:07:48 crc kubenswrapper[4606]: I0123 17:07:48.842361 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/684acded-8efb-47ea-8bb4-0212874a6826-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"684acded-8efb-47ea-8bb4-0212874a6826\") " pod="openstack/openstack-galera-0" Jan 23 17:07:48 crc kubenswrapper[4606]: I0123 17:07:48.845879 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/684acded-8efb-47ea-8bb4-0212874a6826-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"684acded-8efb-47ea-8bb4-0212874a6826\") " pod="openstack/openstack-galera-0" Jan 23 17:07:48 crc kubenswrapper[4606]: I0123 17:07:48.862773 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mqsxs\" (UniqueName: \"kubernetes.io/projected/684acded-8efb-47ea-8bb4-0212874a6826-kube-api-access-mqsxs\") pod \"openstack-galera-0\" (UID: \"684acded-8efb-47ea-8bb4-0212874a6826\") " pod="openstack/openstack-galera-0" Jan 23 17:07:48 crc kubenswrapper[4606]: I0123 17:07:48.868157 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"684acded-8efb-47ea-8bb4-0212874a6826\") " pod="openstack/openstack-galera-0" Jan 23 17:07:48 crc kubenswrapper[4606]: I0123 17:07:48.918942 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 23 17:07:49 crc kubenswrapper[4606]: I0123 17:07:49.924845 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 23 17:07:49 crc kubenswrapper[4606]: I0123 17:07:49.926666 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 23 17:07:49 crc kubenswrapper[4606]: I0123 17:07:49.930167 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 23 17:07:49 crc kubenswrapper[4606]: I0123 17:07:49.930235 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 23 17:07:49 crc kubenswrapper[4606]: I0123 17:07:49.930167 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-vwp4v" Jan 23 17:07:49 crc kubenswrapper[4606]: I0123 17:07:49.930526 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 23 17:07:49 crc kubenswrapper[4606]: I0123 17:07:49.940440 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.051072 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsbff\" (UniqueName: \"kubernetes.io/projected/b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030-kube-api-access-fsbff\") pod \"openstack-cell1-galera-0\" (UID: \"b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030\") " pod="openstack/openstack-cell1-galera-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.051132 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030\") " pod="openstack/openstack-cell1-galera-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.051196 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030\") " pod="openstack/openstack-cell1-galera-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.051253 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030\") " 
pod="openstack/openstack-cell1-galera-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.051280 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030\") " pod="openstack/openstack-cell1-galera-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.051438 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030\") " pod="openstack/openstack-cell1-galera-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.051533 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030\") " pod="openstack/openstack-cell1-galera-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.051568 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030\") " pod="openstack/openstack-cell1-galera-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.152762 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030\") " pod="openstack/openstack-cell1-galera-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.152811 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030\") " pod="openstack/openstack-cell1-galera-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.152870 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030\") " pod="openstack/openstack-cell1-galera-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.152945 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030\") " pod="openstack/openstack-cell1-galera-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.153077 4606 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030\") device mount path \"/mnt/openstack/pv09\"" 
pod="openstack/openstack-cell1-galera-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.154153 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030\") " pod="openstack/openstack-cell1-galera-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.154327 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030\") " pod="openstack/openstack-cell1-galera-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.154430 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030\") " pod="openstack/openstack-cell1-galera-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.154500 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030\") " pod="openstack/openstack-cell1-galera-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.154696 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsbff\" (UniqueName: \"kubernetes.io/projected/b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030-kube-api-access-fsbff\") pod \"openstack-cell1-galera-0\" (UID: \"b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030\") " pod="openstack/openstack-cell1-galera-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.154732 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030\") " pod="openstack/openstack-cell1-galera-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.154761 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030\") " pod="openstack/openstack-cell1-galera-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.155298 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030\") " pod="openstack/openstack-cell1-galera-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.163431 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030\") " pod="openstack/openstack-cell1-galera-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.175191 
4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030\") " pod="openstack/openstack-cell1-galera-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.185579 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030\") " pod="openstack/openstack-cell1-galera-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.188642 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsbff\" (UniqueName: \"kubernetes.io/projected/b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030-kube-api-access-fsbff\") pod \"openstack-cell1-galera-0\" (UID: \"b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030\") " pod="openstack/openstack-cell1-galera-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.231295 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.232656 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.236242 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.236781 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-kqgz5" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.236912 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.258035 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.299366 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.357542 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5lp7\" (UniqueName: \"kubernetes.io/projected/c5ffebdb-76b6-40da-b0e0-b48d596fdea8-kube-api-access-d5lp7\") pod \"memcached-0\" (UID: \"c5ffebdb-76b6-40da-b0e0-b48d596fdea8\") " pod="openstack/memcached-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.357882 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5ffebdb-76b6-40da-b0e0-b48d596fdea8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c5ffebdb-76b6-40da-b0e0-b48d596fdea8\") " pod="openstack/memcached-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.357981 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c5ffebdb-76b6-40da-b0e0-b48d596fdea8-config-data\") pod \"memcached-0\" (UID: \"c5ffebdb-76b6-40da-b0e0-b48d596fdea8\") " pod="openstack/memcached-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.358104 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c5ffebdb-76b6-40da-b0e0-b48d596fdea8-kolla-config\") pod \"memcached-0\" (UID: \"c5ffebdb-76b6-40da-b0e0-b48d596fdea8\") " pod="openstack/memcached-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.358203 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5ffebdb-76b6-40da-b0e0-b48d596fdea8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c5ffebdb-76b6-40da-b0e0-b48d596fdea8\") " pod="openstack/memcached-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.460030 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c5ffebdb-76b6-40da-b0e0-b48d596fdea8-kolla-config\") pod \"memcached-0\" (UID: \"c5ffebdb-76b6-40da-b0e0-b48d596fdea8\") " pod="openstack/memcached-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.460099 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5ffebdb-76b6-40da-b0e0-b48d596fdea8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c5ffebdb-76b6-40da-b0e0-b48d596fdea8\") " pod="openstack/memcached-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.460187 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5ffebdb-76b6-40da-b0e0-b48d596fdea8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c5ffebdb-76b6-40da-b0e0-b48d596fdea8\") " pod="openstack/memcached-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.460208 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5lp7\" (UniqueName: \"kubernetes.io/projected/c5ffebdb-76b6-40da-b0e0-b48d596fdea8-kube-api-access-d5lp7\") pod \"memcached-0\" (UID: \"c5ffebdb-76b6-40da-b0e0-b48d596fdea8\") " pod="openstack/memcached-0" Jan 23 
17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.460233 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c5ffebdb-76b6-40da-b0e0-b48d596fdea8-config-data\") pod \"memcached-0\" (UID: \"c5ffebdb-76b6-40da-b0e0-b48d596fdea8\") " pod="openstack/memcached-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.461189 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c5ffebdb-76b6-40da-b0e0-b48d596fdea8-kolla-config\") pod \"memcached-0\" (UID: \"c5ffebdb-76b6-40da-b0e0-b48d596fdea8\") " pod="openstack/memcached-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.462388 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c5ffebdb-76b6-40da-b0e0-b48d596fdea8-config-data\") pod \"memcached-0\" (UID: \"c5ffebdb-76b6-40da-b0e0-b48d596fdea8\") " pod="openstack/memcached-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.464253 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5ffebdb-76b6-40da-b0e0-b48d596fdea8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c5ffebdb-76b6-40da-b0e0-b48d596fdea8\") " pod="openstack/memcached-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.483106 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5ffebdb-76b6-40da-b0e0-b48d596fdea8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c5ffebdb-76b6-40da-b0e0-b48d596fdea8\") " pod="openstack/memcached-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.496329 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5lp7\" (UniqueName: \"kubernetes.io/projected/c5ffebdb-76b6-40da-b0e0-b48d596fdea8-kube-api-access-d5lp7\") pod \"memcached-0\" (UID: \"c5ffebdb-76b6-40da-b0e0-b48d596fdea8\") " pod="openstack/memcached-0" Jan 23 17:07:50 crc kubenswrapper[4606]: I0123 17:07:50.563931 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 23 17:07:52 crc kubenswrapper[4606]: I0123 17:07:52.067352 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 17:07:52 crc kubenswrapper[4606]: I0123 17:07:52.072875 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 17:07:52 crc kubenswrapper[4606]: I0123 17:07:52.075459 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-gslrz" Jan 23 17:07:52 crc kubenswrapper[4606]: I0123 17:07:52.083223 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 17:07:52 crc kubenswrapper[4606]: I0123 17:07:52.189794 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9njtl\" (UniqueName: \"kubernetes.io/projected/49d23d39-bb62-4d74-9053-c98f530d7208-kube-api-access-9njtl\") pod \"kube-state-metrics-0\" (UID: \"49d23d39-bb62-4d74-9053-c98f530d7208\") " pod="openstack/kube-state-metrics-0" Jan 23 17:07:52 crc kubenswrapper[4606]: I0123 17:07:52.291302 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9njtl\" (UniqueName: \"kubernetes.io/projected/49d23d39-bb62-4d74-9053-c98f530d7208-kube-api-access-9njtl\") pod \"kube-state-metrics-0\" (UID: \"49d23d39-bb62-4d74-9053-c98f530d7208\") " pod="openstack/kube-state-metrics-0" Jan 23 17:07:52 crc kubenswrapper[4606]: I0123 17:07:52.319901 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9njtl\" (UniqueName: \"kubernetes.io/projected/49d23d39-bb62-4d74-9053-c98f530d7208-kube-api-access-9njtl\") pod \"kube-state-metrics-0\" (UID: \"49d23d39-bb62-4d74-9053-c98f530d7208\") " pod="openstack/kube-state-metrics-0" Jan 23 17:07:52 crc kubenswrapper[4606]: I0123 17:07:52.411689 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 17:07:54 crc kubenswrapper[4606]: W0123 17:07:54.226873 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf778ec6_6b7b_492e_948c_0314dc6b26ba.slice/crio-307f6ff974e9b8eb96a291c11302ea1a8cec3a6338c7fbe14ffae2727dc6e9a9 WatchSource:0}: Error finding container 307f6ff974e9b8eb96a291c11302ea1a8cec3a6338c7fbe14ffae2727dc6e9a9: Status 404 returned error can't find the container with id 307f6ff974e9b8eb96a291c11302ea1a8cec3a6338c7fbe14ffae2727dc6e9a9 Jan 23 17:07:54 crc kubenswrapper[4606]: I0123 17:07:54.981790 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cf778ec6-6b7b-492e-948c-0314dc6b26ba","Type":"ContainerStarted","Data":"307f6ff974e9b8eb96a291c11302ea1a8cec3a6338c7fbe14ffae2727dc6e9a9"} Jan 23 17:07:54 crc kubenswrapper[4606]: I0123 17:07:54.983453 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8","Type":"ContainerStarted","Data":"81f46f171c07e571c99c09658d060c6dfd773a605fceac16e0952b0b72680e24"} Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.272033 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-5sjkh"] Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.273464 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-5sjkh" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.283096 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.283819 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.284072 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-5hx8t" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.289887 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5sjkh"] Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.330370 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-tmz28"] Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.331989 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-tmz28" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.341146 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-tmz28"] Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.358741 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7063b991-5662-4335-9e9e-73f057367708-var-run\") pod \"ovn-controller-5sjkh\" (UID: \"7063b991-5662-4335-9e9e-73f057367708\") " pod="openstack/ovn-controller-5sjkh" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.358825 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7063b991-5662-4335-9e9e-73f057367708-var-run-ovn\") pod \"ovn-controller-5sjkh\" (UID: \"7063b991-5662-4335-9e9e-73f057367708\") " pod="openstack/ovn-controller-5sjkh" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.358848 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/7063b991-5662-4335-9e9e-73f057367708-ovn-controller-tls-certs\") pod \"ovn-controller-5sjkh\" (UID: \"7063b991-5662-4335-9e9e-73f057367708\") " pod="openstack/ovn-controller-5sjkh" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.358882 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7063b991-5662-4335-9e9e-73f057367708-scripts\") pod \"ovn-controller-5sjkh\" (UID: \"7063b991-5662-4335-9e9e-73f057367708\") " pod="openstack/ovn-controller-5sjkh" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.358910 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh269\" (UniqueName: \"kubernetes.io/projected/7063b991-5662-4335-9e9e-73f057367708-kube-api-access-jh269\") pod \"ovn-controller-5sjkh\" (UID: \"7063b991-5662-4335-9e9e-73f057367708\") " pod="openstack/ovn-controller-5sjkh" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.358947 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7063b991-5662-4335-9e9e-73f057367708-combined-ca-bundle\") pod \"ovn-controller-5sjkh\" (UID: 
\"7063b991-5662-4335-9e9e-73f057367708\") " pod="openstack/ovn-controller-5sjkh" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.358972 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7063b991-5662-4335-9e9e-73f057367708-var-log-ovn\") pod \"ovn-controller-5sjkh\" (UID: \"7063b991-5662-4335-9e9e-73f057367708\") " pod="openstack/ovn-controller-5sjkh" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.460977 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/222efa1d-6b61-49a8-8b6e-ce7276718806-scripts\") pod \"ovn-controller-ovs-tmz28\" (UID: \"222efa1d-6b61-49a8-8b6e-ce7276718806\") " pod="openstack/ovn-controller-ovs-tmz28" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.461035 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/222efa1d-6b61-49a8-8b6e-ce7276718806-etc-ovs\") pod \"ovn-controller-ovs-tmz28\" (UID: \"222efa1d-6b61-49a8-8b6e-ce7276718806\") " pod="openstack/ovn-controller-ovs-tmz28" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.461063 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg5c2\" (UniqueName: \"kubernetes.io/projected/222efa1d-6b61-49a8-8b6e-ce7276718806-kube-api-access-fg5c2\") pod \"ovn-controller-ovs-tmz28\" (UID: \"222efa1d-6b61-49a8-8b6e-ce7276718806\") " pod="openstack/ovn-controller-ovs-tmz28" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.461111 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7063b991-5662-4335-9e9e-73f057367708-var-run-ovn\") pod \"ovn-controller-5sjkh\" (UID: \"7063b991-5662-4335-9e9e-73f057367708\") " pod="openstack/ovn-controller-5sjkh" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.461135 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/222efa1d-6b61-49a8-8b6e-ce7276718806-var-lib\") pod \"ovn-controller-ovs-tmz28\" (UID: \"222efa1d-6b61-49a8-8b6e-ce7276718806\") " pod="openstack/ovn-controller-ovs-tmz28" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.461157 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/7063b991-5662-4335-9e9e-73f057367708-ovn-controller-tls-certs\") pod \"ovn-controller-5sjkh\" (UID: \"7063b991-5662-4335-9e9e-73f057367708\") " pod="openstack/ovn-controller-5sjkh" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.461193 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7063b991-5662-4335-9e9e-73f057367708-scripts\") pod \"ovn-controller-5sjkh\" (UID: \"7063b991-5662-4335-9e9e-73f057367708\") " pod="openstack/ovn-controller-5sjkh" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.461226 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh269\" (UniqueName: \"kubernetes.io/projected/7063b991-5662-4335-9e9e-73f057367708-kube-api-access-jh269\") pod \"ovn-controller-5sjkh\" (UID: \"7063b991-5662-4335-9e9e-73f057367708\") " pod="openstack/ovn-controller-5sjkh" Jan 23 
17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.461260 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7063b991-5662-4335-9e9e-73f057367708-combined-ca-bundle\") pod \"ovn-controller-5sjkh\" (UID: \"7063b991-5662-4335-9e9e-73f057367708\") " pod="openstack/ovn-controller-5sjkh" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.461278 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/222efa1d-6b61-49a8-8b6e-ce7276718806-var-run\") pod \"ovn-controller-ovs-tmz28\" (UID: \"222efa1d-6b61-49a8-8b6e-ce7276718806\") " pod="openstack/ovn-controller-ovs-tmz28" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.461297 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/222efa1d-6b61-49a8-8b6e-ce7276718806-var-log\") pod \"ovn-controller-ovs-tmz28\" (UID: \"222efa1d-6b61-49a8-8b6e-ce7276718806\") " pod="openstack/ovn-controller-ovs-tmz28" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.461322 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7063b991-5662-4335-9e9e-73f057367708-var-log-ovn\") pod \"ovn-controller-5sjkh\" (UID: \"7063b991-5662-4335-9e9e-73f057367708\") " pod="openstack/ovn-controller-5sjkh" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.461342 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7063b991-5662-4335-9e9e-73f057367708-var-run\") pod \"ovn-controller-5sjkh\" (UID: \"7063b991-5662-4335-9e9e-73f057367708\") " pod="openstack/ovn-controller-5sjkh" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.461845 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7063b991-5662-4335-9e9e-73f057367708-var-run-ovn\") pod \"ovn-controller-5sjkh\" (UID: \"7063b991-5662-4335-9e9e-73f057367708\") " pod="openstack/ovn-controller-5sjkh" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.461858 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7063b991-5662-4335-9e9e-73f057367708-var-log-ovn\") pod \"ovn-controller-5sjkh\" (UID: \"7063b991-5662-4335-9e9e-73f057367708\") " pod="openstack/ovn-controller-5sjkh" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.462272 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7063b991-5662-4335-9e9e-73f057367708-var-run\") pod \"ovn-controller-5sjkh\" (UID: \"7063b991-5662-4335-9e9e-73f057367708\") " pod="openstack/ovn-controller-5sjkh" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.463731 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7063b991-5662-4335-9e9e-73f057367708-scripts\") pod \"ovn-controller-5sjkh\" (UID: \"7063b991-5662-4335-9e9e-73f057367708\") " pod="openstack/ovn-controller-5sjkh" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.482316 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7063b991-5662-4335-9e9e-73f057367708-combined-ca-bundle\") pod 
\"ovn-controller-5sjkh\" (UID: \"7063b991-5662-4335-9e9e-73f057367708\") " pod="openstack/ovn-controller-5sjkh" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.483141 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/7063b991-5662-4335-9e9e-73f057367708-ovn-controller-tls-certs\") pod \"ovn-controller-5sjkh\" (UID: \"7063b991-5662-4335-9e9e-73f057367708\") " pod="openstack/ovn-controller-5sjkh" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.494846 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh269\" (UniqueName: \"kubernetes.io/projected/7063b991-5662-4335-9e9e-73f057367708-kube-api-access-jh269\") pod \"ovn-controller-5sjkh\" (UID: \"7063b991-5662-4335-9e9e-73f057367708\") " pod="openstack/ovn-controller-5sjkh" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.563057 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/222efa1d-6b61-49a8-8b6e-ce7276718806-var-lib\") pod \"ovn-controller-ovs-tmz28\" (UID: \"222efa1d-6b61-49a8-8b6e-ce7276718806\") " pod="openstack/ovn-controller-ovs-tmz28" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.563177 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/222efa1d-6b61-49a8-8b6e-ce7276718806-var-run\") pod \"ovn-controller-ovs-tmz28\" (UID: \"222efa1d-6b61-49a8-8b6e-ce7276718806\") " pod="openstack/ovn-controller-ovs-tmz28" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.563196 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/222efa1d-6b61-49a8-8b6e-ce7276718806-var-log\") pod \"ovn-controller-ovs-tmz28\" (UID: \"222efa1d-6b61-49a8-8b6e-ce7276718806\") " pod="openstack/ovn-controller-ovs-tmz28" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.563300 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/222efa1d-6b61-49a8-8b6e-ce7276718806-var-run\") pod \"ovn-controller-ovs-tmz28\" (UID: \"222efa1d-6b61-49a8-8b6e-ce7276718806\") " pod="openstack/ovn-controller-ovs-tmz28" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.563348 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/222efa1d-6b61-49a8-8b6e-ce7276718806-scripts\") pod \"ovn-controller-ovs-tmz28\" (UID: \"222efa1d-6b61-49a8-8b6e-ce7276718806\") " pod="openstack/ovn-controller-ovs-tmz28" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.563370 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/222efa1d-6b61-49a8-8b6e-ce7276718806-etc-ovs\") pod \"ovn-controller-ovs-tmz28\" (UID: \"222efa1d-6b61-49a8-8b6e-ce7276718806\") " pod="openstack/ovn-controller-ovs-tmz28" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.563397 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg5c2\" (UniqueName: \"kubernetes.io/projected/222efa1d-6b61-49a8-8b6e-ce7276718806-kube-api-access-fg5c2\") pod \"ovn-controller-ovs-tmz28\" (UID: \"222efa1d-6b61-49a8-8b6e-ce7276718806\") " pod="openstack/ovn-controller-ovs-tmz28" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.563418 4606 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/222efa1d-6b61-49a8-8b6e-ce7276718806-var-lib\") pod \"ovn-controller-ovs-tmz28\" (UID: \"222efa1d-6b61-49a8-8b6e-ce7276718806\") " pod="openstack/ovn-controller-ovs-tmz28" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.563489 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/222efa1d-6b61-49a8-8b6e-ce7276718806-var-log\") pod \"ovn-controller-ovs-tmz28\" (UID: \"222efa1d-6b61-49a8-8b6e-ce7276718806\") " pod="openstack/ovn-controller-ovs-tmz28" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.563967 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/222efa1d-6b61-49a8-8b6e-ce7276718806-etc-ovs\") pod \"ovn-controller-ovs-tmz28\" (UID: \"222efa1d-6b61-49a8-8b6e-ce7276718806\") " pod="openstack/ovn-controller-ovs-tmz28" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.579143 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg5c2\" (UniqueName: \"kubernetes.io/projected/222efa1d-6b61-49a8-8b6e-ce7276718806-kube-api-access-fg5c2\") pod \"ovn-controller-ovs-tmz28\" (UID: \"222efa1d-6b61-49a8-8b6e-ce7276718806\") " pod="openstack/ovn-controller-ovs-tmz28" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.596733 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5sjkh" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.612663 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.613987 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.616477 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-f5qpg" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.616788 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.617307 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.617495 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.617700 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.630158 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.765537 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0221a06d-cf70-4013-8ba4-41d8e7515bb3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0221a06d-cf70-4013-8ba4-41d8e7515bb3\") " pod="openstack/ovsdbserver-nb-0" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.765653 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0221a06d-cf70-4013-8ba4-41d8e7515bb3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0221a06d-cf70-4013-8ba4-41d8e7515bb3\") " pod="openstack/ovsdbserver-nb-0" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.765686 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0221a06d-cf70-4013-8ba4-41d8e7515bb3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0221a06d-cf70-4013-8ba4-41d8e7515bb3\") " pod="openstack/ovsdbserver-nb-0" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.765734 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0221a06d-cf70-4013-8ba4-41d8e7515bb3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0221a06d-cf70-4013-8ba4-41d8e7515bb3\") " pod="openstack/ovsdbserver-nb-0" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.765806 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0221a06d-cf70-4013-8ba4-41d8e7515bb3\") " pod="openstack/ovsdbserver-nb-0" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.765828 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0221a06d-cf70-4013-8ba4-41d8e7515bb3-config\") pod \"ovsdbserver-nb-0\" (UID: \"0221a06d-cf70-4013-8ba4-41d8e7515bb3\") " pod="openstack/ovsdbserver-nb-0" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.765852 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0221a06d-cf70-4013-8ba4-41d8e7515bb3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0221a06d-cf70-4013-8ba4-41d8e7515bb3\") " pod="openstack/ovsdbserver-nb-0" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.765867 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpll8\" (UniqueName: \"kubernetes.io/projected/0221a06d-cf70-4013-8ba4-41d8e7515bb3-kube-api-access-hpll8\") pod \"ovsdbserver-nb-0\" (UID: \"0221a06d-cf70-4013-8ba4-41d8e7515bb3\") " pod="openstack/ovsdbserver-nb-0" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.866699 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0221a06d-cf70-4013-8ba4-41d8e7515bb3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0221a06d-cf70-4013-8ba4-41d8e7515bb3\") " pod="openstack/ovsdbserver-nb-0" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.866747 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0221a06d-cf70-4013-8ba4-41d8e7515bb3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0221a06d-cf70-4013-8ba4-41d8e7515bb3\") " pod="openstack/ovsdbserver-nb-0" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.866779 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0221a06d-cf70-4013-8ba4-41d8e7515bb3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0221a06d-cf70-4013-8ba4-41d8e7515bb3\") " pod="openstack/ovsdbserver-nb-0" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.866831 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0221a06d-cf70-4013-8ba4-41d8e7515bb3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0221a06d-cf70-4013-8ba4-41d8e7515bb3\") " pod="openstack/ovsdbserver-nb-0" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.866864 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0221a06d-cf70-4013-8ba4-41d8e7515bb3\") " pod="openstack/ovsdbserver-nb-0" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.866885 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0221a06d-cf70-4013-8ba4-41d8e7515bb3-config\") pod \"ovsdbserver-nb-0\" (UID: \"0221a06d-cf70-4013-8ba4-41d8e7515bb3\") " pod="openstack/ovsdbserver-nb-0" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.866905 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0221a06d-cf70-4013-8ba4-41d8e7515bb3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0221a06d-cf70-4013-8ba4-41d8e7515bb3\") " pod="openstack/ovsdbserver-nb-0" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.866925 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpll8\" (UniqueName: \"kubernetes.io/projected/0221a06d-cf70-4013-8ba4-41d8e7515bb3-kube-api-access-hpll8\") pod \"ovsdbserver-nb-0\" (UID: \"0221a06d-cf70-4013-8ba4-41d8e7515bb3\") " pod="openstack/ovsdbserver-nb-0" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 
17:07:56.867227 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0221a06d-cf70-4013-8ba4-41d8e7515bb3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0221a06d-cf70-4013-8ba4-41d8e7515bb3\") " pod="openstack/ovsdbserver-nb-0" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.867463 4606 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0221a06d-cf70-4013-8ba4-41d8e7515bb3\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-nb-0" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.867874 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0221a06d-cf70-4013-8ba4-41d8e7515bb3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0221a06d-cf70-4013-8ba4-41d8e7515bb3\") " pod="openstack/ovsdbserver-nb-0" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.867981 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0221a06d-cf70-4013-8ba4-41d8e7515bb3-config\") pod \"ovsdbserver-nb-0\" (UID: \"0221a06d-cf70-4013-8ba4-41d8e7515bb3\") " pod="openstack/ovsdbserver-nb-0" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.874753 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0221a06d-cf70-4013-8ba4-41d8e7515bb3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0221a06d-cf70-4013-8ba4-41d8e7515bb3\") " pod="openstack/ovsdbserver-nb-0" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.878778 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0221a06d-cf70-4013-8ba4-41d8e7515bb3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0221a06d-cf70-4013-8ba4-41d8e7515bb3\") " pod="openstack/ovsdbserver-nb-0" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.883435 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0221a06d-cf70-4013-8ba4-41d8e7515bb3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0221a06d-cf70-4013-8ba4-41d8e7515bb3\") " pod="openstack/ovsdbserver-nb-0" Jan 23 17:07:56 crc kubenswrapper[4606]: I0123 17:07:56.888848 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0221a06d-cf70-4013-8ba4-41d8e7515bb3\") " pod="openstack/ovsdbserver-nb-0" Jan 23 17:07:57 crc kubenswrapper[4606]: I0123 17:07:57.698181 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/222efa1d-6b61-49a8-8b6e-ce7276718806-scripts\") pod \"ovn-controller-ovs-tmz28\" (UID: \"222efa1d-6b61-49a8-8b6e-ce7276718806\") " pod="openstack/ovn-controller-ovs-tmz28" Jan 23 17:07:57 crc kubenswrapper[4606]: I0123 17:07:57.717708 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpll8\" (UniqueName: \"kubernetes.io/projected/0221a06d-cf70-4013-8ba4-41d8e7515bb3-kube-api-access-hpll8\") pod \"ovsdbserver-nb-0\" (UID: \"0221a06d-cf70-4013-8ba4-41d8e7515bb3\") " pod="openstack/ovsdbserver-nb-0" Jan 23 
17:07:57 crc kubenswrapper[4606]: I0123 17:07:57.838018 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 23 17:07:57 crc kubenswrapper[4606]: I0123 17:07:57.869906 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-tmz28" Jan 23 17:08:00 crc kubenswrapper[4606]: I0123 17:08:00.457930 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 23 17:08:00 crc kubenswrapper[4606]: I0123 17:08:00.459919 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 23 17:08:00 crc kubenswrapper[4606]: I0123 17:08:00.461832 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 23 17:08:00 crc kubenswrapper[4606]: I0123 17:08:00.463292 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-crtm2" Jan 23 17:08:00 crc kubenswrapper[4606]: I0123 17:08:00.464574 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 23 17:08:00 crc kubenswrapper[4606]: I0123 17:08:00.464669 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 23 17:08:00 crc kubenswrapper[4606]: I0123 17:08:00.486278 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 23 17:08:00 crc kubenswrapper[4606]: I0123 17:08:00.522302 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/14568d10-e2cf-4f22-9c72-62924ba927e8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"14568d10-e2cf-4f22-9c72-62924ba927e8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 17:08:00 crc kubenswrapper[4606]: I0123 17:08:00.522373 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14568d10-e2cf-4f22-9c72-62924ba927e8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"14568d10-e2cf-4f22-9c72-62924ba927e8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 17:08:00 crc kubenswrapper[4606]: I0123 17:08:00.522392 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14568d10-e2cf-4f22-9c72-62924ba927e8-config\") pod \"ovsdbserver-sb-0\" (UID: \"14568d10-e2cf-4f22-9c72-62924ba927e8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 17:08:00 crc kubenswrapper[4606]: I0123 17:08:00.522535 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14568d10-e2cf-4f22-9c72-62924ba927e8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"14568d10-e2cf-4f22-9c72-62924ba927e8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 17:08:00 crc kubenswrapper[4606]: I0123 17:08:00.522579 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/14568d10-e2cf-4f22-9c72-62924ba927e8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"14568d10-e2cf-4f22-9c72-62924ba927e8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 17:08:00 crc kubenswrapper[4606]: I0123 17:08:00.522761 4606 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"14568d10-e2cf-4f22-9c72-62924ba927e8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 17:08:00 crc kubenswrapper[4606]: I0123 17:08:00.522911 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/14568d10-e2cf-4f22-9c72-62924ba927e8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"14568d10-e2cf-4f22-9c72-62924ba927e8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 17:08:00 crc kubenswrapper[4606]: I0123 17:08:00.523024 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbj5l\" (UniqueName: \"kubernetes.io/projected/14568d10-e2cf-4f22-9c72-62924ba927e8-kube-api-access-cbj5l\") pod \"ovsdbserver-sb-0\" (UID: \"14568d10-e2cf-4f22-9c72-62924ba927e8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 17:08:00 crc kubenswrapper[4606]: I0123 17:08:00.624010 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/14568d10-e2cf-4f22-9c72-62924ba927e8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"14568d10-e2cf-4f22-9c72-62924ba927e8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 17:08:00 crc kubenswrapper[4606]: I0123 17:08:00.624061 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14568d10-e2cf-4f22-9c72-62924ba927e8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"14568d10-e2cf-4f22-9c72-62924ba927e8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 17:08:00 crc kubenswrapper[4606]: I0123 17:08:00.624132 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"14568d10-e2cf-4f22-9c72-62924ba927e8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 17:08:00 crc kubenswrapper[4606]: I0123 17:08:00.624166 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/14568d10-e2cf-4f22-9c72-62924ba927e8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"14568d10-e2cf-4f22-9c72-62924ba927e8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 17:08:00 crc kubenswrapper[4606]: I0123 17:08:00.624215 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbj5l\" (UniqueName: \"kubernetes.io/projected/14568d10-e2cf-4f22-9c72-62924ba927e8-kube-api-access-cbj5l\") pod \"ovsdbserver-sb-0\" (UID: \"14568d10-e2cf-4f22-9c72-62924ba927e8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 17:08:00 crc kubenswrapper[4606]: I0123 17:08:00.624272 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/14568d10-e2cf-4f22-9c72-62924ba927e8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"14568d10-e2cf-4f22-9c72-62924ba927e8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 17:08:00 crc kubenswrapper[4606]: I0123 17:08:00.624309 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14568d10-e2cf-4f22-9c72-62924ba927e8-scripts\") pod \"ovsdbserver-sb-0\" (UID: 
\"14568d10-e2cf-4f22-9c72-62924ba927e8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 17:08:00 crc kubenswrapper[4606]: I0123 17:08:00.624330 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14568d10-e2cf-4f22-9c72-62924ba927e8-config\") pod \"ovsdbserver-sb-0\" (UID: \"14568d10-e2cf-4f22-9c72-62924ba927e8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 17:08:00 crc kubenswrapper[4606]: I0123 17:08:00.624673 4606 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"14568d10-e2cf-4f22-9c72-62924ba927e8\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0" Jan 23 17:08:00 crc kubenswrapper[4606]: I0123 17:08:00.625477 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/14568d10-e2cf-4f22-9c72-62924ba927e8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"14568d10-e2cf-4f22-9c72-62924ba927e8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 17:08:00 crc kubenswrapper[4606]: I0123 17:08:00.625686 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14568d10-e2cf-4f22-9c72-62924ba927e8-config\") pod \"ovsdbserver-sb-0\" (UID: \"14568d10-e2cf-4f22-9c72-62924ba927e8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 17:08:00 crc kubenswrapper[4606]: I0123 17:08:00.627066 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14568d10-e2cf-4f22-9c72-62924ba927e8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"14568d10-e2cf-4f22-9c72-62924ba927e8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 17:08:00 crc kubenswrapper[4606]: I0123 17:08:00.630296 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/14568d10-e2cf-4f22-9c72-62924ba927e8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"14568d10-e2cf-4f22-9c72-62924ba927e8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 17:08:00 crc kubenswrapper[4606]: I0123 17:08:00.630819 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14568d10-e2cf-4f22-9c72-62924ba927e8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"14568d10-e2cf-4f22-9c72-62924ba927e8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 17:08:00 crc kubenswrapper[4606]: I0123 17:08:00.642178 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/14568d10-e2cf-4f22-9c72-62924ba927e8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"14568d10-e2cf-4f22-9c72-62924ba927e8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 17:08:00 crc kubenswrapper[4606]: I0123 17:08:00.647481 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbj5l\" (UniqueName: \"kubernetes.io/projected/14568d10-e2cf-4f22-9c72-62924ba927e8-kube-api-access-cbj5l\") pod \"ovsdbserver-sb-0\" (UID: \"14568d10-e2cf-4f22-9c72-62924ba927e8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 17:08:00 crc kubenswrapper[4606]: I0123 17:08:00.681936 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod 
\"ovsdbserver-sb-0\" (UID: \"14568d10-e2cf-4f22-9c72-62924ba927e8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 17:08:00 crc kubenswrapper[4606]: I0123 17:08:00.787160 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 23 17:08:04 crc kubenswrapper[4606]: E0123 17:08:04.177129 4606 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 23 17:08:04 crc kubenswrapper[4606]: E0123 17:08:04.177576 4606 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4x9lf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-2729j_openstack(ba9414d7-a6fb-4615-8fa5-62a080f8d394): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 17:08:04 crc kubenswrapper[4606]: E0123 17:08:04.178845 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-2729j" podUID="ba9414d7-a6fb-4615-8fa5-62a080f8d394" Jan 23 17:08:04 crc kubenswrapper[4606]: E0123 17:08:04.238811 4606 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 23 17:08:04 crc kubenswrapper[4606]: E0123 17:08:04.238943 4606 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-77tgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-njc6l_openstack(93c6db33-b9df-4456-a935-1798d0a9746a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 17:08:04 crc kubenswrapper[4606]: E0123 17:08:04.240157 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-njc6l" podUID="93c6db33-b9df-4456-a935-1798d0a9746a" Jan 23 17:08:04 crc kubenswrapper[4606]: I0123 17:08:04.567986 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 23 17:08:05 crc kubenswrapper[4606]: W0123 17:08:05.595641 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5ffebdb_76b6_40da_b0e0_b48d596fdea8.slice/crio-95f723032aa0f499aff1d0ef8147b3c28b9a03e0fde5118db8782e61695247f2 WatchSource:0}: Error finding container 95f723032aa0f499aff1d0ef8147b3c28b9a03e0fde5118db8782e61695247f2: Status 404 returned error can't find the container with id 95f723032aa0f499aff1d0ef8147b3c28b9a03e0fde5118db8782e61695247f2 Jan 23 17:08:05 crc kubenswrapper[4606]: I0123 17:08:05.786946 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-njc6l" Jan 23 17:08:05 crc kubenswrapper[4606]: I0123 17:08:05.788622 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-2729j" Jan 23 17:08:05 crc kubenswrapper[4606]: I0123 17:08:05.824446 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4x9lf\" (UniqueName: \"kubernetes.io/projected/ba9414d7-a6fb-4615-8fa5-62a080f8d394-kube-api-access-4x9lf\") pod \"ba9414d7-a6fb-4615-8fa5-62a080f8d394\" (UID: \"ba9414d7-a6fb-4615-8fa5-62a080f8d394\") " Jan 23 17:08:05 crc kubenswrapper[4606]: I0123 17:08:05.825012 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93c6db33-b9df-4456-a935-1798d0a9746a-config\") pod \"93c6db33-b9df-4456-a935-1798d0a9746a\" (UID: \"93c6db33-b9df-4456-a935-1798d0a9746a\") " Jan 23 17:08:05 crc kubenswrapper[4606]: I0123 17:08:05.825102 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba9414d7-a6fb-4615-8fa5-62a080f8d394-dns-svc\") pod \"ba9414d7-a6fb-4615-8fa5-62a080f8d394\" (UID: \"ba9414d7-a6fb-4615-8fa5-62a080f8d394\") " Jan 23 17:08:05 crc kubenswrapper[4606]: I0123 17:08:05.825541 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93c6db33-b9df-4456-a935-1798d0a9746a-config" (OuterVolumeSpecName: "config") pod "93c6db33-b9df-4456-a935-1798d0a9746a" (UID: "93c6db33-b9df-4456-a935-1798d0a9746a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:08:05 crc kubenswrapper[4606]: I0123 17:08:05.826006 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba9414d7-a6fb-4615-8fa5-62a080f8d394-config" (OuterVolumeSpecName: "config") pod "ba9414d7-a6fb-4615-8fa5-62a080f8d394" (UID: "ba9414d7-a6fb-4615-8fa5-62a080f8d394"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:08:05 crc kubenswrapper[4606]: I0123 17:08:05.826380 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba9414d7-a6fb-4615-8fa5-62a080f8d394-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ba9414d7-a6fb-4615-8fa5-62a080f8d394" (UID: "ba9414d7-a6fb-4615-8fa5-62a080f8d394"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:08:05 crc kubenswrapper[4606]: I0123 17:08:05.826410 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba9414d7-a6fb-4615-8fa5-62a080f8d394-config\") pod \"ba9414d7-a6fb-4615-8fa5-62a080f8d394\" (UID: \"ba9414d7-a6fb-4615-8fa5-62a080f8d394\") " Jan 23 17:08:05 crc kubenswrapper[4606]: I0123 17:08:05.826468 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77tgw\" (UniqueName: \"kubernetes.io/projected/93c6db33-b9df-4456-a935-1798d0a9746a-kube-api-access-77tgw\") pod \"93c6db33-b9df-4456-a935-1798d0a9746a\" (UID: \"93c6db33-b9df-4456-a935-1798d0a9746a\") " Jan 23 17:08:05 crc kubenswrapper[4606]: I0123 17:08:05.827206 4606 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93c6db33-b9df-4456-a935-1798d0a9746a-config\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:05 crc kubenswrapper[4606]: I0123 17:08:05.827220 4606 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba9414d7-a6fb-4615-8fa5-62a080f8d394-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:05 crc kubenswrapper[4606]: I0123 17:08:05.827229 4606 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba9414d7-a6fb-4615-8fa5-62a080f8d394-config\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:05 crc kubenswrapper[4606]: I0123 17:08:05.835771 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba9414d7-a6fb-4615-8fa5-62a080f8d394-kube-api-access-4x9lf" (OuterVolumeSpecName: "kube-api-access-4x9lf") pod "ba9414d7-a6fb-4615-8fa5-62a080f8d394" (UID: "ba9414d7-a6fb-4615-8fa5-62a080f8d394"). InnerVolumeSpecName "kube-api-access-4x9lf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:08:05 crc kubenswrapper[4606]: I0123 17:08:05.840743 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93c6db33-b9df-4456-a935-1798d0a9746a-kube-api-access-77tgw" (OuterVolumeSpecName: "kube-api-access-77tgw") pod "93c6db33-b9df-4456-a935-1798d0a9746a" (UID: "93c6db33-b9df-4456-a935-1798d0a9746a"). InnerVolumeSpecName "kube-api-access-77tgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:08:05 crc kubenswrapper[4606]: I0123 17:08:05.929388 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77tgw\" (UniqueName: \"kubernetes.io/projected/93c6db33-b9df-4456-a935-1798d0a9746a-kube-api-access-77tgw\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:05 crc kubenswrapper[4606]: I0123 17:08:05.929417 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4x9lf\" (UniqueName: \"kubernetes.io/projected/ba9414d7-a6fb-4615-8fa5-62a080f8d394-kube-api-access-4x9lf\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:06 crc kubenswrapper[4606]: I0123 17:08:06.080059 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-njc6l" event={"ID":"93c6db33-b9df-4456-a935-1798d0a9746a","Type":"ContainerDied","Data":"bda187477ba8ef5267389977a1bc4e878814f88c922e208322fa0bc8b26bcd9e"} Jan 23 17:08:06 crc kubenswrapper[4606]: I0123 17:08:06.080144 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-njc6l" Jan 23 17:08:06 crc kubenswrapper[4606]: I0123 17:08:06.091539 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-2729j" event={"ID":"ba9414d7-a6fb-4615-8fa5-62a080f8d394","Type":"ContainerDied","Data":"40fd3be30dfb5d8b255faaeff3937ca84d447ee8aac144738484d1e4a72a9784"} Jan 23 17:08:06 crc kubenswrapper[4606]: I0123 17:08:06.091652 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-2729j" Jan 23 17:08:06 crc kubenswrapper[4606]: I0123 17:08:06.095065 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"c5ffebdb-76b6-40da-b0e0-b48d596fdea8","Type":"ContainerStarted","Data":"95f723032aa0f499aff1d0ef8147b3c28b9a03e0fde5118db8782e61695247f2"} Jan 23 17:08:06 crc kubenswrapper[4606]: I0123 17:08:06.097323 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-kfwtp" event={"ID":"fa0b5879-6896-403b-8565-7399b44e9ee2","Type":"ContainerStarted","Data":"d87a3fdbebce3cf7c99dcea7c659025da4ba40f056f52986ddc22dd970fb9120"} Jan 23 17:08:06 crc kubenswrapper[4606]: I0123 17:08:06.237072 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-njc6l"] Jan 23 17:08:06 crc kubenswrapper[4606]: I0123 17:08:06.243539 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-njc6l"] Jan 23 17:08:06 crc kubenswrapper[4606]: I0123 17:08:06.282696 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2729j"] Jan 23 17:08:06 crc kubenswrapper[4606]: I0123 17:08:06.289948 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2729j"] Jan 23 17:08:06 crc kubenswrapper[4606]: I0123 17:08:06.330430 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 23 17:08:06 crc kubenswrapper[4606]: I0123 17:08:06.384579 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93c6db33-b9df-4456-a935-1798d0a9746a" path="/var/lib/kubelet/pods/93c6db33-b9df-4456-a935-1798d0a9746a/volumes" Jan 23 17:08:06 crc kubenswrapper[4606]: I0123 17:08:06.384962 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba9414d7-a6fb-4615-8fa5-62a080f8d394" path="/var/lib/kubelet/pods/ba9414d7-a6fb-4615-8fa5-62a080f8d394/volumes" Jan 23 17:08:06 crc kubenswrapper[4606]: I0123 17:08:06.655133 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 17:08:06 crc kubenswrapper[4606]: I0123 17:08:06.667569 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5sjkh"] Jan 23 17:08:06 crc kubenswrapper[4606]: I0123 17:08:06.732476 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 23 17:08:06 crc kubenswrapper[4606]: I0123 17:08:06.749452 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 23 17:08:06 crc kubenswrapper[4606]: I0123 17:08:06.799703 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 23 17:08:06 crc kubenswrapper[4606]: W0123 17:08:06.826384 4606 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0221a06d_cf70_4013_8ba4_41d8e7515bb3.slice/crio-ee3aba784213fb6901a90b650474442ed4a608cc644e5e01d4ae61c8ea66d373 WatchSource:0}: Error finding container ee3aba784213fb6901a90b650474442ed4a608cc644e5e01d4ae61c8ea66d373: Status 404 returned error can't find the container with id ee3aba784213fb6901a90b650474442ed4a608cc644e5e01d4ae61c8ea66d373 Jan 23 17:08:06 crc kubenswrapper[4606]: W0123 17:08:06.827772 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8c963c6_bbae_4e9b_9c6e_7bd0fbf10030.slice/crio-e75b6b74a0965d140038448945c2e45c1af5a151e5c3f4e7556749fa0b60c5ea WatchSource:0}: Error finding container e75b6b74a0965d140038448945c2e45c1af5a151e5c3f4e7556749fa0b60c5ea: Status 404 returned error can't find the container with id e75b6b74a0965d140038448945c2e45c1af5a151e5c3f4e7556749fa0b60c5ea Jan 23 17:08:06 crc kubenswrapper[4606]: W0123 17:08:06.836747 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14568d10_e2cf_4f22_9c72_62924ba927e8.slice/crio-b0f7fcf6d4b0a7c7656c6341e6d755ad397fe549745cbf74ed366892fc5bcd8e WatchSource:0}: Error finding container b0f7fcf6d4b0a7c7656c6341e6d755ad397fe549745cbf74ed366892fc5bcd8e: Status 404 returned error can't find the container with id b0f7fcf6d4b0a7c7656c6341e6d755ad397fe549745cbf74ed366892fc5bcd8e Jan 23 17:08:06 crc kubenswrapper[4606]: I0123 17:08:06.887719 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-tmz28"] Jan 23 17:08:06 crc kubenswrapper[4606]: W0123 17:08:06.908416 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod222efa1d_6b61_49a8_8b6e_ce7276718806.slice/crio-029c04c3b31f0520e300e165767d9f4af2857603bd9fde32f9cd6180032047d1 WatchSource:0}: Error finding container 029c04c3b31f0520e300e165767d9f4af2857603bd9fde32f9cd6180032047d1: Status 404 returned error can't find the container with id 029c04c3b31f0520e300e165767d9f4af2857603bd9fde32f9cd6180032047d1 Jan 23 17:08:07 crc kubenswrapper[4606]: I0123 17:08:07.106138 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"684acded-8efb-47ea-8bb4-0212874a6826","Type":"ContainerStarted","Data":"3490a9bc4f2d030e5c663808d7c423c1199a8d242faccc74c214b725af2d9935"} Jan 23 17:08:07 crc kubenswrapper[4606]: I0123 17:08:07.107099 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tmz28" event={"ID":"222efa1d-6b61-49a8-8b6e-ce7276718806","Type":"ContainerStarted","Data":"029c04c3b31f0520e300e165767d9f4af2857603bd9fde32f9cd6180032047d1"} Jan 23 17:08:07 crc kubenswrapper[4606]: I0123 17:08:07.108618 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"49d23d39-bb62-4d74-9053-c98f530d7208","Type":"ContainerStarted","Data":"2c91bd93b4144148fdf2a77435349fecfb89e0c1078803bdac620f88688f6b1f"} Jan 23 17:08:07 crc kubenswrapper[4606]: I0123 17:08:07.110105 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cf778ec6-6b7b-492e-948c-0314dc6b26ba","Type":"ContainerStarted","Data":"ca273f8425919426be81643ca0ad0cf561569e62c6c2eed8c0ce72dfda52d72c"} Jan 23 17:08:07 crc kubenswrapper[4606]: I0123 17:08:07.111058 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
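A note on the manager.go:1169 warnings above: cAdvisor sees a crio-* cgroup appear and immediately fails to look the container up (Status 404). Each of these container IDs (ee3aba..., e75b6b..., b0f7fcf..., 029c04...) shows up moments later in a successful ContainerStarted PLEG event, so these appear to be the usual transient race between the cgroup watch and container registration at pod startup rather than a real fault.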
pod="openstack/ovsdbserver-sb-0" event={"ID":"14568d10-e2cf-4f22-9c72-62924ba927e8","Type":"ContainerStarted","Data":"b0f7fcf6d4b0a7c7656c6341e6d755ad397fe549745cbf74ed366892fc5bcd8e"} Jan 23 17:08:07 crc kubenswrapper[4606]: I0123 17:08:07.111992 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0221a06d-cf70-4013-8ba4-41d8e7515bb3","Type":"ContainerStarted","Data":"ee3aba784213fb6901a90b650474442ed4a608cc644e5e01d4ae61c8ea66d373"} Jan 23 17:08:07 crc kubenswrapper[4606]: I0123 17:08:07.117090 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8","Type":"ContainerStarted","Data":"6e3c2049c1808305c7cda1d8688a4edcd56265f42ed8244a6aa2ea7f306f3d6b"} Jan 23 17:08:07 crc kubenswrapper[4606]: I0123 17:08:07.118565 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5sjkh" event={"ID":"7063b991-5662-4335-9e9e-73f057367708","Type":"ContainerStarted","Data":"50a827117ef6c69e52cea8cb07bf04eed4cfb674870bb2b587d595ac050d2451"} Jan 23 17:08:07 crc kubenswrapper[4606]: I0123 17:08:07.120461 4606 generic.go:334] "Generic (PLEG): container finished" podID="4139c6c4-3725-4dbc-931c-8d8ccab53974" containerID="219711c9206e8907a0ff4b4800dc1a635cea574aa0f7363f002261dbb3aa159e" exitCode=0 Jan 23 17:08:07 crc kubenswrapper[4606]: I0123 17:08:07.120617 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2cgtz" event={"ID":"4139c6c4-3725-4dbc-931c-8d8ccab53974","Type":"ContainerDied","Data":"219711c9206e8907a0ff4b4800dc1a635cea574aa0f7363f002261dbb3aa159e"} Jan 23 17:08:07 crc kubenswrapper[4606]: I0123 17:08:07.125129 4606 generic.go:334] "Generic (PLEG): container finished" podID="fa0b5879-6896-403b-8565-7399b44e9ee2" containerID="d87a3fdbebce3cf7c99dcea7c659025da4ba40f056f52986ddc22dd970fb9120" exitCode=0 Jan 23 17:08:07 crc kubenswrapper[4606]: I0123 17:08:07.125170 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-kfwtp" event={"ID":"fa0b5879-6896-403b-8565-7399b44e9ee2","Type":"ContainerDied","Data":"d87a3fdbebce3cf7c99dcea7c659025da4ba40f056f52986ddc22dd970fb9120"} Jan 23 17:08:07 crc kubenswrapper[4606]: I0123 17:08:07.125220 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-kfwtp" event={"ID":"fa0b5879-6896-403b-8565-7399b44e9ee2","Type":"ContainerStarted","Data":"8c66fa3fce5ee2bffccc14a43bca4ba576c887cd2355d72cee0fa01e40ce5a96"} Jan 23 17:08:07 crc kubenswrapper[4606]: I0123 17:08:07.126185 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-kfwtp" Jan 23 17:08:07 crc kubenswrapper[4606]: I0123 17:08:07.127423 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030","Type":"ContainerStarted","Data":"e75b6b74a0965d140038448945c2e45c1af5a151e5c3f4e7556749fa0b60c5ea"} Jan 23 17:08:07 crc kubenswrapper[4606]: I0123 17:08:07.153545 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-kfwtp" podStartSLOduration=3.041929721 podStartE2EDuration="21.153529318s" podCreationTimestamp="2026-01-23 17:07:46 +0000 UTC" firstStartedPulling="2026-01-23 17:07:47.569361969 +0000 UTC m=+901.569554375" lastFinishedPulling="2026-01-23 17:08:05.680961566 +0000 UTC m=+919.681153972" observedRunningTime="2026-01-23 17:08:07.149929669 +0000 
UTC m=+921.150122075" watchObservedRunningTime="2026-01-23 17:08:07.153529318 +0000 UTC m=+921.153721724" Jan 23 17:08:11 crc kubenswrapper[4606]: I0123 17:08:11.845748 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-kfwtp" Jan 23 17:08:11 crc kubenswrapper[4606]: I0123 17:08:11.953373 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2cgtz"] Jan 23 17:08:12 crc kubenswrapper[4606]: I0123 17:08:12.182151 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"c5ffebdb-76b6-40da-b0e0-b48d596fdea8","Type":"ContainerStarted","Data":"d63da1827546aeb539ea3d506ac7dac4f3e721d3a85ac8a7ba4df9083a9bf14c"} Jan 23 17:08:12 crc kubenswrapper[4606]: I0123 17:08:12.183482 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 23 17:08:12 crc kubenswrapper[4606]: I0123 17:08:12.218759 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=16.456644605 podStartE2EDuration="22.218743337s" podCreationTimestamp="2026-01-23 17:07:50 +0000 UTC" firstStartedPulling="2026-01-23 17:08:05.636471858 +0000 UTC m=+919.636664264" lastFinishedPulling="2026-01-23 17:08:11.39857059 +0000 UTC m=+925.398762996" observedRunningTime="2026-01-23 17:08:12.2151581 +0000 UTC m=+926.215350506" watchObservedRunningTime="2026-01-23 17:08:12.218743337 +0000 UTC m=+926.218935733" Jan 23 17:08:12 crc kubenswrapper[4606]: I0123 17:08:12.224327 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2cgtz" event={"ID":"4139c6c4-3725-4dbc-931c-8d8ccab53974","Type":"ContainerStarted","Data":"0ee2f05e8a741725b65790869e3c82ccb83219ed0176504e72048d727012dea7"} Jan 23 17:08:12 crc kubenswrapper[4606]: I0123 17:08:12.225184 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-2cgtz" Jan 23 17:08:12 crc kubenswrapper[4606]: I0123 17:08:12.228697 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030","Type":"ContainerStarted","Data":"a6e65752b83b140df6b55ee5695fa7f4c927ea48ab145d419d40b8ecd370fb0c"} Jan 23 17:08:12 crc kubenswrapper[4606]: I0123 17:08:12.237339 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"684acded-8efb-47ea-8bb4-0212874a6826","Type":"ContainerStarted","Data":"466d37fbe90586d252e7bd578c0be303dda5135f62c42334e4bde960f467509b"} Jan 23 17:08:12 crc kubenswrapper[4606]: I0123 17:08:12.300393 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-2cgtz" podStartSLOduration=7.989312795 podStartE2EDuration="26.300377823s" podCreationTimestamp="2026-01-23 17:07:46 +0000 UTC" firstStartedPulling="2026-01-23 17:07:47.442882173 +0000 UTC m=+901.443074579" lastFinishedPulling="2026-01-23 17:08:05.753947201 +0000 UTC m=+919.754139607" observedRunningTime="2026-01-23 17:08:12.258062999 +0000 UTC m=+926.258255415" watchObservedRunningTime="2026-01-23 17:08:12.300377823 +0000 UTC m=+926.300570229" Jan 23 17:08:13 crc kubenswrapper[4606]: I0123 17:08:13.245572 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-2cgtz" podUID="4139c6c4-3725-4dbc-931c-8d8ccab53974" containerName="dnsmasq-dns" 
containerID="cri-o://0ee2f05e8a741725b65790869e3c82ccb83219ed0176504e72048d727012dea7" gracePeriod=10 Jan 23 17:08:14 crc kubenswrapper[4606]: I0123 17:08:14.269060 4606 generic.go:334] "Generic (PLEG): container finished" podID="4139c6c4-3725-4dbc-931c-8d8ccab53974" containerID="0ee2f05e8a741725b65790869e3c82ccb83219ed0176504e72048d727012dea7" exitCode=0 Jan 23 17:08:14 crc kubenswrapper[4606]: I0123 17:08:14.269147 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2cgtz" event={"ID":"4139c6c4-3725-4dbc-931c-8d8ccab53974","Type":"ContainerDied","Data":"0ee2f05e8a741725b65790869e3c82ccb83219ed0176504e72048d727012dea7"} Jan 23 17:08:17 crc kubenswrapper[4606]: I0123 17:08:17.031960 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4mzqq"] Jan 23 17:08:17 crc kubenswrapper[4606]: I0123 17:08:17.034495 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4mzqq" Jan 23 17:08:17 crc kubenswrapper[4606]: I0123 17:08:17.049105 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4mzqq"] Jan 23 17:08:17 crc kubenswrapper[4606]: I0123 17:08:17.170478 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bca3a129-8ee0-4470-a96b-6d4844a0d063-catalog-content\") pod \"redhat-operators-4mzqq\" (UID: \"bca3a129-8ee0-4470-a96b-6d4844a0d063\") " pod="openshift-marketplace/redhat-operators-4mzqq" Jan 23 17:08:17 crc kubenswrapper[4606]: I0123 17:08:17.170533 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgkrg\" (UniqueName: \"kubernetes.io/projected/bca3a129-8ee0-4470-a96b-6d4844a0d063-kube-api-access-wgkrg\") pod \"redhat-operators-4mzqq\" (UID: \"bca3a129-8ee0-4470-a96b-6d4844a0d063\") " pod="openshift-marketplace/redhat-operators-4mzqq" Jan 23 17:08:17 crc kubenswrapper[4606]: I0123 17:08:17.170833 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bca3a129-8ee0-4470-a96b-6d4844a0d063-utilities\") pod \"redhat-operators-4mzqq\" (UID: \"bca3a129-8ee0-4470-a96b-6d4844a0d063\") " pod="openshift-marketplace/redhat-operators-4mzqq" Jan 23 17:08:17 crc kubenswrapper[4606]: I0123 17:08:17.272240 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bca3a129-8ee0-4470-a96b-6d4844a0d063-catalog-content\") pod \"redhat-operators-4mzqq\" (UID: \"bca3a129-8ee0-4470-a96b-6d4844a0d063\") " pod="openshift-marketplace/redhat-operators-4mzqq" Jan 23 17:08:17 crc kubenswrapper[4606]: I0123 17:08:17.272299 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgkrg\" (UniqueName: \"kubernetes.io/projected/bca3a129-8ee0-4470-a96b-6d4844a0d063-kube-api-access-wgkrg\") pod \"redhat-operators-4mzqq\" (UID: \"bca3a129-8ee0-4470-a96b-6d4844a0d063\") " pod="openshift-marketplace/redhat-operators-4mzqq" Jan 23 17:08:17 crc kubenswrapper[4606]: I0123 17:08:17.272767 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bca3a129-8ee0-4470-a96b-6d4844a0d063-catalog-content\") pod \"redhat-operators-4mzqq\" (UID: \"bca3a129-8ee0-4470-a96b-6d4844a0d063\") " 
pod="openshift-marketplace/redhat-operators-4mzqq" Jan 23 17:08:17 crc kubenswrapper[4606]: I0123 17:08:17.272992 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bca3a129-8ee0-4470-a96b-6d4844a0d063-utilities\") pod \"redhat-operators-4mzqq\" (UID: \"bca3a129-8ee0-4470-a96b-6d4844a0d063\") " pod="openshift-marketplace/redhat-operators-4mzqq" Jan 23 17:08:17 crc kubenswrapper[4606]: I0123 17:08:17.273358 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bca3a129-8ee0-4470-a96b-6d4844a0d063-utilities\") pod \"redhat-operators-4mzqq\" (UID: \"bca3a129-8ee0-4470-a96b-6d4844a0d063\") " pod="openshift-marketplace/redhat-operators-4mzqq" Jan 23 17:08:17 crc kubenswrapper[4606]: I0123 17:08:17.305123 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgkrg\" (UniqueName: \"kubernetes.io/projected/bca3a129-8ee0-4470-a96b-6d4844a0d063-kube-api-access-wgkrg\") pod \"redhat-operators-4mzqq\" (UID: \"bca3a129-8ee0-4470-a96b-6d4844a0d063\") " pod="openshift-marketplace/redhat-operators-4mzqq" Jan 23 17:08:17 crc kubenswrapper[4606]: I0123 17:08:17.366128 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4mzqq" Jan 23 17:08:18 crc kubenswrapper[4606]: I0123 17:08:18.231643 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-2cgtz" Jan 23 17:08:18 crc kubenswrapper[4606]: I0123 17:08:18.324619 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2cgtz" event={"ID":"4139c6c4-3725-4dbc-931c-8d8ccab53974","Type":"ContainerDied","Data":"6a8f2ac27b602918fd0c938272eb8cc8979e553f773ef5ca4c75b27db1901a7a"} Jan 23 17:08:18 crc kubenswrapper[4606]: I0123 17:08:18.324716 4606 scope.go:117] "RemoveContainer" containerID="0ee2f05e8a741725b65790869e3c82ccb83219ed0176504e72048d727012dea7" Jan 23 17:08:18 crc kubenswrapper[4606]: I0123 17:08:18.325038 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-2cgtz" Jan 23 17:08:18 crc kubenswrapper[4606]: I0123 17:08:18.412985 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4139c6c4-3725-4dbc-931c-8d8ccab53974-dns-svc\") pod \"4139c6c4-3725-4dbc-931c-8d8ccab53974\" (UID: \"4139c6c4-3725-4dbc-931c-8d8ccab53974\") " Jan 23 17:08:18 crc kubenswrapper[4606]: I0123 17:08:18.413454 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4139c6c4-3725-4dbc-931c-8d8ccab53974-config\") pod \"4139c6c4-3725-4dbc-931c-8d8ccab53974\" (UID: \"4139c6c4-3725-4dbc-931c-8d8ccab53974\") " Jan 23 17:08:18 crc kubenswrapper[4606]: I0123 17:08:18.413507 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmtz9\" (UniqueName: \"kubernetes.io/projected/4139c6c4-3725-4dbc-931c-8d8ccab53974-kube-api-access-dmtz9\") pod \"4139c6c4-3725-4dbc-931c-8d8ccab53974\" (UID: \"4139c6c4-3725-4dbc-931c-8d8ccab53974\") " Jan 23 17:08:18 crc kubenswrapper[4606]: I0123 17:08:18.433979 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4139c6c4-3725-4dbc-931c-8d8ccab53974-kube-api-access-dmtz9" (OuterVolumeSpecName: "kube-api-access-dmtz9") pod "4139c6c4-3725-4dbc-931c-8d8ccab53974" (UID: "4139c6c4-3725-4dbc-931c-8d8ccab53974"). InnerVolumeSpecName "kube-api-access-dmtz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:08:18 crc kubenswrapper[4606]: I0123 17:08:18.470065 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4139c6c4-3725-4dbc-931c-8d8ccab53974-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4139c6c4-3725-4dbc-931c-8d8ccab53974" (UID: "4139c6c4-3725-4dbc-931c-8d8ccab53974"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:08:18 crc kubenswrapper[4606]: I0123 17:08:18.509816 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4139c6c4-3725-4dbc-931c-8d8ccab53974-config" (OuterVolumeSpecName: "config") pod "4139c6c4-3725-4dbc-931c-8d8ccab53974" (UID: "4139c6c4-3725-4dbc-931c-8d8ccab53974"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:08:18 crc kubenswrapper[4606]: I0123 17:08:18.514955 4606 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4139c6c4-3725-4dbc-931c-8d8ccab53974-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:18 crc kubenswrapper[4606]: I0123 17:08:18.514988 4606 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4139c6c4-3725-4dbc-931c-8d8ccab53974-config\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:18 crc kubenswrapper[4606]: I0123 17:08:18.514999 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmtz9\" (UniqueName: \"kubernetes.io/projected/4139c6c4-3725-4dbc-931c-8d8ccab53974-kube-api-access-dmtz9\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:18 crc kubenswrapper[4606]: I0123 17:08:18.666879 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2cgtz"] Jan 23 17:08:18 crc kubenswrapper[4606]: I0123 17:08:18.673421 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2cgtz"] Jan 23 17:08:19 crc kubenswrapper[4606]: I0123 17:08:19.251556 4606 scope.go:117] "RemoveContainer" containerID="219711c9206e8907a0ff4b4800dc1a635cea574aa0f7363f002261dbb3aa159e" Jan 23 17:08:19 crc kubenswrapper[4606]: I0123 17:08:19.756367 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4mzqq"] Jan 23 17:08:20 crc kubenswrapper[4606]: I0123 17:08:20.351227 4606 generic.go:334] "Generic (PLEG): container finished" podID="bca3a129-8ee0-4470-a96b-6d4844a0d063" containerID="5b3427d4b655c9e959f6c6f7ae0b949c181423c1fa0731b549d51f876d2cd80e" exitCode=0 Jan 23 17:08:20 crc kubenswrapper[4606]: I0123 17:08:20.351289 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4mzqq" event={"ID":"bca3a129-8ee0-4470-a96b-6d4844a0d063","Type":"ContainerDied","Data":"5b3427d4b655c9e959f6c6f7ae0b949c181423c1fa0731b549d51f876d2cd80e"} Jan 23 17:08:20 crc kubenswrapper[4606]: I0123 17:08:20.351576 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4mzqq" event={"ID":"bca3a129-8ee0-4470-a96b-6d4844a0d063","Type":"ContainerStarted","Data":"5e7368bd215f72334c157d221fa3134cd0fc9a300de2981fe8dc92dc58680aa6"} Jan 23 17:08:20 crc kubenswrapper[4606]: I0123 17:08:20.354885 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0221a06d-cf70-4013-8ba4-41d8e7515bb3","Type":"ContainerStarted","Data":"5dcf0e455ef92bda18caef4ac18be5bef6476dd78d6263b835456f8e88058db0"} Jan 23 17:08:20 crc kubenswrapper[4606]: I0123 17:08:20.356685 4606 generic.go:334] "Generic (PLEG): container finished" podID="684acded-8efb-47ea-8bb4-0212874a6826" containerID="466d37fbe90586d252e7bd578c0be303dda5135f62c42334e4bde960f467509b" exitCode=0 Jan 23 17:08:20 crc kubenswrapper[4606]: I0123 17:08:20.356737 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"684acded-8efb-47ea-8bb4-0212874a6826","Type":"ContainerDied","Data":"466d37fbe90586d252e7bd578c0be303dda5135f62c42334e4bde960f467509b"} Jan 23 17:08:20 crc kubenswrapper[4606]: I0123 17:08:20.358256 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tmz28" 
event={"ID":"222efa1d-6b61-49a8-8b6e-ce7276718806","Type":"ContainerStarted","Data":"15b9c4e75ba1448c47f7aad7e296902b78659c8298415092be5d995c11ee4898"} Jan 23 17:08:20 crc kubenswrapper[4606]: I0123 17:08:20.362954 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"14568d10-e2cf-4f22-9c72-62924ba927e8","Type":"ContainerStarted","Data":"7dc4695f8213a6cabba690d67c41ee8bc368043d0075d312ee5580e54e3b0d63"} Jan 23 17:08:20 crc kubenswrapper[4606]: I0123 17:08:20.378242 4606 generic.go:334] "Generic (PLEG): container finished" podID="b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030" containerID="a6e65752b83b140df6b55ee5695fa7f4c927ea48ab145d419d40b8ecd370fb0c" exitCode=0 Jan 23 17:08:20 crc kubenswrapper[4606]: I0123 17:08:20.379990 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4139c6c4-3725-4dbc-931c-8d8ccab53974" path="/var/lib/kubelet/pods/4139c6c4-3725-4dbc-931c-8d8ccab53974/volumes" Jan 23 17:08:20 crc kubenswrapper[4606]: I0123 17:08:20.381159 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030","Type":"ContainerDied","Data":"a6e65752b83b140df6b55ee5695fa7f4c927ea48ab145d419d40b8ecd370fb0c"} Jan 23 17:08:20 crc kubenswrapper[4606]: I0123 17:08:20.384285 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5sjkh" event={"ID":"7063b991-5662-4335-9e9e-73f057367708","Type":"ContainerStarted","Data":"02c5db9f5ee8e3a2b0bbba0313c5ed33f30d582f32e59c83e173b642a4859e24"} Jan 23 17:08:20 crc kubenswrapper[4606]: I0123 17:08:20.385041 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-5sjkh" Jan 23 17:08:20 crc kubenswrapper[4606]: I0123 17:08:20.388858 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"49d23d39-bb62-4d74-9053-c98f530d7208","Type":"ContainerStarted","Data":"31f3abbc2c2193a69e7124013af038e0056497537d4bb6b7a3178ba1370de204"} Jan 23 17:08:20 crc kubenswrapper[4606]: I0123 17:08:20.395461 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 23 17:08:20 crc kubenswrapper[4606]: I0123 17:08:20.432561 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-5sjkh" podStartSLOduration=11.801416571 podStartE2EDuration="24.432536926s" podCreationTimestamp="2026-01-23 17:07:56 +0000 UTC" firstStartedPulling="2026-01-23 17:08:06.688343711 +0000 UTC m=+920.688536107" lastFinishedPulling="2026-01-23 17:08:19.319464056 +0000 UTC m=+933.319656462" observedRunningTime="2026-01-23 17:08:20.416932894 +0000 UTC m=+934.417125300" watchObservedRunningTime="2026-01-23 17:08:20.432536926 +0000 UTC m=+934.432729332" Jan 23 17:08:20 crc kubenswrapper[4606]: I0123 17:08:20.480633 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=15.229185848 podStartE2EDuration="28.480611452s" podCreationTimestamp="2026-01-23 17:07:52 +0000 UTC" firstStartedPulling="2026-01-23 17:08:06.680074319 +0000 UTC m=+920.680266725" lastFinishedPulling="2026-01-23 17:08:19.931499923 +0000 UTC m=+933.931692329" observedRunningTime="2026-01-23 17:08:20.470442103 +0000 UTC m=+934.470634509" watchObservedRunningTime="2026-01-23 17:08:20.480611452 +0000 UTC m=+934.480803848" Jan 23 17:08:20 crc kubenswrapper[4606]: I0123 17:08:20.565196 4606 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 23 17:08:21 crc kubenswrapper[4606]: I0123 17:08:21.423351 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"684acded-8efb-47ea-8bb4-0212874a6826","Type":"ContainerStarted","Data":"12d65ec89de143ddedba0dbde04e69a534f16a67857788a3634870ead402de77"} Jan 23 17:08:21 crc kubenswrapper[4606]: I0123 17:08:21.433563 4606 generic.go:334] "Generic (PLEG): container finished" podID="222efa1d-6b61-49a8-8b6e-ce7276718806" containerID="15b9c4e75ba1448c47f7aad7e296902b78659c8298415092be5d995c11ee4898" exitCode=0 Jan 23 17:08:21 crc kubenswrapper[4606]: I0123 17:08:21.434974 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tmz28" event={"ID":"222efa1d-6b61-49a8-8b6e-ce7276718806","Type":"ContainerDied","Data":"15b9c4e75ba1448c47f7aad7e296902b78659c8298415092be5d995c11ee4898"} Jan 23 17:08:21 crc kubenswrapper[4606]: I0123 17:08:21.449941 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030","Type":"ContainerStarted","Data":"553656d6757392466093adffe17c457f3fe0e8f83e0217a637d6cd976b1bfb6f"} Jan 23 17:08:21 crc kubenswrapper[4606]: I0123 17:08:21.453616 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=29.350847247 podStartE2EDuration="34.453597896s" podCreationTimestamp="2026-01-23 17:07:47 +0000 UTC" firstStartedPulling="2026-01-23 17:08:06.348008998 +0000 UTC m=+920.348201404" lastFinishedPulling="2026-01-23 17:08:11.450759627 +0000 UTC m=+925.450952053" observedRunningTime="2026-01-23 17:08:21.445377615 +0000 UTC m=+935.445570031" watchObservedRunningTime="2026-01-23 17:08:21.453597896 +0000 UTC m=+935.453790302" Jan 23 17:08:21 crc kubenswrapper[4606]: I0123 17:08:21.711580 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-666b6646f7-2cgtz" podUID="4139c6c4-3725-4dbc-931c-8d8ccab53974" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.91:5353: i/o timeout" Jan 23 17:08:22 crc kubenswrapper[4606]: I0123 17:08:22.467219 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tmz28" event={"ID":"222efa1d-6b61-49a8-8b6e-ce7276718806","Type":"ContainerStarted","Data":"d555c3367c743c092e91a008bc6a71b96582ed995c7169c3180f23753fabf8b7"} Jan 23 17:08:22 crc kubenswrapper[4606]: I0123 17:08:22.479417 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4mzqq" event={"ID":"bca3a129-8ee0-4470-a96b-6d4844a0d063","Type":"ContainerStarted","Data":"4d9cef0aaaac2469acefaccff1968512013bb76d241430d13daf872b9a84e5b7"} Jan 23 17:08:22 crc kubenswrapper[4606]: I0123 17:08:22.484389 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=29.852067321 podStartE2EDuration="34.484371644s" podCreationTimestamp="2026-01-23 17:07:48 +0000 UTC" firstStartedPulling="2026-01-23 17:08:06.836263829 +0000 UTC m=+920.836456235" lastFinishedPulling="2026-01-23 17:08:11.468568152 +0000 UTC m=+925.468760558" observedRunningTime="2026-01-23 17:08:21.505508206 +0000 UTC m=+935.505700612" watchObservedRunningTime="2026-01-23 17:08:22.484371644 +0000 UTC m=+936.484564050" Jan 23 17:08:22 crc kubenswrapper[4606]: I0123 17:08:22.492464 4606 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-7cb5889db5-gsfxc"] Jan 23 17:08:22 crc kubenswrapper[4606]: E0123 17:08:22.492822 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4139c6c4-3725-4dbc-931c-8d8ccab53974" containerName="init" Jan 23 17:08:22 crc kubenswrapper[4606]: I0123 17:08:22.492843 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="4139c6c4-3725-4dbc-931c-8d8ccab53974" containerName="init" Jan 23 17:08:22 crc kubenswrapper[4606]: E0123 17:08:22.492866 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4139c6c4-3725-4dbc-931c-8d8ccab53974" containerName="dnsmasq-dns" Jan 23 17:08:22 crc kubenswrapper[4606]: I0123 17:08:22.492874 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="4139c6c4-3725-4dbc-931c-8d8ccab53974" containerName="dnsmasq-dns" Jan 23 17:08:22 crc kubenswrapper[4606]: I0123 17:08:22.493024 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="4139c6c4-3725-4dbc-931c-8d8ccab53974" containerName="dnsmasq-dns" Jan 23 17:08:22 crc kubenswrapper[4606]: I0123 17:08:22.493852 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-gsfxc" Jan 23 17:08:22 crc kubenswrapper[4606]: I0123 17:08:22.519729 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-gsfxc"] Jan 23 17:08:22 crc kubenswrapper[4606]: I0123 17:08:22.592907 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/643935af-0e1e-4d4c-9ffe-f4d8674e0f20-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-gsfxc\" (UID: \"643935af-0e1e-4d4c-9ffe-f4d8674e0f20\") " pod="openstack/dnsmasq-dns-7cb5889db5-gsfxc" Jan 23 17:08:22 crc kubenswrapper[4606]: I0123 17:08:22.592996 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/643935af-0e1e-4d4c-9ffe-f4d8674e0f20-config\") pod \"dnsmasq-dns-7cb5889db5-gsfxc\" (UID: \"643935af-0e1e-4d4c-9ffe-f4d8674e0f20\") " pod="openstack/dnsmasq-dns-7cb5889db5-gsfxc" Jan 23 17:08:22 crc kubenswrapper[4606]: I0123 17:08:22.594576 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlrrm\" (UniqueName: \"kubernetes.io/projected/643935af-0e1e-4d4c-9ffe-f4d8674e0f20-kube-api-access-vlrrm\") pod \"dnsmasq-dns-7cb5889db5-gsfxc\" (UID: \"643935af-0e1e-4d4c-9ffe-f4d8674e0f20\") " pod="openstack/dnsmasq-dns-7cb5889db5-gsfxc" Jan 23 17:08:22 crc kubenswrapper[4606]: I0123 17:08:22.695646 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlrrm\" (UniqueName: \"kubernetes.io/projected/643935af-0e1e-4d4c-9ffe-f4d8674e0f20-kube-api-access-vlrrm\") pod \"dnsmasq-dns-7cb5889db5-gsfxc\" (UID: \"643935af-0e1e-4d4c-9ffe-f4d8674e0f20\") " pod="openstack/dnsmasq-dns-7cb5889db5-gsfxc" Jan 23 17:08:22 crc kubenswrapper[4606]: I0123 17:08:22.695728 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/643935af-0e1e-4d4c-9ffe-f4d8674e0f20-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-gsfxc\" (UID: \"643935af-0e1e-4d4c-9ffe-f4d8674e0f20\") " pod="openstack/dnsmasq-dns-7cb5889db5-gsfxc" Jan 23 17:08:22 crc kubenswrapper[4606]: I0123 17:08:22.695757 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/643935af-0e1e-4d4c-9ffe-f4d8674e0f20-config\") pod \"dnsmasq-dns-7cb5889db5-gsfxc\" (UID: \"643935af-0e1e-4d4c-9ffe-f4d8674e0f20\") " pod="openstack/dnsmasq-dns-7cb5889db5-gsfxc" Jan 23 17:08:22 crc kubenswrapper[4606]: I0123 17:08:22.697195 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/643935af-0e1e-4d4c-9ffe-f4d8674e0f20-config\") pod \"dnsmasq-dns-7cb5889db5-gsfxc\" (UID: \"643935af-0e1e-4d4c-9ffe-f4d8674e0f20\") " pod="openstack/dnsmasq-dns-7cb5889db5-gsfxc" Jan 23 17:08:22 crc kubenswrapper[4606]: I0123 17:08:22.697212 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/643935af-0e1e-4d4c-9ffe-f4d8674e0f20-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-gsfxc\" (UID: \"643935af-0e1e-4d4c-9ffe-f4d8674e0f20\") " pod="openstack/dnsmasq-dns-7cb5889db5-gsfxc" Jan 23 17:08:22 crc kubenswrapper[4606]: I0123 17:08:22.723409 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlrrm\" (UniqueName: \"kubernetes.io/projected/643935af-0e1e-4d4c-9ffe-f4d8674e0f20-kube-api-access-vlrrm\") pod \"dnsmasq-dns-7cb5889db5-gsfxc\" (UID: \"643935af-0e1e-4d4c-9ffe-f4d8674e0f20\") " pod="openstack/dnsmasq-dns-7cb5889db5-gsfxc" Jan 23 17:08:22 crc kubenswrapper[4606]: I0123 17:08:22.816935 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-gsfxc" Jan 23 17:08:23 crc kubenswrapper[4606]: I0123 17:08:23.299380 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-gsfxc"] Jan 23 17:08:23 crc kubenswrapper[4606]: W0123 17:08:23.305047 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod643935af_0e1e_4d4c_9ffe_f4d8674e0f20.slice/crio-4b0ef6de74a80cd02e72736451efe4e7852b60c91eef1e9a88e87df31253d8ea WatchSource:0}: Error finding container 4b0ef6de74a80cd02e72736451efe4e7852b60c91eef1e9a88e87df31253d8ea: Status 404 returned error can't find the container with id 4b0ef6de74a80cd02e72736451efe4e7852b60c91eef1e9a88e87df31253d8ea Jan 23 17:08:23 crc kubenswrapper[4606]: I0123 17:08:23.493409 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tmz28" event={"ID":"222efa1d-6b61-49a8-8b6e-ce7276718806","Type":"ContainerStarted","Data":"253e4ca8e648c285f30f7ce2e0bf61517332eb9f56766b894766681c2a6391ce"} Jan 23 17:08:23 crc kubenswrapper[4606]: I0123 17:08:23.494928 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-tmz28" Jan 23 17:08:23 crc kubenswrapper[4606]: I0123 17:08:23.494950 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-tmz28" Jan 23 17:08:23 crc kubenswrapper[4606]: I0123 17:08:23.503513 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-gsfxc" event={"ID":"643935af-0e1e-4d4c-9ffe-f4d8674e0f20","Type":"ContainerStarted","Data":"4b0ef6de74a80cd02e72736451efe4e7852b60c91eef1e9a88e87df31253d8ea"} Jan 23 17:08:23 crc kubenswrapper[4606]: I0123 17:08:23.513053 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-tmz28" podStartSLOduration=15.249965608 podStartE2EDuration="27.513035861s" podCreationTimestamp="2026-01-23 17:07:56 +0000 UTC" firstStartedPulling="2026-01-23 17:08:06.913314963 +0000 UTC 
m=+920.913507359" lastFinishedPulling="2026-01-23 17:08:19.176385206 +0000 UTC m=+933.176577612" observedRunningTime="2026-01-23 17:08:23.510754514 +0000 UTC m=+937.510946940" watchObservedRunningTime="2026-01-23 17:08:23.513035861 +0000 UTC m=+937.513228277" Jan 23 17:08:23 crc kubenswrapper[4606]: I0123 17:08:23.644698 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 23 17:08:23 crc kubenswrapper[4606]: I0123 17:08:23.677789 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 23 17:08:23 crc kubenswrapper[4606]: I0123 17:08:23.681361 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 23 17:08:23 crc kubenswrapper[4606]: I0123 17:08:23.682657 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-r67dr" Jan 23 17:08:23 crc kubenswrapper[4606]: I0123 17:08:23.683186 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 23 17:08:23 crc kubenswrapper[4606]: I0123 17:08:23.683291 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 23 17:08:23 crc kubenswrapper[4606]: I0123 17:08:23.695741 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 23 17:08:23 crc kubenswrapper[4606]: I0123 17:08:23.811713 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/06f54b44-852a-4700-ac26-e9d89a545f28-etc-swift\") pod \"swift-storage-0\" (UID: \"06f54b44-852a-4700-ac26-e9d89a545f28\") " pod="openstack/swift-storage-0" Jan 23 17:08:23 crc kubenswrapper[4606]: I0123 17:08:23.811759 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/06f54b44-852a-4700-ac26-e9d89a545f28-lock\") pod \"swift-storage-0\" (UID: \"06f54b44-852a-4700-ac26-e9d89a545f28\") " pod="openstack/swift-storage-0" Jan 23 17:08:23 crc kubenswrapper[4606]: I0123 17:08:23.811822 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt578\" (UniqueName: \"kubernetes.io/projected/06f54b44-852a-4700-ac26-e9d89a545f28-kube-api-access-kt578\") pod \"swift-storage-0\" (UID: \"06f54b44-852a-4700-ac26-e9d89a545f28\") " pod="openstack/swift-storage-0" Jan 23 17:08:23 crc kubenswrapper[4606]: I0123 17:08:23.811857 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/06f54b44-852a-4700-ac26-e9d89a545f28-cache\") pod \"swift-storage-0\" (UID: \"06f54b44-852a-4700-ac26-e9d89a545f28\") " pod="openstack/swift-storage-0" Jan 23 17:08:23 crc kubenswrapper[4606]: I0123 17:08:23.811899 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06f54b44-852a-4700-ac26-e9d89a545f28-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"06f54b44-852a-4700-ac26-e9d89a545f28\") " pod="openstack/swift-storage-0" Jan 23 17:08:23 crc kubenswrapper[4606]: I0123 17:08:23.811921 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" 
(UID: \"06f54b44-852a-4700-ac26-e9d89a545f28\") " pod="openstack/swift-storage-0" Jan 23 17:08:23 crc kubenswrapper[4606]: I0123 17:08:23.913646 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/06f54b44-852a-4700-ac26-e9d89a545f28-etc-swift\") pod \"swift-storage-0\" (UID: \"06f54b44-852a-4700-ac26-e9d89a545f28\") " pod="openstack/swift-storage-0" Jan 23 17:08:23 crc kubenswrapper[4606]: E0123 17:08:23.913917 4606 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 23 17:08:23 crc kubenswrapper[4606]: E0123 17:08:23.914029 4606 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 23 17:08:23 crc kubenswrapper[4606]: E0123 17:08:23.914097 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/06f54b44-852a-4700-ac26-e9d89a545f28-etc-swift podName:06f54b44-852a-4700-ac26-e9d89a545f28 nodeName:}" failed. No retries permitted until 2026-01-23 17:08:24.414071817 +0000 UTC m=+938.414264263 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/06f54b44-852a-4700-ac26-e9d89a545f28-etc-swift") pod "swift-storage-0" (UID: "06f54b44-852a-4700-ac26-e9d89a545f28") : configmap "swift-ring-files" not found Jan 23 17:08:23 crc kubenswrapper[4606]: I0123 17:08:23.913990 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/06f54b44-852a-4700-ac26-e9d89a545f28-lock\") pod \"swift-storage-0\" (UID: \"06f54b44-852a-4700-ac26-e9d89a545f28\") " pod="openstack/swift-storage-0" Jan 23 17:08:23 crc kubenswrapper[4606]: I0123 17:08:23.914335 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt578\" (UniqueName: \"kubernetes.io/projected/06f54b44-852a-4700-ac26-e9d89a545f28-kube-api-access-kt578\") pod \"swift-storage-0\" (UID: \"06f54b44-852a-4700-ac26-e9d89a545f28\") " pod="openstack/swift-storage-0" Jan 23 17:08:23 crc kubenswrapper[4606]: I0123 17:08:23.914414 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/06f54b44-852a-4700-ac26-e9d89a545f28-cache\") pod \"swift-storage-0\" (UID: \"06f54b44-852a-4700-ac26-e9d89a545f28\") " pod="openstack/swift-storage-0" Jan 23 17:08:23 crc kubenswrapper[4606]: I0123 17:08:23.914515 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06f54b44-852a-4700-ac26-e9d89a545f28-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"06f54b44-852a-4700-ac26-e9d89a545f28\") " pod="openstack/swift-storage-0" Jan 23 17:08:23 crc kubenswrapper[4606]: I0123 17:08:23.914549 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"06f54b44-852a-4700-ac26-e9d89a545f28\") " pod="openstack/swift-storage-0" Jan 23 17:08:23 crc kubenswrapper[4606]: I0123 17:08:23.914796 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/06f54b44-852a-4700-ac26-e9d89a545f28-lock\") pod \"swift-storage-0\" (UID: \"06f54b44-852a-4700-ac26-e9d89a545f28\") " 
pod="openstack/swift-storage-0" Jan 23 17:08:23 crc kubenswrapper[4606]: I0123 17:08:23.914973 4606 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"06f54b44-852a-4700-ac26-e9d89a545f28\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/swift-storage-0" Jan 23 17:08:23 crc kubenswrapper[4606]: I0123 17:08:23.915092 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/06f54b44-852a-4700-ac26-e9d89a545f28-cache\") pod \"swift-storage-0\" (UID: \"06f54b44-852a-4700-ac26-e9d89a545f28\") " pod="openstack/swift-storage-0" Jan 23 17:08:23 crc kubenswrapper[4606]: I0123 17:08:23.924263 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06f54b44-852a-4700-ac26-e9d89a545f28-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"06f54b44-852a-4700-ac26-e9d89a545f28\") " pod="openstack/swift-storage-0" Jan 23 17:08:23 crc kubenswrapper[4606]: I0123 17:08:23.942158 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt578\" (UniqueName: \"kubernetes.io/projected/06f54b44-852a-4700-ac26-e9d89a545f28-kube-api-access-kt578\") pod \"swift-storage-0\" (UID: \"06f54b44-852a-4700-ac26-e9d89a545f28\") " pod="openstack/swift-storage-0" Jan 23 17:08:23 crc kubenswrapper[4606]: I0123 17:08:23.948310 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"06f54b44-852a-4700-ac26-e9d89a545f28\") " pod="openstack/swift-storage-0" Jan 23 17:08:24 crc kubenswrapper[4606]: I0123 17:08:24.421346 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/06f54b44-852a-4700-ac26-e9d89a545f28-etc-swift\") pod \"swift-storage-0\" (UID: \"06f54b44-852a-4700-ac26-e9d89a545f28\") " pod="openstack/swift-storage-0" Jan 23 17:08:24 crc kubenswrapper[4606]: E0123 17:08:24.421543 4606 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 23 17:08:24 crc kubenswrapper[4606]: E0123 17:08:24.421565 4606 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 23 17:08:24 crc kubenswrapper[4606]: E0123 17:08:24.421635 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/06f54b44-852a-4700-ac26-e9d89a545f28-etc-swift podName:06f54b44-852a-4700-ac26-e9d89a545f28 nodeName:}" failed. No retries permitted until 2026-01-23 17:08:25.42161193 +0000 UTC m=+939.421804346 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/06f54b44-852a-4700-ac26-e9d89a545f28-etc-swift") pod "swift-storage-0" (UID: "06f54b44-852a-4700-ac26-e9d89a545f28") : configmap "swift-ring-files" not found
Jan 23 17:08:24 crc kubenswrapper[4606]: I0123 17:08:24.511255 4606 generic.go:334] "Generic (PLEG): container finished" podID="bca3a129-8ee0-4470-a96b-6d4844a0d063" containerID="4d9cef0aaaac2469acefaccff1968512013bb76d241430d13daf872b9a84e5b7" exitCode=0
Jan 23 17:08:24 crc kubenswrapper[4606]: I0123 17:08:24.511356 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4mzqq" event={"ID":"bca3a129-8ee0-4470-a96b-6d4844a0d063","Type":"ContainerDied","Data":"4d9cef0aaaac2469acefaccff1968512013bb76d241430d13daf872b9a84e5b7"}
Jan 23 17:08:25 crc kubenswrapper[4606]: I0123 17:08:25.437494 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/06f54b44-852a-4700-ac26-e9d89a545f28-etc-swift\") pod \"swift-storage-0\" (UID: \"06f54b44-852a-4700-ac26-e9d89a545f28\") " pod="openstack/swift-storage-0"
Jan 23 17:08:25 crc kubenswrapper[4606]: E0123 17:08:25.437727 4606 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 23 17:08:25 crc kubenswrapper[4606]: E0123 17:08:25.438030 4606 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
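The generic.go:334 entry a few lines up is the pod lifecycle event generator (PLEG) noticing during relist that a container exited; the paired kubelet.go:2453 line is the sync loop consuming the resulting event. The printed event carries just a pod UID, an event type, and the container ID as payload. A hypothetical stand-in for that shape, for illustration, not the kubelet's own type:

```go
package main

import "fmt"

// PodLifecycleEvent mirrors the fields printed in the SyncLoop (PLEG) lines
// above (event={"ID":...,"Type":"ContainerDied","Data":...}); a hypothetical
// stand-in, not the kubelet's definition.
type PodLifecycleEvent struct {
	ID   string // pod UID
	Type string // e.g. "ContainerStarted", "ContainerDied"
	Data any    // container ID for container-scoped events
}

func main() {
	// The ContainerDied event for redhat-operators-4mzqq as logged above.
	ev := PodLifecycleEvent{
		ID:   "bca3a129-8ee0-4470-a96b-6d4844a0d063",
		Type: "ContainerDied",
		Data: "4d9cef0aaaac2469acefaccff1968512013bb76d241430d13daf872b9a84e5b7",
	}
	fmt.Printf("event=%+v\n", ev)
}
```

The ContainerStarted events later in this log take the same shape, with only Type and Data changed.
Jan 23 17:08:25 crc kubenswrapper[4606]: E0123 17:08:25.438094 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/06f54b44-852a-4700-ac26-e9d89a545f28-etc-swift podName:06f54b44-852a-4700-ac26-e9d89a545f28 nodeName:}" failed. No retries permitted until 2026-01-23 17:08:27.438071357 +0000 UTC m=+941.438263803 (durationBeforeRetry 2s).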
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/06f54b44-852a-4700-ac26-e9d89a545f28-etc-swift") pod "swift-storage-0" (UID: "06f54b44-852a-4700-ac26-e9d89a545f28") : configmap "swift-ring-files" not found
Jan 23 17:08:26 crc kubenswrapper[4606]: I0123 17:08:26.530438 4606 generic.go:334] "Generic (PLEG): container finished" podID="643935af-0e1e-4d4c-9ffe-f4d8674e0f20" containerID="998f7b3faf0278adce835b838c61cd688e913ea5f7187613347b092b43a04b10" exitCode=0
Jan 23 17:08:26 crc kubenswrapper[4606]: I0123 17:08:26.530497 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-gsfxc" event={"ID":"643935af-0e1e-4d4c-9ffe-f4d8674e0f20","Type":"ContainerDied","Data":"998f7b3faf0278adce835b838c61cd688e913ea5f7187613347b092b43a04b10"}
Jan 23 17:08:27 crc kubenswrapper[4606]: E0123 17:08:27.175053 4606 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.22:34106->38.102.83.22:38857: write tcp 38.102.83.22:34106->38.102.83.22:38857: write: broken pipe
Jan 23 17:08:27 crc kubenswrapper[4606]: E0123 17:08:27.470194 4606 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 23 17:08:27 crc kubenswrapper[4606]: E0123 17:08:27.470220 4606 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 23 17:08:27 crc kubenswrapper[4606]: E0123 17:08:27.470256 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/06f54b44-852a-4700-ac26-e9d89a545f28-etc-swift podName:06f54b44-852a-4700-ac26-e9d89a545f28 nodeName:}" failed. No retries permitted until 2026-01-23 17:08:31.470243294 +0000 UTC m=+945.470435690 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/06f54b44-852a-4700-ac26-e9d89a545f28-etc-swift") pod "swift-storage-0" (UID: "06f54b44-852a-4700-ac26-e9d89a545f28") : configmap "swift-ring-files" not found
Jan 23 17:08:27 crc kubenswrapper[4606]: I0123 17:08:27.470531 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/06f54b44-852a-4700-ac26-e9d89a545f28-etc-swift\") pod \"swift-storage-0\" (UID: \"06f54b44-852a-4700-ac26-e9d89a545f28\") " pod="openstack/swift-storage-0"
Jan 23 17:08:27 crc kubenswrapper[4606]: I0123 17:08:27.577932 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-5h5kn"]
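Across these retries the durationBeforeRetry for the etc-swift operation doubles: 500ms, 1s, 2s, now 4s, and 8s on the next attempt below. A minimal sketch of that doubling-with-cap policy; the 2m cap here is an assumption for illustration, not kubelet's exact constant:

```go
package main

import (
	"fmt"
	"time"
)

// nextBackoff doubles the retry delay after each failure, capped at max.
// Sketch of the pattern visible in the nestedpendingoperations entries above;
// the cap is illustrative, not the kubelet's actual value.
func nextBackoff(current, max time.Duration) time.Duration {
	next := current * 2
	if next > max {
		return max
	}
	return next
}

func main() {
	delay := 500 * time.Millisecond
	for i := 0; i < 5; i++ {
		fmt.Printf("retry %d: no retries permitted for %s\n", i+1, delay)
		delay = nextBackoff(delay, 2*time.Minute)
	}
	// Prints 500ms, 1s, 2s, 4s, 8s — matching durationBeforeRetry in this log.
}
```
Jan 23 17:08:27 crc kubenswrapper[4606]: I0123 17:08:27.581571 4606 util.go:30] "No sandbox for pod can be found.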
Need to start a new one" pod="openstack/swift-ring-rebalance-5h5kn" Jan 23 17:08:27 crc kubenswrapper[4606]: I0123 17:08:27.583105 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 23 17:08:27 crc kubenswrapper[4606]: I0123 17:08:27.587750 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 23 17:08:27 crc kubenswrapper[4606]: I0123 17:08:27.587908 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 23 17:08:27 crc kubenswrapper[4606]: I0123 17:08:27.601429 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-5h5kn"] Jan 23 17:08:27 crc kubenswrapper[4606]: I0123 17:08:27.673259 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f26de21c-1da5-42d4-9dcd-0e6415109f94-swiftconf\") pod \"swift-ring-rebalance-5h5kn\" (UID: \"f26de21c-1da5-42d4-9dcd-0e6415109f94\") " pod="openstack/swift-ring-rebalance-5h5kn" Jan 23 17:08:27 crc kubenswrapper[4606]: I0123 17:08:27.673323 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26de21c-1da5-42d4-9dcd-0e6415109f94-combined-ca-bundle\") pod \"swift-ring-rebalance-5h5kn\" (UID: \"f26de21c-1da5-42d4-9dcd-0e6415109f94\") " pod="openstack/swift-ring-rebalance-5h5kn" Jan 23 17:08:27 crc kubenswrapper[4606]: I0123 17:08:27.673348 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f26de21c-1da5-42d4-9dcd-0e6415109f94-ring-data-devices\") pod \"swift-ring-rebalance-5h5kn\" (UID: \"f26de21c-1da5-42d4-9dcd-0e6415109f94\") " pod="openstack/swift-ring-rebalance-5h5kn" Jan 23 17:08:27 crc kubenswrapper[4606]: I0123 17:08:27.673391 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gclkv\" (UniqueName: \"kubernetes.io/projected/f26de21c-1da5-42d4-9dcd-0e6415109f94-kube-api-access-gclkv\") pod \"swift-ring-rebalance-5h5kn\" (UID: \"f26de21c-1da5-42d4-9dcd-0e6415109f94\") " pod="openstack/swift-ring-rebalance-5h5kn" Jan 23 17:08:27 crc kubenswrapper[4606]: I0123 17:08:27.673426 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f26de21c-1da5-42d4-9dcd-0e6415109f94-scripts\") pod \"swift-ring-rebalance-5h5kn\" (UID: \"f26de21c-1da5-42d4-9dcd-0e6415109f94\") " pod="openstack/swift-ring-rebalance-5h5kn" Jan 23 17:08:27 crc kubenswrapper[4606]: I0123 17:08:27.673472 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f26de21c-1da5-42d4-9dcd-0e6415109f94-dispersionconf\") pod \"swift-ring-rebalance-5h5kn\" (UID: \"f26de21c-1da5-42d4-9dcd-0e6415109f94\") " pod="openstack/swift-ring-rebalance-5h5kn" Jan 23 17:08:27 crc kubenswrapper[4606]: I0123 17:08:27.673504 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f26de21c-1da5-42d4-9dcd-0e6415109f94-etc-swift\") pod \"swift-ring-rebalance-5h5kn\" (UID: \"f26de21c-1da5-42d4-9dcd-0e6415109f94\") " pod="openstack/swift-ring-rebalance-5h5kn" Jan 23 
17:08:27 crc kubenswrapper[4606]: I0123 17:08:27.774868 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f26de21c-1da5-42d4-9dcd-0e6415109f94-ring-data-devices\") pod \"swift-ring-rebalance-5h5kn\" (UID: \"f26de21c-1da5-42d4-9dcd-0e6415109f94\") " pod="openstack/swift-ring-rebalance-5h5kn" Jan 23 17:08:27 crc kubenswrapper[4606]: I0123 17:08:27.775660 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gclkv\" (UniqueName: \"kubernetes.io/projected/f26de21c-1da5-42d4-9dcd-0e6415109f94-kube-api-access-gclkv\") pod \"swift-ring-rebalance-5h5kn\" (UID: \"f26de21c-1da5-42d4-9dcd-0e6415109f94\") " pod="openstack/swift-ring-rebalance-5h5kn" Jan 23 17:08:27 crc kubenswrapper[4606]: I0123 17:08:27.775730 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f26de21c-1da5-42d4-9dcd-0e6415109f94-scripts\") pod \"swift-ring-rebalance-5h5kn\" (UID: \"f26de21c-1da5-42d4-9dcd-0e6415109f94\") " pod="openstack/swift-ring-rebalance-5h5kn" Jan 23 17:08:27 crc kubenswrapper[4606]: I0123 17:08:27.775806 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f26de21c-1da5-42d4-9dcd-0e6415109f94-dispersionconf\") pod \"swift-ring-rebalance-5h5kn\" (UID: \"f26de21c-1da5-42d4-9dcd-0e6415109f94\") " pod="openstack/swift-ring-rebalance-5h5kn" Jan 23 17:08:27 crc kubenswrapper[4606]: I0123 17:08:27.775849 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f26de21c-1da5-42d4-9dcd-0e6415109f94-etc-swift\") pod \"swift-ring-rebalance-5h5kn\" (UID: \"f26de21c-1da5-42d4-9dcd-0e6415109f94\") " pod="openstack/swift-ring-rebalance-5h5kn" Jan 23 17:08:27 crc kubenswrapper[4606]: I0123 17:08:27.775905 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f26de21c-1da5-42d4-9dcd-0e6415109f94-swiftconf\") pod \"swift-ring-rebalance-5h5kn\" (UID: \"f26de21c-1da5-42d4-9dcd-0e6415109f94\") " pod="openstack/swift-ring-rebalance-5h5kn" Jan 23 17:08:27 crc kubenswrapper[4606]: I0123 17:08:27.775953 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26de21c-1da5-42d4-9dcd-0e6415109f94-combined-ca-bundle\") pod \"swift-ring-rebalance-5h5kn\" (UID: \"f26de21c-1da5-42d4-9dcd-0e6415109f94\") " pod="openstack/swift-ring-rebalance-5h5kn" Jan 23 17:08:27 crc kubenswrapper[4606]: I0123 17:08:27.776711 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f26de21c-1da5-42d4-9dcd-0e6415109f94-ring-data-devices\") pod \"swift-ring-rebalance-5h5kn\" (UID: \"f26de21c-1da5-42d4-9dcd-0e6415109f94\") " pod="openstack/swift-ring-rebalance-5h5kn" Jan 23 17:08:27 crc kubenswrapper[4606]: I0123 17:08:27.776843 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f26de21c-1da5-42d4-9dcd-0e6415109f94-etc-swift\") pod \"swift-ring-rebalance-5h5kn\" (UID: \"f26de21c-1da5-42d4-9dcd-0e6415109f94\") " pod="openstack/swift-ring-rebalance-5h5kn" Jan 23 17:08:27 crc kubenswrapper[4606]: I0123 17:08:27.777161 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f26de21c-1da5-42d4-9dcd-0e6415109f94-scripts\") pod \"swift-ring-rebalance-5h5kn\" (UID: \"f26de21c-1da5-42d4-9dcd-0e6415109f94\") " pod="openstack/swift-ring-rebalance-5h5kn" Jan 23 17:08:27 crc kubenswrapper[4606]: I0123 17:08:27.779944 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f26de21c-1da5-42d4-9dcd-0e6415109f94-swiftconf\") pod \"swift-ring-rebalance-5h5kn\" (UID: \"f26de21c-1da5-42d4-9dcd-0e6415109f94\") " pod="openstack/swift-ring-rebalance-5h5kn" Jan 23 17:08:27 crc kubenswrapper[4606]: I0123 17:08:27.780853 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26de21c-1da5-42d4-9dcd-0e6415109f94-combined-ca-bundle\") pod \"swift-ring-rebalance-5h5kn\" (UID: \"f26de21c-1da5-42d4-9dcd-0e6415109f94\") " pod="openstack/swift-ring-rebalance-5h5kn" Jan 23 17:08:27 crc kubenswrapper[4606]: I0123 17:08:27.793246 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gclkv\" (UniqueName: \"kubernetes.io/projected/f26de21c-1da5-42d4-9dcd-0e6415109f94-kube-api-access-gclkv\") pod \"swift-ring-rebalance-5h5kn\" (UID: \"f26de21c-1da5-42d4-9dcd-0e6415109f94\") " pod="openstack/swift-ring-rebalance-5h5kn" Jan 23 17:08:27 crc kubenswrapper[4606]: I0123 17:08:27.793426 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f26de21c-1da5-42d4-9dcd-0e6415109f94-dispersionconf\") pod \"swift-ring-rebalance-5h5kn\" (UID: \"f26de21c-1da5-42d4-9dcd-0e6415109f94\") " pod="openstack/swift-ring-rebalance-5h5kn" Jan 23 17:08:27 crc kubenswrapper[4606]: I0123 17:08:27.937877 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-5h5kn"
Jan 23 17:08:28 crc kubenswrapper[4606]: I0123 17:08:28.932021 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Jan 23 17:08:28 crc kubenswrapper[4606]: I0123 17:08:28.932671 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Jan 23 17:08:28 crc kubenswrapper[4606]: I0123 17:08:28.994023 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-5h5kn"]
Jan 23 17:08:29 crc kubenswrapper[4606]: I0123 17:08:29.356447 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Jan 23 17:08:29 crc kubenswrapper[4606]: I0123 17:08:29.658957 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-gsfxc" event={"ID":"643935af-0e1e-4d4c-9ffe-f4d8674e0f20","Type":"ContainerStarted","Data":"fc4815cd234e4c1092de90a24836b4f728db175c93b67ece6674baa2242e86dd"}
Jan 23 17:08:29 crc kubenswrapper[4606]: I0123 17:08:29.659484 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cb5889db5-gsfxc"
Jan 23 17:08:29 crc kubenswrapper[4606]: I0123 17:08:29.684866 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-5h5kn" event={"ID":"f26de21c-1da5-42d4-9dcd-0e6415109f94","Type":"ContainerStarted","Data":"bc67454d5e888972dc252cfbc8299046125fa1f525a98c7c21ff40a1c5cb6c8f"}
Jan 23 17:08:29 crc kubenswrapper[4606]: I0123 17:08:29.690094 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cb5889db5-gsfxc" podStartSLOduration=7.690079459 podStartE2EDuration="7.690079459s" podCreationTimestamp="2026-01-23 17:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:08:29.685122789 +0000 UTC m=+943.685315195" watchObservedRunningTime="2026-01-23 17:08:29.690079459 +0000 UTC m=+943.690271865"
Jan 23 17:08:29 crc kubenswrapper[4606]: I0123 17:08:29.726157 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4mzqq" event={"ID":"bca3a129-8ee0-4470-a96b-6d4844a0d063","Type":"ContainerStarted","Data":"9fa02fcb13bc86cf374223be1095f72aef2f05c0ff3add716e8038ce77bd7df5"}
Jan 23 17:08:29 crc kubenswrapper[4606]: I0123 17:08:29.741706 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"14568d10-e2cf-4f22-9c72-62924ba927e8","Type":"ContainerStarted","Data":"fb3ffd3ac7b83faf9adcaf903051e9e2a9eb6ccecf706b0fa63497e517b01072"}
Jan 23 17:08:29 crc kubenswrapper[4606]: I0123 17:08:29.757294 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4mzqq" podStartSLOduration=4.545175495 podStartE2EDuration="12.757275883s" podCreationTimestamp="2026-01-23 17:08:17 +0000 UTC" firstStartedPulling="2026-01-23 17:08:20.353902363 +0000 UTC m=+934.354094769" lastFinishedPulling="2026-01-23 17:08:28.566002751 +0000 UTC m=+942.566195157" observedRunningTime="2026-01-23 17:08:29.755871298 +0000 UTC m=+943.756063714" watchObservedRunningTime="2026-01-23 17:08:29.757275883 +0000 UTC m=+943.757468289"
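The pod_startup_latency_tracker entries encode a simple relationship: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from it, which is why pods that pulled nothing (firstStartedPulling at the zero time, as for dnsmasq-dns-7cb5889db5-gsfxc above) report the two as equal. Reproducing the redhat-operators-4mzqq numbers from the entry above — field semantics inferred from the logged values, not quoted from kubelet source:

```go
package main

import (
	"fmt"
	"time"
)

// layout matches the timestamp format in the log lines above.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Values copied from the redhat-operators-4mzqq entry above.
	created := mustParse("2026-01-23 17:08:17 +0000 UTC")
	firstPull := mustParse("2026-01-23 17:08:20.353902363 +0000 UTC")
	lastPull := mustParse("2026-01-23 17:08:28.566002751 +0000 UTC")
	observed := mustParse("2026-01-23 17:08:29.757275883 +0000 UTC")

	e2e := observed.Sub(created)         // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // minus the image-pull window

	fmt.Println(e2e) // 12.757275883s, matching podStartE2EDuration
	fmt.Println(slo) // 4.545175495s, matching podStartSLOduration
}
```
Jan 23 17:08:29 crc kubenswrapper[4606]: I0123 17:08:29.760833 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0"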
event={"ID":"0221a06d-cf70-4013-8ba4-41d8e7515bb3","Type":"ContainerStarted","Data":"ad529129930fecd6bc4ee6451d2e6aa2d6f585a396b7554cbdfeb0ff0c08579f"} Jan 23 17:08:29 crc kubenswrapper[4606]: I0123 17:08:29.791368 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=11.064010774 podStartE2EDuration="32.791350897s" podCreationTimestamp="2026-01-23 17:07:57 +0000 UTC" firstStartedPulling="2026-01-23 17:08:06.857748014 +0000 UTC m=+920.857940420" lastFinishedPulling="2026-01-23 17:08:28.585088127 +0000 UTC m=+942.585280543" observedRunningTime="2026-01-23 17:08:29.786037897 +0000 UTC m=+943.786230313" watchObservedRunningTime="2026-01-23 17:08:29.791350897 +0000 UTC m=+943.791543303" Jan 23 17:08:29 crc kubenswrapper[4606]: I0123 17:08:29.825913 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=13.090698746 podStartE2EDuration="34.825891091s" podCreationTimestamp="2026-01-23 17:07:55 +0000 UTC" firstStartedPulling="2026-01-23 17:08:06.830465157 +0000 UTC m=+920.830657563" lastFinishedPulling="2026-01-23 17:08:28.565657502 +0000 UTC m=+942.565849908" observedRunningTime="2026-01-23 17:08:29.823460822 +0000 UTC m=+943.823653228" watchObservedRunningTime="2026-01-23 17:08:29.825891091 +0000 UTC m=+943.826083497" Jan 23 17:08:29 crc kubenswrapper[4606]: I0123 17:08:29.989946 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.193319 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-9e2b-account-create-update-fqlvx"] Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.194767 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-9e2b-account-create-update-fqlvx" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.196554 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.258715 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.258970 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.259434 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-9e2b-account-create-update-fqlvx"] Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.303210 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-btl5s"] Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.304418 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-btl5s" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.326303 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-btl5s"] Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.330121 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d5e8555-29bb-44bc-a6ea-57cb81e8df4d-operator-scripts\") pod \"keystone-9e2b-account-create-update-fqlvx\" (UID: \"9d5e8555-29bb-44bc-a6ea-57cb81e8df4d\") " pod="openstack/keystone-9e2b-account-create-update-fqlvx" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.330620 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4mjq\" (UniqueName: \"kubernetes.io/projected/9d5e8555-29bb-44bc-a6ea-57cb81e8df4d-kube-api-access-p4mjq\") pod \"keystone-9e2b-account-create-update-fqlvx\" (UID: \"9d5e8555-29bb-44bc-a6ea-57cb81e8df4d\") " pod="openstack/keystone-9e2b-account-create-update-fqlvx" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.385990 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.433283 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4mjq\" (UniqueName: \"kubernetes.io/projected/9d5e8555-29bb-44bc-a6ea-57cb81e8df4d-kube-api-access-p4mjq\") pod \"keystone-9e2b-account-create-update-fqlvx\" (UID: \"9d5e8555-29bb-44bc-a6ea-57cb81e8df4d\") " pod="openstack/keystone-9e2b-account-create-update-fqlvx" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.433369 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgnf2\" (UniqueName: \"kubernetes.io/projected/f37c097b-95e1-44fc-970a-a12f009ff813-kube-api-access-wgnf2\") pod \"keystone-db-create-btl5s\" (UID: \"f37c097b-95e1-44fc-970a-a12f009ff813\") " pod="openstack/keystone-db-create-btl5s" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.433396 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f37c097b-95e1-44fc-970a-a12f009ff813-operator-scripts\") pod \"keystone-db-create-btl5s\" (UID: \"f37c097b-95e1-44fc-970a-a12f009ff813\") " pod="openstack/keystone-db-create-btl5s" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.433426 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d5e8555-29bb-44bc-a6ea-57cb81e8df4d-operator-scripts\") pod \"keystone-9e2b-account-create-update-fqlvx\" (UID: \"9d5e8555-29bb-44bc-a6ea-57cb81e8df4d\") " pod="openstack/keystone-9e2b-account-create-update-fqlvx" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.434286 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d5e8555-29bb-44bc-a6ea-57cb81e8df4d-operator-scripts\") pod \"keystone-9e2b-account-create-update-fqlvx\" (UID: \"9d5e8555-29bb-44bc-a6ea-57cb81e8df4d\") " pod="openstack/keystone-9e2b-account-create-update-fqlvx" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.466209 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4mjq\" (UniqueName: 
\"kubernetes.io/projected/9d5e8555-29bb-44bc-a6ea-57cb81e8df4d-kube-api-access-p4mjq\") pod \"keystone-9e2b-account-create-update-fqlvx\" (UID: \"9d5e8555-29bb-44bc-a6ea-57cb81e8df4d\") " pod="openstack/keystone-9e2b-account-create-update-fqlvx" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.504801 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-457lj"] Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.506485 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-457lj" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.510509 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-9e2b-account-create-update-fqlvx" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.513619 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-457lj"] Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.534220 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgnf2\" (UniqueName: \"kubernetes.io/projected/f37c097b-95e1-44fc-970a-a12f009ff813-kube-api-access-wgnf2\") pod \"keystone-db-create-btl5s\" (UID: \"f37c097b-95e1-44fc-970a-a12f009ff813\") " pod="openstack/keystone-db-create-btl5s" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.536118 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f37c097b-95e1-44fc-970a-a12f009ff813-operator-scripts\") pod \"keystone-db-create-btl5s\" (UID: \"f37c097b-95e1-44fc-970a-a12f009ff813\") " pod="openstack/keystone-db-create-btl5s" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.537474 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f37c097b-95e1-44fc-970a-a12f009ff813-operator-scripts\") pod \"keystone-db-create-btl5s\" (UID: \"f37c097b-95e1-44fc-970a-a12f009ff813\") " pod="openstack/keystone-db-create-btl5s" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.589887 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgnf2\" (UniqueName: \"kubernetes.io/projected/f37c097b-95e1-44fc-970a-a12f009ff813-kube-api-access-wgnf2\") pod \"keystone-db-create-btl5s\" (UID: \"f37c097b-95e1-44fc-970a-a12f009ff813\") " pod="openstack/keystone-db-create-btl5s" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.612935 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5106-account-create-update-5hqcd"] Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.613938 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5106-account-create-update-5hqcd" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.621847 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.622308 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-btl5s" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.639434 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18804b6a-aa2a-48d6-a27d-16e4e55772c8-operator-scripts\") pod \"placement-db-create-457lj\" (UID: \"18804b6a-aa2a-48d6-a27d-16e4e55772c8\") " pod="openstack/placement-db-create-457lj" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.639503 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm4z2\" (UniqueName: \"kubernetes.io/projected/18804b6a-aa2a-48d6-a27d-16e4e55772c8-kube-api-access-wm4z2\") pod \"placement-db-create-457lj\" (UID: \"18804b6a-aa2a-48d6-a27d-16e4e55772c8\") " pod="openstack/placement-db-create-457lj" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.644087 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5106-account-create-update-5hqcd"] Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.744146 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18804b6a-aa2a-48d6-a27d-16e4e55772c8-operator-scripts\") pod \"placement-db-create-457lj\" (UID: \"18804b6a-aa2a-48d6-a27d-16e4e55772c8\") " pod="openstack/placement-db-create-457lj" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.744227 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm4z2\" (UniqueName: \"kubernetes.io/projected/18804b6a-aa2a-48d6-a27d-16e4e55772c8-kube-api-access-wm4z2\") pod \"placement-db-create-457lj\" (UID: \"18804b6a-aa2a-48d6-a27d-16e4e55772c8\") " pod="openstack/placement-db-create-457lj" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.744291 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc9bdbca-64c9-4e24-9ca0-79a79ee4d9bf-operator-scripts\") pod \"placement-5106-account-create-update-5hqcd\" (UID: \"dc9bdbca-64c9-4e24-9ca0-79a79ee4d9bf\") " pod="openstack/placement-5106-account-create-update-5hqcd" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.744338 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9wcn\" (UniqueName: \"kubernetes.io/projected/dc9bdbca-64c9-4e24-9ca0-79a79ee4d9bf-kube-api-access-s9wcn\") pod \"placement-5106-account-create-update-5hqcd\" (UID: \"dc9bdbca-64c9-4e24-9ca0-79a79ee4d9bf\") " pod="openstack/placement-5106-account-create-update-5hqcd" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.745124 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18804b6a-aa2a-48d6-a27d-16e4e55772c8-operator-scripts\") pod \"placement-db-create-457lj\" (UID: \"18804b6a-aa2a-48d6-a27d-16e4e55772c8\") " pod="openstack/placement-db-create-457lj" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.779171 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm4z2\" (UniqueName: \"kubernetes.io/projected/18804b6a-aa2a-48d6-a27d-16e4e55772c8-kube-api-access-wm4z2\") pod \"placement-db-create-457lj\" (UID: \"18804b6a-aa2a-48d6-a27d-16e4e55772c8\") " pod="openstack/placement-db-create-457lj" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 
17:08:30.790261 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.790435 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.822368 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-qqlz8"] Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.823553 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-qqlz8" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.831676 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-qqlz8"] Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.835043 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-457lj" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.839150 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.846651 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc9bdbca-64c9-4e24-9ca0-79a79ee4d9bf-operator-scripts\") pod \"placement-5106-account-create-update-5hqcd\" (UID: \"dc9bdbca-64c9-4e24-9ca0-79a79ee4d9bf\") " pod="openstack/placement-5106-account-create-update-5hqcd" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.846752 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9wcn\" (UniqueName: \"kubernetes.io/projected/dc9bdbca-64c9-4e24-9ca0-79a79ee4d9bf-kube-api-access-s9wcn\") pod \"placement-5106-account-create-update-5hqcd\" (UID: \"dc9bdbca-64c9-4e24-9ca0-79a79ee4d9bf\") " pod="openstack/placement-5106-account-create-update-5hqcd" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.847384 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc9bdbca-64c9-4e24-9ca0-79a79ee4d9bf-operator-scripts\") pod \"placement-5106-account-create-update-5hqcd\" (UID: \"dc9bdbca-64c9-4e24-9ca0-79a79ee4d9bf\") " pod="openstack/placement-5106-account-create-update-5hqcd" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.896216 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9wcn\" (UniqueName: \"kubernetes.io/projected/dc9bdbca-64c9-4e24-9ca0-79a79ee4d9bf-kube-api-access-s9wcn\") pod \"placement-5106-account-create-update-5hqcd\" (UID: \"dc9bdbca-64c9-4e24-9ca0-79a79ee4d9bf\") " pod="openstack/placement-5106-account-create-update-5hqcd" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.936530 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.938646 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-4427-account-create-update-qjx9w"] Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.939634 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-4427-account-create-update-qjx9w" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.941339 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.955340 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 23 17:08:30 crc kubenswrapper[4606]: I0123 17:08:30.974563 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4427-account-create-update-qjx9w"] Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.053988 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk86f\" (UniqueName: \"kubernetes.io/projected/4fa4f942-7d08-468f-b85d-da2c3e2f9538-kube-api-access-vk86f\") pod \"glance-db-create-qqlz8\" (UID: \"4fa4f942-7d08-468f-b85d-da2c3e2f9538\") " pod="openstack/glance-db-create-qqlz8" Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.054427 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fa4f942-7d08-468f-b85d-da2c3e2f9538-operator-scripts\") pod \"glance-db-create-qqlz8\" (UID: \"4fa4f942-7d08-468f-b85d-da2c3e2f9538\") " pod="openstack/glance-db-create-qqlz8" Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.072097 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5106-account-create-update-5hqcd" Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.096172 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.165460 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jg5r\" (UniqueName: \"kubernetes.io/projected/7f83912b-7bd4-49dd-9584-4ea66a2405e2-kube-api-access-9jg5r\") pod \"glance-4427-account-create-update-qjx9w\" (UID: \"7f83912b-7bd4-49dd-9584-4ea66a2405e2\") " pod="openstack/glance-4427-account-create-update-qjx9w" Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.165549 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk86f\" (UniqueName: \"kubernetes.io/projected/4fa4f942-7d08-468f-b85d-da2c3e2f9538-kube-api-access-vk86f\") pod \"glance-db-create-qqlz8\" (UID: \"4fa4f942-7d08-468f-b85d-da2c3e2f9538\") " pod="openstack/glance-db-create-qqlz8" Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.165688 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f83912b-7bd4-49dd-9584-4ea66a2405e2-operator-scripts\") pod \"glance-4427-account-create-update-qjx9w\" (UID: \"7f83912b-7bd4-49dd-9584-4ea66a2405e2\") " pod="openstack/glance-4427-account-create-update-qjx9w" Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.165748 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fa4f942-7d08-468f-b85d-da2c3e2f9538-operator-scripts\") pod \"glance-db-create-qqlz8\" (UID: \"4fa4f942-7d08-468f-b85d-da2c3e2f9538\") " pod="openstack/glance-db-create-qqlz8" Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.166453 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fa4f942-7d08-468f-b85d-da2c3e2f9538-operator-scripts\") pod \"glance-db-create-qqlz8\" (UID: \"4fa4f942-7d08-468f-b85d-da2c3e2f9538\") " pod="openstack/glance-db-create-qqlz8" Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.183924 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk86f\" (UniqueName: \"kubernetes.io/projected/4fa4f942-7d08-468f-b85d-da2c3e2f9538-kube-api-access-vk86f\") pod \"glance-db-create-qqlz8\" (UID: \"4fa4f942-7d08-468f-b85d-da2c3e2f9538\") " pod="openstack/glance-db-create-qqlz8" Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.192189 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-qqlz8" Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.266740 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jg5r\" (UniqueName: \"kubernetes.io/projected/7f83912b-7bd4-49dd-9584-4ea66a2405e2-kube-api-access-9jg5r\") pod \"glance-4427-account-create-update-qjx9w\" (UID: \"7f83912b-7bd4-49dd-9584-4ea66a2405e2\") " pod="openstack/glance-4427-account-create-update-qjx9w" Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.266824 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f83912b-7bd4-49dd-9584-4ea66a2405e2-operator-scripts\") pod \"glance-4427-account-create-update-qjx9w\" (UID: \"7f83912b-7bd4-49dd-9584-4ea66a2405e2\") " pod="openstack/glance-4427-account-create-update-qjx9w" Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.268237 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f83912b-7bd4-49dd-9584-4ea66a2405e2-operator-scripts\") pod \"glance-4427-account-create-update-qjx9w\" (UID: \"7f83912b-7bd4-49dd-9584-4ea66a2405e2\") " pod="openstack/glance-4427-account-create-update-qjx9w" Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.306169 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jg5r\" (UniqueName: \"kubernetes.io/projected/7f83912b-7bd4-49dd-9584-4ea66a2405e2-kube-api-access-9jg5r\") pod \"glance-4427-account-create-update-qjx9w\" (UID: \"7f83912b-7bd4-49dd-9584-4ea66a2405e2\") " pod="openstack/glance-4427-account-create-update-qjx9w" Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.335228 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-9e2b-account-create-update-fqlvx"] Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.469309 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-btl5s"] Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.471683 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/06f54b44-852a-4700-ac26-e9d89a545f28-etc-swift\") pod \"swift-storage-0\" (UID: \"06f54b44-852a-4700-ac26-e9d89a545f28\") " pod="openstack/swift-storage-0" Jan 23 17:08:31 crc kubenswrapper[4606]: E0123 17:08:31.476873 4606 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 23 17:08:31 crc kubenswrapper[4606]: E0123 17:08:31.476893 4606 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 23 
17:08:31 crc kubenswrapper[4606]: E0123 17:08:31.476928 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/06f54b44-852a-4700-ac26-e9d89a545f28-etc-swift podName:06f54b44-852a-4700-ac26-e9d89a545f28 nodeName:}" failed. No retries permitted until 2026-01-23 17:08:39.476913787 +0000 UTC m=+953.477106193 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/06f54b44-852a-4700-ac26-e9d89a545f28-etc-swift") pod "swift-storage-0" (UID: "06f54b44-852a-4700-ac26-e9d89a545f28") : configmap "swift-ring-files" not found Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.490426 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z5v24"] Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.503116 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z5v24" Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.525580 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z5v24"] Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.584416 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4427-account-create-update-qjx9w" Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.616726 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-457lj"] Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.681836 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87401ebd-0396-4ae2-9231-b2dbb28e308e-utilities\") pod \"certified-operators-z5v24\" (UID: \"87401ebd-0396-4ae2-9231-b2dbb28e308e\") " pod="openshift-marketplace/certified-operators-z5v24" Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.682155 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnj2q\" (UniqueName: \"kubernetes.io/projected/87401ebd-0396-4ae2-9231-b2dbb28e308e-kube-api-access-mnj2q\") pod \"certified-operators-z5v24\" (UID: \"87401ebd-0396-4ae2-9231-b2dbb28e308e\") " pod="openshift-marketplace/certified-operators-z5v24" Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.682294 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87401ebd-0396-4ae2-9231-b2dbb28e308e-catalog-content\") pod \"certified-operators-z5v24\" (UID: \"87401ebd-0396-4ae2-9231-b2dbb28e308e\") " pod="openshift-marketplace/certified-operators-z5v24" Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.758854 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5106-account-create-update-5hqcd"] Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.784071 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87401ebd-0396-4ae2-9231-b2dbb28e308e-utilities\") pod \"certified-operators-z5v24\" (UID: \"87401ebd-0396-4ae2-9231-b2dbb28e308e\") " pod="openshift-marketplace/certified-operators-z5v24" Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.784109 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnj2q\" (UniqueName: 
\"kubernetes.io/projected/87401ebd-0396-4ae2-9231-b2dbb28e308e-kube-api-access-mnj2q\") pod \"certified-operators-z5v24\" (UID: \"87401ebd-0396-4ae2-9231-b2dbb28e308e\") " pod="openshift-marketplace/certified-operators-z5v24" Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.784197 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87401ebd-0396-4ae2-9231-b2dbb28e308e-catalog-content\") pod \"certified-operators-z5v24\" (UID: \"87401ebd-0396-4ae2-9231-b2dbb28e308e\") " pod="openshift-marketplace/certified-operators-z5v24" Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.784608 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87401ebd-0396-4ae2-9231-b2dbb28e308e-utilities\") pod \"certified-operators-z5v24\" (UID: \"87401ebd-0396-4ae2-9231-b2dbb28e308e\") " pod="openshift-marketplace/certified-operators-z5v24" Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.784703 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87401ebd-0396-4ae2-9231-b2dbb28e308e-catalog-content\") pod \"certified-operators-z5v24\" (UID: \"87401ebd-0396-4ae2-9231-b2dbb28e308e\") " pod="openshift-marketplace/certified-operators-z5v24" Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.817606 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnj2q\" (UniqueName: \"kubernetes.io/projected/87401ebd-0396-4ae2-9231-b2dbb28e308e-kube-api-access-mnj2q\") pod \"certified-operators-z5v24\" (UID: \"87401ebd-0396-4ae2-9231-b2dbb28e308e\") " pod="openshift-marketplace/certified-operators-z5v24" Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.855853 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5106-account-create-update-5hqcd" event={"ID":"dc9bdbca-64c9-4e24-9ca0-79a79ee4d9bf","Type":"ContainerStarted","Data":"c8b606ccd39916b368538a48c7398ba6b94832de6dda59effc375ff9084651a4"} Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.868048 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-457lj" event={"ID":"18804b6a-aa2a-48d6-a27d-16e4e55772c8","Type":"ContainerStarted","Data":"08c40e28a4eca09fd44e88b0ba74c3fec79df9153a129011cdeb476199aa377c"} Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.879512 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9e2b-account-create-update-fqlvx" event={"ID":"9d5e8555-29bb-44bc-a6ea-57cb81e8df4d","Type":"ContainerStarted","Data":"c23acda3913c6695422060f03a2bab72b64a63f464400a07869227cdf6a3812c"} Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.879542 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9e2b-account-create-update-fqlvx" event={"ID":"9d5e8555-29bb-44bc-a6ea-57cb81e8df4d","Type":"ContainerStarted","Data":"4302ae8d6367792045fba5293f86da42cf636c2d75ad028e10e392c1e7f595ef"} Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.884418 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-btl5s" event={"ID":"f37c097b-95e1-44fc-970a-a12f009ff813","Type":"ContainerStarted","Data":"d4c65e081906767ea8d1ac46073f1583fa878b70363338247806325d39e84d19"} Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.885301 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ovsdbserver-nb-0" Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.918416 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-qqlz8"] Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.942762 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-9e2b-account-create-update-fqlvx" podStartSLOduration=1.942740138 podStartE2EDuration="1.942740138s" podCreationTimestamp="2026-01-23 17:08:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:08:31.925713112 +0000 UTC m=+945.925905518" watchObservedRunningTime="2026-01-23 17:08:31.942740138 +0000 UTC m=+945.942932554" Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.963856 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z5v24" Jan 23 17:08:31 crc kubenswrapper[4606]: I0123 17:08:31.979943 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-btl5s" podStartSLOduration=1.979921938 podStartE2EDuration="1.979921938s" podCreationTimestamp="2026-01-23 17:08:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:08:31.948285314 +0000 UTC m=+945.948477720" watchObservedRunningTime="2026-01-23 17:08:31.979921938 +0000 UTC m=+945.980114344" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.024922 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.025283 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.311941 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-gsfxc"] Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.312328 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cb5889db5-gsfxc" podUID="643935af-0e1e-4d4c-9ffe-f4d8674e0f20" containerName="dnsmasq-dns" containerID="cri-o://fc4815cd234e4c1092de90a24836b4f728db175c93b67ece6674baa2242e86dd" gracePeriod=10 Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.390625 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-wkkkd"] Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.392083 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-wkkkd" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.398092 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.405920 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-grzgc"] Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.410460 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-grzgc" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.419991 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-wkkkd"] Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.427002 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.478552 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-grzgc"] Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.500095 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4427-account-create-update-qjx9w"] Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.519861 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mjvx\" (UniqueName: \"kubernetes.io/projected/bc6dd1de-854d-4048-84de-c1a639ec37fe-kube-api-access-8mjvx\") pod \"ovn-controller-metrics-grzgc\" (UID: \"bc6dd1de-854d-4048-84de-c1a639ec37fe\") " pod="openstack/ovn-controller-metrics-grzgc" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.519996 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/278929af-12e7-492a-bbb9-c6abf5499349-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-wkkkd\" (UID: \"278929af-12e7-492a-bbb9-c6abf5499349\") " pod="openstack/dnsmasq-dns-74f6f696b9-wkkkd" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.520035 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc6dd1de-854d-4048-84de-c1a639ec37fe-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-grzgc\" (UID: \"bc6dd1de-854d-4048-84de-c1a639ec37fe\") " pod="openstack/ovn-controller-metrics-grzgc" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.520058 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc6dd1de-854d-4048-84de-c1a639ec37fe-config\") pod \"ovn-controller-metrics-grzgc\" (UID: \"bc6dd1de-854d-4048-84de-c1a639ec37fe\") " pod="openstack/ovn-controller-metrics-grzgc" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.520188 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/bc6dd1de-854d-4048-84de-c1a639ec37fe-ovn-rundir\") pod \"ovn-controller-metrics-grzgc\" (UID: \"bc6dd1de-854d-4048-84de-c1a639ec37fe\") " pod="openstack/ovn-controller-metrics-grzgc" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.520309 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/bc6dd1de-854d-4048-84de-c1a639ec37fe-ovs-rundir\") pod \"ovn-controller-metrics-grzgc\" (UID: \"bc6dd1de-854d-4048-84de-c1a639ec37fe\") " pod="openstack/ovn-controller-metrics-grzgc" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.520330 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxdp8\" (UniqueName: \"kubernetes.io/projected/278929af-12e7-492a-bbb9-c6abf5499349-kube-api-access-qxdp8\") pod \"dnsmasq-dns-74f6f696b9-wkkkd\" (UID: 
\"278929af-12e7-492a-bbb9-c6abf5499349\") " pod="openstack/dnsmasq-dns-74f6f696b9-wkkkd" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.520349 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/278929af-12e7-492a-bbb9-c6abf5499349-config\") pod \"dnsmasq-dns-74f6f696b9-wkkkd\" (UID: \"278929af-12e7-492a-bbb9-c6abf5499349\") " pod="openstack/dnsmasq-dns-74f6f696b9-wkkkd" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.520471 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc6dd1de-854d-4048-84de-c1a639ec37fe-combined-ca-bundle\") pod \"ovn-controller-metrics-grzgc\" (UID: \"bc6dd1de-854d-4048-84de-c1a639ec37fe\") " pod="openstack/ovn-controller-metrics-grzgc" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.520615 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/278929af-12e7-492a-bbb9-c6abf5499349-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-wkkkd\" (UID: \"278929af-12e7-492a-bbb9-c6abf5499349\") " pod="openstack/dnsmasq-dns-74f6f696b9-wkkkd" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.566989 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.626122 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/bc6dd1de-854d-4048-84de-c1a639ec37fe-ovs-rundir\") pod \"ovn-controller-metrics-grzgc\" (UID: \"bc6dd1de-854d-4048-84de-c1a639ec37fe\") " pod="openstack/ovn-controller-metrics-grzgc" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.626177 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxdp8\" (UniqueName: \"kubernetes.io/projected/278929af-12e7-492a-bbb9-c6abf5499349-kube-api-access-qxdp8\") pod \"dnsmasq-dns-74f6f696b9-wkkkd\" (UID: \"278929af-12e7-492a-bbb9-c6abf5499349\") " pod="openstack/dnsmasq-dns-74f6f696b9-wkkkd" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.626210 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/278929af-12e7-492a-bbb9-c6abf5499349-config\") pod \"dnsmasq-dns-74f6f696b9-wkkkd\" (UID: \"278929af-12e7-492a-bbb9-c6abf5499349\") " pod="openstack/dnsmasq-dns-74f6f696b9-wkkkd" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.626259 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc6dd1de-854d-4048-84de-c1a639ec37fe-combined-ca-bundle\") pod \"ovn-controller-metrics-grzgc\" (UID: \"bc6dd1de-854d-4048-84de-c1a639ec37fe\") " pod="openstack/ovn-controller-metrics-grzgc" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.626383 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/278929af-12e7-492a-bbb9-c6abf5499349-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-wkkkd\" (UID: \"278929af-12e7-492a-bbb9-c6abf5499349\") " pod="openstack/dnsmasq-dns-74f6f696b9-wkkkd" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.626432 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8mjvx\" (UniqueName: \"kubernetes.io/projected/bc6dd1de-854d-4048-84de-c1a639ec37fe-kube-api-access-8mjvx\") pod \"ovn-controller-metrics-grzgc\" (UID: \"bc6dd1de-854d-4048-84de-c1a639ec37fe\") " pod="openstack/ovn-controller-metrics-grzgc" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.626501 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/278929af-12e7-492a-bbb9-c6abf5499349-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-wkkkd\" (UID: \"278929af-12e7-492a-bbb9-c6abf5499349\") " pod="openstack/dnsmasq-dns-74f6f696b9-wkkkd" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.626545 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc6dd1de-854d-4048-84de-c1a639ec37fe-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-grzgc\" (UID: \"bc6dd1de-854d-4048-84de-c1a639ec37fe\") " pod="openstack/ovn-controller-metrics-grzgc" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.626799 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc6dd1de-854d-4048-84de-c1a639ec37fe-config\") pod \"ovn-controller-metrics-grzgc\" (UID: \"bc6dd1de-854d-4048-84de-c1a639ec37fe\") " pod="openstack/ovn-controller-metrics-grzgc" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.626900 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/bc6dd1de-854d-4048-84de-c1a639ec37fe-ovn-rundir\") pod \"ovn-controller-metrics-grzgc\" (UID: \"bc6dd1de-854d-4048-84de-c1a639ec37fe\") " pod="openstack/ovn-controller-metrics-grzgc" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.627670 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/bc6dd1de-854d-4048-84de-c1a639ec37fe-ovn-rundir\") pod \"ovn-controller-metrics-grzgc\" (UID: \"bc6dd1de-854d-4048-84de-c1a639ec37fe\") " pod="openstack/ovn-controller-metrics-grzgc" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.628845 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/278929af-12e7-492a-bbb9-c6abf5499349-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-wkkkd\" (UID: \"278929af-12e7-492a-bbb9-c6abf5499349\") " pod="openstack/dnsmasq-dns-74f6f696b9-wkkkd" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.630521 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc6dd1de-854d-4048-84de-c1a639ec37fe-config\") pod \"ovn-controller-metrics-grzgc\" (UID: \"bc6dd1de-854d-4048-84de-c1a639ec37fe\") " pod="openstack/ovn-controller-metrics-grzgc" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.630540 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/278929af-12e7-492a-bbb9-c6abf5499349-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-wkkkd\" (UID: \"278929af-12e7-492a-bbb9-c6abf5499349\") " pod="openstack/dnsmasq-dns-74f6f696b9-wkkkd" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.631014 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/bc6dd1de-854d-4048-84de-c1a639ec37fe-ovs-rundir\") pod 
\"ovn-controller-metrics-grzgc\" (UID: \"bc6dd1de-854d-4048-84de-c1a639ec37fe\") " pod="openstack/ovn-controller-metrics-grzgc" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.641664 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/278929af-12e7-492a-bbb9-c6abf5499349-config\") pod \"dnsmasq-dns-74f6f696b9-wkkkd\" (UID: \"278929af-12e7-492a-bbb9-c6abf5499349\") " pod="openstack/dnsmasq-dns-74f6f696b9-wkkkd" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.649846 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc6dd1de-854d-4048-84de-c1a639ec37fe-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-grzgc\" (UID: \"bc6dd1de-854d-4048-84de-c1a639ec37fe\") " pod="openstack/ovn-controller-metrics-grzgc" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.658076 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc6dd1de-854d-4048-84de-c1a639ec37fe-combined-ca-bundle\") pod \"ovn-controller-metrics-grzgc\" (UID: \"bc6dd1de-854d-4048-84de-c1a639ec37fe\") " pod="openstack/ovn-controller-metrics-grzgc" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.793117 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.797321 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxdp8\" (UniqueName: \"kubernetes.io/projected/278929af-12e7-492a-bbb9-c6abf5499349-kube-api-access-qxdp8\") pod \"dnsmasq-dns-74f6f696b9-wkkkd\" (UID: \"278929af-12e7-492a-bbb9-c6abf5499349\") " pod="openstack/dnsmasq-dns-74f6f696b9-wkkkd" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.817057 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.833257 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mjvx\" (UniqueName: \"kubernetes.io/projected/bc6dd1de-854d-4048-84de-c1a639ec37fe-kube-api-access-8mjvx\") pod \"ovn-controller-metrics-grzgc\" (UID: \"bc6dd1de-854d-4048-84de-c1a639ec37fe\") " pod="openstack/ovn-controller-metrics-grzgc" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.838990 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.862422 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.862608 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.862716 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.873610 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-lqw46" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.910363 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-wkkkd"] Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.911056 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-wkkkd" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.935640 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/965dca6f-f73d-4ee0-a511-91793448cc67-scripts\") pod \"ovn-northd-0\" (UID: \"965dca6f-f73d-4ee0-a511-91793448cc67\") " pod="openstack/ovn-northd-0" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.935728 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/965dca6f-f73d-4ee0-a511-91793448cc67-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"965dca6f-f73d-4ee0-a511-91793448cc67\") " pod="openstack/ovn-northd-0" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.935767 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/965dca6f-f73d-4ee0-a511-91793448cc67-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"965dca6f-f73d-4ee0-a511-91793448cc67\") " pod="openstack/ovn-northd-0" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.935783 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfzrh\" (UniqueName: \"kubernetes.io/projected/965dca6f-f73d-4ee0-a511-91793448cc67-kube-api-access-vfzrh\") pod \"ovn-northd-0\" (UID: \"965dca6f-f73d-4ee0-a511-91793448cc67\") " pod="openstack/ovn-northd-0" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.935807 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/965dca6f-f73d-4ee0-a511-91793448cc67-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"965dca6f-f73d-4ee0-a511-91793448cc67\") " pod="openstack/ovn-northd-0" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.935821 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/965dca6f-f73d-4ee0-a511-91793448cc67-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"965dca6f-f73d-4ee0-a511-91793448cc67\") " pod="openstack/ovn-northd-0" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.935846 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/965dca6f-f73d-4ee0-a511-91793448cc67-config\") pod \"ovn-northd-0\" (UID: \"965dca6f-f73d-4ee0-a511-91793448cc67\") " pod="openstack/ovn-northd-0" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.973781 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-2brrb"] Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.975253 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-2brrb" Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.979683 4606 generic.go:334] "Generic (PLEG): container finished" podID="643935af-0e1e-4d4c-9ffe-f4d8674e0f20" containerID="fc4815cd234e4c1092de90a24836b4f728db175c93b67ece6674baa2242e86dd" exitCode=0 Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.979748 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-gsfxc" event={"ID":"643935af-0e1e-4d4c-9ffe-f4d8674e0f20","Type":"ContainerDied","Data":"fc4815cd234e4c1092de90a24836b4f728db175c93b67ece6674baa2242e86dd"} Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.980852 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4427-account-create-update-qjx9w" event={"ID":"7f83912b-7bd4-49dd-9584-4ea66a2405e2","Type":"ContainerStarted","Data":"2d78f030b25a836b309eebfa4900837231ec3eb6c44f01f2969f6585163043ad"} Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.981760 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5106-account-create-update-5hqcd" event={"ID":"dc9bdbca-64c9-4e24-9ca0-79a79ee4d9bf","Type":"ContainerStarted","Data":"bb7f06b01e7e5e09d8e4e53028c4353da5e2342cb9885ad66f52c65d0d270548"} Jan 23 17:08:32 crc kubenswrapper[4606]: I0123 17:08:32.990211 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-457lj" event={"ID":"18804b6a-aa2a-48d6-a27d-16e4e55772c8","Type":"ContainerStarted","Data":"b9deb28f85c4331045a754c0dc7ac2295114339c3e68181ecb457655349478af"} Jan 23 17:08:33 crc kubenswrapper[4606]: I0123 17:08:32.997013 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 23 17:08:33 crc kubenswrapper[4606]: I0123 17:08:33.008034 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-2brrb"] Jan 23 17:08:33 crc kubenswrapper[4606]: I0123 17:08:33.019879 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-qqlz8" event={"ID":"4fa4f942-7d08-468f-b85d-da2c3e2f9538","Type":"ContainerStarted","Data":"2b60c3c4f24f299b7be028bd22f8cbea2236f4676d38231d02c3793dc73cae90"} Jan 23 17:08:33 crc kubenswrapper[4606]: I0123 17:08:33.037179 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zspbt\" (UniqueName: \"kubernetes.io/projected/d6a71de6-abb9-41ea-84ac-37aba42f4743-kube-api-access-zspbt\") pod \"dnsmasq-dns-698758b865-2brrb\" (UID: \"d6a71de6-abb9-41ea-84ac-37aba42f4743\") " pod="openstack/dnsmasq-dns-698758b865-2brrb" Jan 23 17:08:33 crc kubenswrapper[4606]: I0123 17:08:33.037234 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/965dca6f-f73d-4ee0-a511-91793448cc67-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"965dca6f-f73d-4ee0-a511-91793448cc67\") " pod="openstack/ovn-northd-0" Jan 23 17:08:33 crc kubenswrapper[4606]: I0123 17:08:33.037275 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfzrh\" (UniqueName: \"kubernetes.io/projected/965dca6f-f73d-4ee0-a511-91793448cc67-kube-api-access-vfzrh\") pod \"ovn-northd-0\" (UID: \"965dca6f-f73d-4ee0-a511-91793448cc67\") " pod="openstack/ovn-northd-0" Jan 23 17:08:33 crc kubenswrapper[4606]: I0123 17:08:33.037296 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/965dca6f-f73d-4ee0-a511-91793448cc67-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"965dca6f-f73d-4ee0-a511-91793448cc67\") " pod="openstack/ovn-northd-0" Jan 23 17:08:33 crc kubenswrapper[4606]: I0123 17:08:33.037320 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6a71de6-abb9-41ea-84ac-37aba42f4743-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-2brrb\" (UID: \"d6a71de6-abb9-41ea-84ac-37aba42f4743\") " pod="openstack/dnsmasq-dns-698758b865-2brrb" Jan 23 17:08:33 crc kubenswrapper[4606]: I0123 17:08:33.037334 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/965dca6f-f73d-4ee0-a511-91793448cc67-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"965dca6f-f73d-4ee0-a511-91793448cc67\") " pod="openstack/ovn-northd-0" Jan 23 17:08:33 crc kubenswrapper[4606]: I0123 17:08:33.037349 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/965dca6f-f73d-4ee0-a511-91793448cc67-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"965dca6f-f73d-4ee0-a511-91793448cc67\") " pod="openstack/ovn-northd-0" Jan 23 17:08:33 crc kubenswrapper[4606]: I0123 17:08:33.037379 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/965dca6f-f73d-4ee0-a511-91793448cc67-config\") pod \"ovn-northd-0\" (UID: \"965dca6f-f73d-4ee0-a511-91793448cc67\") " pod="openstack/ovn-northd-0" Jan 23 17:08:33 crc kubenswrapper[4606]: I0123 17:08:33.037436 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6a71de6-abb9-41ea-84ac-37aba42f4743-config\") pod \"dnsmasq-dns-698758b865-2brrb\" (UID: \"d6a71de6-abb9-41ea-84ac-37aba42f4743\") " pod="openstack/dnsmasq-dns-698758b865-2brrb" Jan 23 17:08:33 crc kubenswrapper[4606]: I0123 17:08:33.037458 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/965dca6f-f73d-4ee0-a511-91793448cc67-scripts\") pod \"ovn-northd-0\" (UID: \"965dca6f-f73d-4ee0-a511-91793448cc67\") " pod="openstack/ovn-northd-0" Jan 23 17:08:33 crc kubenswrapper[4606]: I0123 17:08:33.037481 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6a71de6-abb9-41ea-84ac-37aba42f4743-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-2brrb\" (UID: \"d6a71de6-abb9-41ea-84ac-37aba42f4743\") " pod="openstack/dnsmasq-dns-698758b865-2brrb" Jan 23 17:08:33 crc kubenswrapper[4606]: I0123 17:08:33.037500 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6a71de6-abb9-41ea-84ac-37aba42f4743-dns-svc\") pod \"dnsmasq-dns-698758b865-2brrb\" (UID: \"d6a71de6-abb9-41ea-84ac-37aba42f4743\") " pod="openstack/dnsmasq-dns-698758b865-2brrb" Jan 23 17:08:33 crc kubenswrapper[4606]: I0123 17:08:33.038894 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/965dca6f-f73d-4ee0-a511-91793448cc67-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"965dca6f-f73d-4ee0-a511-91793448cc67\") " pod="openstack/ovn-northd-0" 
Jan 23 17:08:33 crc kubenswrapper[4606]: I0123 17:08:33.049154 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/965dca6f-f73d-4ee0-a511-91793448cc67-config\") pod \"ovn-northd-0\" (UID: \"965dca6f-f73d-4ee0-a511-91793448cc67\") " pod="openstack/ovn-northd-0" Jan 23 17:08:33 crc kubenswrapper[4606]: I0123 17:08:33.050677 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/965dca6f-f73d-4ee0-a511-91793448cc67-scripts\") pod \"ovn-northd-0\" (UID: \"965dca6f-f73d-4ee0-a511-91793448cc67\") " pod="openstack/ovn-northd-0" Jan 23 17:08:33 crc kubenswrapper[4606]: I0123 17:08:33.051276 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-btl5s" event={"ID":"f37c097b-95e1-44fc-970a-a12f009ff813","Type":"ContainerStarted","Data":"e0cf457c58f3eb3910d1d31809dfc8d57ff9d3352906e28f3c4812f48b094a81"} Jan 23 17:08:33 crc kubenswrapper[4606]: I0123 17:08:33.051733 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/965dca6f-f73d-4ee0-a511-91793448cc67-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"965dca6f-f73d-4ee0-a511-91793448cc67\") " pod="openstack/ovn-northd-0" Jan 23 17:08:33 crc kubenswrapper[4606]: I0123 17:08:33.070675 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/965dca6f-f73d-4ee0-a511-91793448cc67-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"965dca6f-f73d-4ee0-a511-91793448cc67\") " pod="openstack/ovn-northd-0" Jan 23 17:08:33 crc kubenswrapper[4606]: I0123 17:08:33.079401 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/965dca6f-f73d-4ee0-a511-91793448cc67-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"965dca6f-f73d-4ee0-a511-91793448cc67\") " pod="openstack/ovn-northd-0" Jan 23 17:08:33 crc kubenswrapper[4606]: I0123 17:08:33.079828 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-grzgc" Jan 23 17:08:33 crc kubenswrapper[4606]: I0123 17:08:33.109736 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfzrh\" (UniqueName: \"kubernetes.io/projected/965dca6f-f73d-4ee0-a511-91793448cc67-kube-api-access-vfzrh\") pod \"ovn-northd-0\" (UID: \"965dca6f-f73d-4ee0-a511-91793448cc67\") " pod="openstack/ovn-northd-0" Jan 23 17:08:33 crc kubenswrapper[4606]: I0123 17:08:33.149615 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zspbt\" (UniqueName: \"kubernetes.io/projected/d6a71de6-abb9-41ea-84ac-37aba42f4743-kube-api-access-zspbt\") pod \"dnsmasq-dns-698758b865-2brrb\" (UID: \"d6a71de6-abb9-41ea-84ac-37aba42f4743\") " pod="openstack/dnsmasq-dns-698758b865-2brrb" Jan 23 17:08:33 crc kubenswrapper[4606]: I0123 17:08:33.149731 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6a71de6-abb9-41ea-84ac-37aba42f4743-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-2brrb\" (UID: \"d6a71de6-abb9-41ea-84ac-37aba42f4743\") " pod="openstack/dnsmasq-dns-698758b865-2brrb" Jan 23 17:08:33 crc kubenswrapper[4606]: I0123 17:08:33.149796 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6a71de6-abb9-41ea-84ac-37aba42f4743-config\") pod \"dnsmasq-dns-698758b865-2brrb\" (UID: \"d6a71de6-abb9-41ea-84ac-37aba42f4743\") " pod="openstack/dnsmasq-dns-698758b865-2brrb" Jan 23 17:08:33 crc kubenswrapper[4606]: I0123 17:08:33.149832 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6a71de6-abb9-41ea-84ac-37aba42f4743-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-2brrb\" (UID: \"d6a71de6-abb9-41ea-84ac-37aba42f4743\") " pod="openstack/dnsmasq-dns-698758b865-2brrb" Jan 23 17:08:33 crc kubenswrapper[4606]: I0123 17:08:33.149869 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6a71de6-abb9-41ea-84ac-37aba42f4743-dns-svc\") pod \"dnsmasq-dns-698758b865-2brrb\" (UID: \"d6a71de6-abb9-41ea-84ac-37aba42f4743\") " pod="openstack/dnsmasq-dns-698758b865-2brrb" Jan 23 17:08:33 crc kubenswrapper[4606]: I0123 17:08:33.153045 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6a71de6-abb9-41ea-84ac-37aba42f4743-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-2brrb\" (UID: \"d6a71de6-abb9-41ea-84ac-37aba42f4743\") " pod="openstack/dnsmasq-dns-698758b865-2brrb" Jan 23 17:08:33 crc kubenswrapper[4606]: I0123 17:08:33.154231 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6a71de6-abb9-41ea-84ac-37aba42f4743-dns-svc\") pod \"dnsmasq-dns-698758b865-2brrb\" (UID: \"d6a71de6-abb9-41ea-84ac-37aba42f4743\") " pod="openstack/dnsmasq-dns-698758b865-2brrb" Jan 23 17:08:33 crc kubenswrapper[4606]: I0123 17:08:33.155533 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6a71de6-abb9-41ea-84ac-37aba42f4743-config\") pod \"dnsmasq-dns-698758b865-2brrb\" (UID: \"d6a71de6-abb9-41ea-84ac-37aba42f4743\") " pod="openstack/dnsmasq-dns-698758b865-2brrb" Jan 23 17:08:33 crc kubenswrapper[4606]: I0123 17:08:33.162152 4606 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6a71de6-abb9-41ea-84ac-37aba42f4743-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-2brrb\" (UID: \"d6a71de6-abb9-41ea-84ac-37aba42f4743\") " pod="openstack/dnsmasq-dns-698758b865-2brrb" Jan 23 17:08:33 crc kubenswrapper[4606]: I0123 17:08:33.164782 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-457lj" podStartSLOduration=3.164764553 podStartE2EDuration="3.164764553s" podCreationTimestamp="2026-01-23 17:08:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:08:33.134937054 +0000 UTC m=+947.135129460" watchObservedRunningTime="2026-01-23 17:08:33.164764553 +0000 UTC m=+947.164956959" Jan 23 17:08:33 crc kubenswrapper[4606]: I0123 17:08:33.200977 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zspbt\" (UniqueName: \"kubernetes.io/projected/d6a71de6-abb9-41ea-84ac-37aba42f4743-kube-api-access-zspbt\") pod \"dnsmasq-dns-698758b865-2brrb\" (UID: \"d6a71de6-abb9-41ea-84ac-37aba42f4743\") " pod="openstack/dnsmasq-dns-698758b865-2brrb" Jan 23 17:08:33 crc kubenswrapper[4606]: I0123 17:08:33.214019 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5106-account-create-update-5hqcd" podStartSLOduration=3.213996797 podStartE2EDuration="3.213996797s" podCreationTimestamp="2026-01-23 17:08:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:08:33.197015092 +0000 UTC m=+947.197207508" watchObservedRunningTime="2026-01-23 17:08:33.213996797 +0000 UTC m=+947.214189203" Jan 23 17:08:33 crc kubenswrapper[4606]: I0123 17:08:33.218623 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 23 17:08:33 crc kubenswrapper[4606]: I0123 17:08:33.334105 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-2brrb" Jan 23 17:08:33 crc kubenswrapper[4606]: I0123 17:08:33.354470 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z5v24"] Jan 23 17:08:33 crc kubenswrapper[4606]: I0123 17:08:33.874153 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-wkkkd"] Jan 23 17:08:34 crc kubenswrapper[4606]: I0123 17:08:34.091766 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z5v24" event={"ID":"87401ebd-0396-4ae2-9231-b2dbb28e308e","Type":"ContainerStarted","Data":"63923ac91c7ce34130e726f6f7045ba25bd609d73232d813db91db8e07f72244"} Jan 23 17:08:34 crc kubenswrapper[4606]: I0123 17:08:34.098064 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4427-account-create-update-qjx9w" event={"ID":"7f83912b-7bd4-49dd-9584-4ea66a2405e2","Type":"ContainerStarted","Data":"3a2507c4212bf20231472ac983f6a351db9035bc564b6fdb674d5cea519a7d84"} Jan 23 17:08:34 crc kubenswrapper[4606]: I0123 17:08:34.104284 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-qqlz8" event={"ID":"4fa4f942-7d08-468f-b85d-da2c3e2f9538","Type":"ContainerStarted","Data":"df7eff4fb32434271a3179b69c570023bde89eabcf4fb741c3602715a1973513"} Jan 23 17:08:34 crc kubenswrapper[4606]: I0123 17:08:34.145119 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-qqlz8" podStartSLOduration=4.145102567 podStartE2EDuration="4.145102567s" podCreationTimestamp="2026-01-23 17:08:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:08:34.142473223 +0000 UTC m=+948.142665619" watchObservedRunningTime="2026-01-23 17:08:34.145102567 +0000 UTC m=+948.145294973" Jan 23 17:08:34 crc kubenswrapper[4606]: I0123 17:08:34.147193 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-4427-account-create-update-qjx9w" podStartSLOduration=4.147186339 podStartE2EDuration="4.147186339s" podCreationTimestamp="2026-01-23 17:08:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:08:34.122236818 +0000 UTC m=+948.122429224" watchObservedRunningTime="2026-01-23 17:08:34.147186339 +0000 UTC m=+948.147378745" Jan 23 17:08:34 crc kubenswrapper[4606]: I0123 17:08:34.383995 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-grzgc"] Jan 23 17:08:34 crc kubenswrapper[4606]: I0123 17:08:34.384994 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 23 17:08:35 crc kubenswrapper[4606]: I0123 17:08:35.122124 4606 generic.go:334] "Generic (PLEG): container finished" podID="7f83912b-7bd4-49dd-9584-4ea66a2405e2" containerID="3a2507c4212bf20231472ac983f6a351db9035bc564b6fdb674d5cea519a7d84" exitCode=0 Jan 23 17:08:35 crc kubenswrapper[4606]: I0123 17:08:35.122175 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4427-account-create-update-qjx9w" event={"ID":"7f83912b-7bd4-49dd-9584-4ea66a2405e2","Type":"ContainerDied","Data":"3a2507c4212bf20231472ac983f6a351db9035bc564b6fdb674d5cea519a7d84"} Jan 23 17:08:35 crc kubenswrapper[4606]: I0123 17:08:35.130138 4606 generic.go:334] "Generic (PLEG): container finished" 
podID="dc9bdbca-64c9-4e24-9ca0-79a79ee4d9bf" containerID="bb7f06b01e7e5e09d8e4e53028c4353da5e2342cb9885ad66f52c65d0d270548" exitCode=0 Jan 23 17:08:35 crc kubenswrapper[4606]: I0123 17:08:35.130228 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5106-account-create-update-5hqcd" event={"ID":"dc9bdbca-64c9-4e24-9ca0-79a79ee4d9bf","Type":"ContainerDied","Data":"bb7f06b01e7e5e09d8e4e53028c4353da5e2342cb9885ad66f52c65d0d270548"} Jan 23 17:08:35 crc kubenswrapper[4606]: I0123 17:08:35.132020 4606 generic.go:334] "Generic (PLEG): container finished" podID="18804b6a-aa2a-48d6-a27d-16e4e55772c8" containerID="b9deb28f85c4331045a754c0dc7ac2295114339c3e68181ecb457655349478af" exitCode=0 Jan 23 17:08:35 crc kubenswrapper[4606]: I0123 17:08:35.132071 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-457lj" event={"ID":"18804b6a-aa2a-48d6-a27d-16e4e55772c8","Type":"ContainerDied","Data":"b9deb28f85c4331045a754c0dc7ac2295114339c3e68181ecb457655349478af"} Jan 23 17:08:35 crc kubenswrapper[4606]: I0123 17:08:35.133403 4606 generic.go:334] "Generic (PLEG): container finished" podID="9d5e8555-29bb-44bc-a6ea-57cb81e8df4d" containerID="c23acda3913c6695422060f03a2bab72b64a63f464400a07869227cdf6a3812c" exitCode=0 Jan 23 17:08:35 crc kubenswrapper[4606]: I0123 17:08:35.133438 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9e2b-account-create-update-fqlvx" event={"ID":"9d5e8555-29bb-44bc-a6ea-57cb81e8df4d","Type":"ContainerDied","Data":"c23acda3913c6695422060f03a2bab72b64a63f464400a07869227cdf6a3812c"} Jan 23 17:08:35 crc kubenswrapper[4606]: I0123 17:08:35.135235 4606 generic.go:334] "Generic (PLEG): container finished" podID="4fa4f942-7d08-468f-b85d-da2c3e2f9538" containerID="df7eff4fb32434271a3179b69c570023bde89eabcf4fb741c3602715a1973513" exitCode=0 Jan 23 17:08:35 crc kubenswrapper[4606]: I0123 17:08:35.135278 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-qqlz8" event={"ID":"4fa4f942-7d08-468f-b85d-da2c3e2f9538","Type":"ContainerDied","Data":"df7eff4fb32434271a3179b69c570023bde89eabcf4fb741c3602715a1973513"} Jan 23 17:08:35 crc kubenswrapper[4606]: I0123 17:08:35.144216 4606 generic.go:334] "Generic (PLEG): container finished" podID="f37c097b-95e1-44fc-970a-a12f009ff813" containerID="e0cf457c58f3eb3910d1d31809dfc8d57ff9d3352906e28f3c4812f48b094a81" exitCode=0 Jan 23 17:08:35 crc kubenswrapper[4606]: I0123 17:08:35.144271 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-btl5s" event={"ID":"f37c097b-95e1-44fc-970a-a12f009ff813","Type":"ContainerDied","Data":"e0cf457c58f3eb3910d1d31809dfc8d57ff9d3352906e28f3c4812f48b094a81"} Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.366505 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4mzqq" Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.366862 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4mzqq" Jan 23 17:08:37 crc kubenswrapper[4606]: W0123 17:08:37.479310 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod965dca6f_f73d_4ee0_a511_91793448cc67.slice/crio-a78c44c872d9fd0349c886abca0d07195949ab1ba62fd4340ac2db4f7c4bd00c WatchSource:0}: Error finding container a78c44c872d9fd0349c886abca0d07195949ab1ba62fd4340ac2db4f7c4bd00c: Status 404 returned error can't 
find the container with id a78c44c872d9fd0349c886abca0d07195949ab1ba62fd4340ac2db4f7c4bd00c Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.579326 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-xksgt"] Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.581844 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xksgt" Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.585302 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.595524 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xksgt"] Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.760603 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ed41886-68c5-48a1-971f-279081891310-operator-scripts\") pod \"root-account-create-update-xksgt\" (UID: \"4ed41886-68c5-48a1-971f-279081891310\") " pod="openstack/root-account-create-update-xksgt" Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.760892 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhzf6\" (UniqueName: \"kubernetes.io/projected/4ed41886-68c5-48a1-971f-279081891310-kube-api-access-rhzf6\") pod \"root-account-create-update-xksgt\" (UID: \"4ed41886-68c5-48a1-971f-279081891310\") " pod="openstack/root-account-create-update-xksgt" Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.764358 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-gsfxc" Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.829423 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-qqlz8" Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.841509 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5106-account-create-update-5hqcd" Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.844107 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-btl5s" Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.858336 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-9e2b-account-create-update-fqlvx" Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.863049 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/643935af-0e1e-4d4c-9ffe-f4d8674e0f20-dns-svc\") pod \"643935af-0e1e-4d4c-9ffe-f4d8674e0f20\" (UID: \"643935af-0e1e-4d4c-9ffe-f4d8674e0f20\") " Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.863120 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlrrm\" (UniqueName: \"kubernetes.io/projected/643935af-0e1e-4d4c-9ffe-f4d8674e0f20-kube-api-access-vlrrm\") pod \"643935af-0e1e-4d4c-9ffe-f4d8674e0f20\" (UID: \"643935af-0e1e-4d4c-9ffe-f4d8674e0f20\") " Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.863328 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/643935af-0e1e-4d4c-9ffe-f4d8674e0f20-config\") pod \"643935af-0e1e-4d4c-9ffe-f4d8674e0f20\" (UID: \"643935af-0e1e-4d4c-9ffe-f4d8674e0f20\") " Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.863914 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ed41886-68c5-48a1-971f-279081891310-operator-scripts\") pod \"root-account-create-update-xksgt\" (UID: \"4ed41886-68c5-48a1-971f-279081891310\") " pod="openstack/root-account-create-update-xksgt" Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.863971 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhzf6\" (UniqueName: \"kubernetes.io/projected/4ed41886-68c5-48a1-971f-279081891310-kube-api-access-rhzf6\") pod \"root-account-create-update-xksgt\" (UID: \"4ed41886-68c5-48a1-971f-279081891310\") " pod="openstack/root-account-create-update-xksgt" Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.870653 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4427-account-create-update-qjx9w" Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.875269 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ed41886-68c5-48a1-971f-279081891310-operator-scripts\") pod \"root-account-create-update-xksgt\" (UID: \"4ed41886-68c5-48a1-971f-279081891310\") " pod="openstack/root-account-create-update-xksgt" Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.915442 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/643935af-0e1e-4d4c-9ffe-f4d8674e0f20-kube-api-access-vlrrm" (OuterVolumeSpecName: "kube-api-access-vlrrm") pod "643935af-0e1e-4d4c-9ffe-f4d8674e0f20" (UID: "643935af-0e1e-4d4c-9ffe-f4d8674e0f20"). InnerVolumeSpecName "kube-api-access-vlrrm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.924544 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhzf6\" (UniqueName: \"kubernetes.io/projected/4ed41886-68c5-48a1-971f-279081891310-kube-api-access-rhzf6\") pod \"root-account-create-update-xksgt\" (UID: \"4ed41886-68c5-48a1-971f-279081891310\") " pod="openstack/root-account-create-update-xksgt" Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.958139 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-457lj" Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.966964 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vk86f\" (UniqueName: \"kubernetes.io/projected/4fa4f942-7d08-468f-b85d-da2c3e2f9538-kube-api-access-vk86f\") pod \"4fa4f942-7d08-468f-b85d-da2c3e2f9538\" (UID: \"4fa4f942-7d08-468f-b85d-da2c3e2f9538\") " Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.967485 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f37c097b-95e1-44fc-970a-a12f009ff813-operator-scripts\") pod \"f37c097b-95e1-44fc-970a-a12f009ff813\" (UID: \"f37c097b-95e1-44fc-970a-a12f009ff813\") " Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.967536 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9wcn\" (UniqueName: \"kubernetes.io/projected/dc9bdbca-64c9-4e24-9ca0-79a79ee4d9bf-kube-api-access-s9wcn\") pod \"dc9bdbca-64c9-4e24-9ca0-79a79ee4d9bf\" (UID: \"dc9bdbca-64c9-4e24-9ca0-79a79ee4d9bf\") " Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.967656 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f83912b-7bd4-49dd-9584-4ea66a2405e2-operator-scripts\") pod \"7f83912b-7bd4-49dd-9584-4ea66a2405e2\" (UID: \"7f83912b-7bd4-49dd-9584-4ea66a2405e2\") " Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.967715 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgnf2\" (UniqueName: \"kubernetes.io/projected/f37c097b-95e1-44fc-970a-a12f009ff813-kube-api-access-wgnf2\") pod \"f37c097b-95e1-44fc-970a-a12f009ff813\" (UID: \"f37c097b-95e1-44fc-970a-a12f009ff813\") " Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.967773 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc9bdbca-64c9-4e24-9ca0-79a79ee4d9bf-operator-scripts\") pod \"dc9bdbca-64c9-4e24-9ca0-79a79ee4d9bf\" (UID: \"dc9bdbca-64c9-4e24-9ca0-79a79ee4d9bf\") " Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.967823 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fa4f942-7d08-468f-b85d-da2c3e2f9538-operator-scripts\") pod \"4fa4f942-7d08-468f-b85d-da2c3e2f9538\" (UID: \"4fa4f942-7d08-468f-b85d-da2c3e2f9538\") " Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.967868 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d5e8555-29bb-44bc-a6ea-57cb81e8df4d-operator-scripts\") pod \"9d5e8555-29bb-44bc-a6ea-57cb81e8df4d\" (UID: \"9d5e8555-29bb-44bc-a6ea-57cb81e8df4d\") " Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.967884 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jg5r\" (UniqueName: \"kubernetes.io/projected/7f83912b-7bd4-49dd-9584-4ea66a2405e2-kube-api-access-9jg5r\") pod \"7f83912b-7bd4-49dd-9584-4ea66a2405e2\" (UID: \"7f83912b-7bd4-49dd-9584-4ea66a2405e2\") " Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.967950 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4mjq\" (UniqueName: 
\"kubernetes.io/projected/9d5e8555-29bb-44bc-a6ea-57cb81e8df4d-kube-api-access-p4mjq\") pod \"9d5e8555-29bb-44bc-a6ea-57cb81e8df4d\" (UID: \"9d5e8555-29bb-44bc-a6ea-57cb81e8df4d\") " Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.968374 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f37c097b-95e1-44fc-970a-a12f009ff813-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f37c097b-95e1-44fc-970a-a12f009ff813" (UID: "f37c097b-95e1-44fc-970a-a12f009ff813"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.968638 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlrrm\" (UniqueName: \"kubernetes.io/projected/643935af-0e1e-4d4c-9ffe-f4d8674e0f20-kube-api-access-vlrrm\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.968653 4606 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f37c097b-95e1-44fc-970a-a12f009ff813-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.969003 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fa4f942-7d08-468f-b85d-da2c3e2f9538-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4fa4f942-7d08-468f-b85d-da2c3e2f9538" (UID: "4fa4f942-7d08-468f-b85d-da2c3e2f9538"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.969580 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d5e8555-29bb-44bc-a6ea-57cb81e8df4d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9d5e8555-29bb-44bc-a6ea-57cb81e8df4d" (UID: "9d5e8555-29bb-44bc-a6ea-57cb81e8df4d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.970269 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc9bdbca-64c9-4e24-9ca0-79a79ee4d9bf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dc9bdbca-64c9-4e24-9ca0-79a79ee4d9bf" (UID: "dc9bdbca-64c9-4e24-9ca0-79a79ee4d9bf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.971104 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f83912b-7bd4-49dd-9584-4ea66a2405e2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7f83912b-7bd4-49dd-9584-4ea66a2405e2" (UID: "7f83912b-7bd4-49dd-9584-4ea66a2405e2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.982225 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d5e8555-29bb-44bc-a6ea-57cb81e8df4d-kube-api-access-p4mjq" (OuterVolumeSpecName: "kube-api-access-p4mjq") pod "9d5e8555-29bb-44bc-a6ea-57cb81e8df4d" (UID: "9d5e8555-29bb-44bc-a6ea-57cb81e8df4d"). InnerVolumeSpecName "kube-api-access-p4mjq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.990843 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f83912b-7bd4-49dd-9584-4ea66a2405e2-kube-api-access-9jg5r" (OuterVolumeSpecName: "kube-api-access-9jg5r") pod "7f83912b-7bd4-49dd-9584-4ea66a2405e2" (UID: "7f83912b-7bd4-49dd-9584-4ea66a2405e2"). InnerVolumeSpecName "kube-api-access-9jg5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.993885 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fa4f942-7d08-468f-b85d-da2c3e2f9538-kube-api-access-vk86f" (OuterVolumeSpecName: "kube-api-access-vk86f") pod "4fa4f942-7d08-468f-b85d-da2c3e2f9538" (UID: "4fa4f942-7d08-468f-b85d-da2c3e2f9538"). InnerVolumeSpecName "kube-api-access-vk86f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.995701 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc9bdbca-64c9-4e24-9ca0-79a79ee4d9bf-kube-api-access-s9wcn" (OuterVolumeSpecName: "kube-api-access-s9wcn") pod "dc9bdbca-64c9-4e24-9ca0-79a79ee4d9bf" (UID: "dc9bdbca-64c9-4e24-9ca0-79a79ee4d9bf"). InnerVolumeSpecName "kube-api-access-s9wcn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:08:37 crc kubenswrapper[4606]: I0123 17:08:37.996240 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f37c097b-95e1-44fc-970a-a12f009ff813-kube-api-access-wgnf2" (OuterVolumeSpecName: "kube-api-access-wgnf2") pod "f37c097b-95e1-44fc-970a-a12f009ff813" (UID: "f37c097b-95e1-44fc-970a-a12f009ff813"). InnerVolumeSpecName "kube-api-access-wgnf2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.069454 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18804b6a-aa2a-48d6-a27d-16e4e55772c8-operator-scripts\") pod \"18804b6a-aa2a-48d6-a27d-16e4e55772c8\" (UID: \"18804b6a-aa2a-48d6-a27d-16e4e55772c8\") " Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.069703 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm4z2\" (UniqueName: \"kubernetes.io/projected/18804b6a-aa2a-48d6-a27d-16e4e55772c8-kube-api-access-wm4z2\") pod \"18804b6a-aa2a-48d6-a27d-16e4e55772c8\" (UID: \"18804b6a-aa2a-48d6-a27d-16e4e55772c8\") " Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.070157 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vk86f\" (UniqueName: \"kubernetes.io/projected/4fa4f942-7d08-468f-b85d-da2c3e2f9538-kube-api-access-vk86f\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.070174 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9wcn\" (UniqueName: \"kubernetes.io/projected/dc9bdbca-64c9-4e24-9ca0-79a79ee4d9bf-kube-api-access-s9wcn\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.070184 4606 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f83912b-7bd4-49dd-9584-4ea66a2405e2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.070192 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgnf2\" (UniqueName: \"kubernetes.io/projected/f37c097b-95e1-44fc-970a-a12f009ff813-kube-api-access-wgnf2\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.070203 4606 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc9bdbca-64c9-4e24-9ca0-79a79ee4d9bf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.070211 4606 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fa4f942-7d08-468f-b85d-da2c3e2f9538-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.070222 4606 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d5e8555-29bb-44bc-a6ea-57cb81e8df4d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.070229 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jg5r\" (UniqueName: \"kubernetes.io/projected/7f83912b-7bd4-49dd-9584-4ea66a2405e2-kube-api-access-9jg5r\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.070239 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4mjq\" (UniqueName: \"kubernetes.io/projected/9d5e8555-29bb-44bc-a6ea-57cb81e8df4d-kube-api-access-p4mjq\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.071174 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18804b6a-aa2a-48d6-a27d-16e4e55772c8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"18804b6a-aa2a-48d6-a27d-16e4e55772c8" (UID: "18804b6a-aa2a-48d6-a27d-16e4e55772c8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.074984 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18804b6a-aa2a-48d6-a27d-16e4e55772c8-kube-api-access-wm4z2" (OuterVolumeSpecName: "kube-api-access-wm4z2") pod "18804b6a-aa2a-48d6-a27d-16e4e55772c8" (UID: "18804b6a-aa2a-48d6-a27d-16e4e55772c8"). InnerVolumeSpecName "kube-api-access-wm4z2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.131312 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/643935af-0e1e-4d4c-9ffe-f4d8674e0f20-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "643935af-0e1e-4d4c-9ffe-f4d8674e0f20" (UID: "643935af-0e1e-4d4c-9ffe-f4d8674e0f20"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.144955 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/643935af-0e1e-4d4c-9ffe-f4d8674e0f20-config" (OuterVolumeSpecName: "config") pod "643935af-0e1e-4d4c-9ffe-f4d8674e0f20" (UID: "643935af-0e1e-4d4c-9ffe-f4d8674e0f20"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.171570 4606 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/643935af-0e1e-4d4c-9ffe-f4d8674e0f20-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.171618 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm4z2\" (UniqueName: \"kubernetes.io/projected/18804b6a-aa2a-48d6-a27d-16e4e55772c8-kube-api-access-wm4z2\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.171629 4606 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/643935af-0e1e-4d4c-9ffe-f4d8674e0f20-config\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.171639 4606 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18804b6a-aa2a-48d6-a27d-16e4e55772c8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.186715 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-457lj" event={"ID":"18804b6a-aa2a-48d6-a27d-16e4e55772c8","Type":"ContainerDied","Data":"08c40e28a4eca09fd44e88b0ba74c3fec79df9153a129011cdeb476199aa377c"} Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.186750 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08c40e28a4eca09fd44e88b0ba74c3fec79df9153a129011cdeb476199aa377c" Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.186803 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-457lj" Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.206716 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-2brrb"] Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.208080 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xksgt" Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.210977 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6f696b9-wkkkd" event={"ID":"278929af-12e7-492a-bbb9-c6abf5499349","Type":"ContainerStarted","Data":"62864eb12d93fc407d20700f7f41cddcd3ba15b91cd6341451be86f4786b03f8"} Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.211078 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6f696b9-wkkkd" event={"ID":"278929af-12e7-492a-bbb9-c6abf5499349","Type":"ContainerStarted","Data":"c3e4161a29acc40f68f9a7dbe62c1e77a17e86e86b5b7b9d21ed24d020ea1f8c"} Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.211235 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6f696b9-wkkkd" podUID="278929af-12e7-492a-bbb9-c6abf5499349" containerName="init" containerID="cri-o://62864eb12d93fc407d20700f7f41cddcd3ba15b91cd6341451be86f4786b03f8" gracePeriod=10 Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.220446 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9e2b-account-create-update-fqlvx" event={"ID":"9d5e8555-29bb-44bc-a6ea-57cb81e8df4d","Type":"ContainerDied","Data":"4302ae8d6367792045fba5293f86da42cf636c2d75ad028e10e392c1e7f595ef"} Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.220477 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4302ae8d6367792045fba5293f86da42cf636c2d75ad028e10e392c1e7f595ef" Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.220526 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-9e2b-account-create-update-fqlvx" Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.239092 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-qqlz8" event={"ID":"4fa4f942-7d08-468f-b85d-da2c3e2f9538","Type":"ContainerDied","Data":"2b60c3c4f24f299b7be028bd22f8cbea2236f4676d38231d02c3793dc73cae90"} Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.239437 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b60c3c4f24f299b7be028bd22f8cbea2236f4676d38231d02c3793dc73cae90" Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.239566 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-qqlz8" Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.256092 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-grzgc" event={"ID":"bc6dd1de-854d-4048-84de-c1a639ec37fe","Type":"ContainerStarted","Data":"513f11042e938be7a9ecb9d5abd343ab47847dbfaca74289dabedb80cc0132da"} Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.276781 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-gsfxc" event={"ID":"643935af-0e1e-4d4c-9ffe-f4d8674e0f20","Type":"ContainerDied","Data":"4b0ef6de74a80cd02e72736451efe4e7852b60c91eef1e9a88e87df31253d8ea"} Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.276830 4606 scope.go:117] "RemoveContainer" containerID="fc4815cd234e4c1092de90a24836b4f728db175c93b67ece6674baa2242e86dd" Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.276832 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-gsfxc" Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.282799 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"965dca6f-f73d-4ee0-a511-91793448cc67","Type":"ContainerStarted","Data":"a78c44c872d9fd0349c886abca0d07195949ab1ba62fd4340ac2db4f7c4bd00c"} Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.295906 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5106-account-create-update-5hqcd" event={"ID":"dc9bdbca-64c9-4e24-9ca0-79a79ee4d9bf","Type":"ContainerDied","Data":"c8b606ccd39916b368538a48c7398ba6b94832de6dda59effc375ff9084651a4"} Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.296020 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8b606ccd39916b368538a48c7398ba6b94832de6dda59effc375ff9084651a4" Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.296130 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5106-account-create-update-5hqcd" Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.301632 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-btl5s" event={"ID":"f37c097b-95e1-44fc-970a-a12f009ff813","Type":"ContainerDied","Data":"d4c65e081906767ea8d1ac46073f1583fa878b70363338247806325d39e84d19"} Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.301682 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4c65e081906767ea8d1ac46073f1583fa878b70363338247806325d39e84d19" Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.301814 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-btl5s" Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.315037 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-grzgc" podStartSLOduration=6.315014933 podStartE2EDuration="6.315014933s" podCreationTimestamp="2026-01-23 17:08:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:08:38.282128829 +0000 UTC m=+952.282321255" watchObservedRunningTime="2026-01-23 17:08:38.315014933 +0000 UTC m=+952.315207349" Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.317484 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z5v24" event={"ID":"87401ebd-0396-4ae2-9231-b2dbb28e308e","Type":"ContainerStarted","Data":"c7714d8c96081fabb3e6bd24ca88749febf5c2777e2b5c604fc2715501708804"} Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.321775 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4427-account-create-update-qjx9w" event={"ID":"7f83912b-7bd4-49dd-9584-4ea66a2405e2","Type":"ContainerDied","Data":"2d78f030b25a836b309eebfa4900837231ec3eb6c44f01f2969f6585163043ad"} Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.321823 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d78f030b25a836b309eebfa4900837231ec3eb6c44f01f2969f6585163043ad" Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.321887 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-4427-account-create-update-qjx9w" Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.365836 4606 scope.go:117] "RemoveContainer" containerID="998f7b3faf0278adce835b838c61cd688e913ea5f7187613347b092b43a04b10" Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.453712 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4mzqq" podUID="bca3a129-8ee0-4470-a96b-6d4844a0d063" containerName="registry-server" probeResult="failure" output=< Jan 23 17:08:38 crc kubenswrapper[4606]: timeout: failed to connect service ":50051" within 1s Jan 23 17:08:38 crc kubenswrapper[4606]: > Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.858477 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-gsfxc"] Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.881076 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-gsfxc"] Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.914650 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xksgt"] Jan 23 17:08:38 crc kubenswrapper[4606]: I0123 17:08:38.984097 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-wkkkd" Jan 23 17:08:39 crc kubenswrapper[4606]: I0123 17:08:39.109637 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/278929af-12e7-492a-bbb9-c6abf5499349-dns-svc\") pod \"278929af-12e7-492a-bbb9-c6abf5499349\" (UID: \"278929af-12e7-492a-bbb9-c6abf5499349\") " Jan 23 17:08:39 crc kubenswrapper[4606]: I0123 17:08:39.110027 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/278929af-12e7-492a-bbb9-c6abf5499349-ovsdbserver-nb\") pod \"278929af-12e7-492a-bbb9-c6abf5499349\" (UID: \"278929af-12e7-492a-bbb9-c6abf5499349\") " Jan 23 17:08:39 crc kubenswrapper[4606]: I0123 17:08:39.110071 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxdp8\" (UniqueName: \"kubernetes.io/projected/278929af-12e7-492a-bbb9-c6abf5499349-kube-api-access-qxdp8\") pod \"278929af-12e7-492a-bbb9-c6abf5499349\" (UID: \"278929af-12e7-492a-bbb9-c6abf5499349\") " Jan 23 17:08:39 crc kubenswrapper[4606]: I0123 17:08:39.110099 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/278929af-12e7-492a-bbb9-c6abf5499349-config\") pod \"278929af-12e7-492a-bbb9-c6abf5499349\" (UID: \"278929af-12e7-492a-bbb9-c6abf5499349\") " Jan 23 17:08:39 crc kubenswrapper[4606]: I0123 17:08:39.115544 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/278929af-12e7-492a-bbb9-c6abf5499349-kube-api-access-qxdp8" (OuterVolumeSpecName: "kube-api-access-qxdp8") pod "278929af-12e7-492a-bbb9-c6abf5499349" (UID: "278929af-12e7-492a-bbb9-c6abf5499349"). InnerVolumeSpecName "kube-api-access-qxdp8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:08:39 crc kubenswrapper[4606]: I0123 17:08:39.130854 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/278929af-12e7-492a-bbb9-c6abf5499349-config" (OuterVolumeSpecName: "config") pod "278929af-12e7-492a-bbb9-c6abf5499349" (UID: "278929af-12e7-492a-bbb9-c6abf5499349"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:08:39 crc kubenswrapper[4606]: I0123 17:08:39.146583 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/278929af-12e7-492a-bbb9-c6abf5499349-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "278929af-12e7-492a-bbb9-c6abf5499349" (UID: "278929af-12e7-492a-bbb9-c6abf5499349"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:08:39 crc kubenswrapper[4606]: I0123 17:08:39.184952 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/278929af-12e7-492a-bbb9-c6abf5499349-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "278929af-12e7-492a-bbb9-c6abf5499349" (UID: "278929af-12e7-492a-bbb9-c6abf5499349"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:08:39 crc kubenswrapper[4606]: I0123 17:08:39.212292 4606 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/278929af-12e7-492a-bbb9-c6abf5499349-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:39 crc kubenswrapper[4606]: I0123 17:08:39.212318 4606 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/278929af-12e7-492a-bbb9-c6abf5499349-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:39 crc kubenswrapper[4606]: I0123 17:08:39.212330 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxdp8\" (UniqueName: \"kubernetes.io/projected/278929af-12e7-492a-bbb9-c6abf5499349-kube-api-access-qxdp8\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:39 crc kubenswrapper[4606]: I0123 17:08:39.212340 4606 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/278929af-12e7-492a-bbb9-c6abf5499349-config\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:39 crc kubenswrapper[4606]: E0123 17:08:39.220935 4606 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb9493ec_ca68_4e7c_be90_eb6974f2f0b8.slice/crio-6e3c2049c1808305c7cda1d8688a4edcd56265f42ed8244a6aa2ea7f306f3d6b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf778ec6_6b7b_492e_948c_0314dc6b26ba.slice/crio-conmon-ca273f8425919426be81643ca0ad0cf561569e62c6c2eed8c0ce72dfda52d72c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6a71de6_abb9_41ea_84ac_37aba42f4743.slice/crio-conmon-06e9736e4e65e47f907f4fb56e3fa37662afa945fed57db4cf8058cd30f7f56d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf778ec6_6b7b_492e_948c_0314dc6b26ba.slice/crio-ca273f8425919426be81643ca0ad0cf561569e62c6c2eed8c0ce72dfda52d72c.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6a71de6_abb9_41ea_84ac_37aba42f4743.slice/crio-06e9736e4e65e47f907f4fb56e3fa37662afa945fed57db4cf8058cd30f7f56d.scope\": RecentStats: unable to find data in memory cache]" Jan 23 17:08:39 crc kubenswrapper[4606]: I0123 17:08:39.330967 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-5h5kn" event={"ID":"f26de21c-1da5-42d4-9dcd-0e6415109f94","Type":"ContainerStarted","Data":"359d52a2fca1616da952813ece6f3fe4c9244ec7895f724563b3d52e6626e1f0"} Jan 23 17:08:39 crc kubenswrapper[4606]: I0123 17:08:39.333734 4606 generic.go:334] "Generic (PLEG): container finished" podID="87401ebd-0396-4ae2-9231-b2dbb28e308e" containerID="c7714d8c96081fabb3e6bd24ca88749febf5c2777e2b5c604fc2715501708804" exitCode=0 Jan 23 17:08:39 crc kubenswrapper[4606]: I0123 17:08:39.333789 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z5v24" event={"ID":"87401ebd-0396-4ae2-9231-b2dbb28e308e","Type":"ContainerDied","Data":"c7714d8c96081fabb3e6bd24ca88749febf5c2777e2b5c604fc2715501708804"} Jan 23 17:08:39 crc kubenswrapper[4606]: I0123 17:08:39.335408 4606 generic.go:334] "Generic (PLEG): container finished" podID="d6a71de6-abb9-41ea-84ac-37aba42f4743" containerID="06e9736e4e65e47f907f4fb56e3fa37662afa945fed57db4cf8058cd30f7f56d" exitCode=0 Jan 23 17:08:39 crc kubenswrapper[4606]: I0123 17:08:39.335477 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-2brrb" event={"ID":"d6a71de6-abb9-41ea-84ac-37aba42f4743","Type":"ContainerDied","Data":"06e9736e4e65e47f907f4fb56e3fa37662afa945fed57db4cf8058cd30f7f56d"} Jan 23 17:08:39 crc kubenswrapper[4606]: I0123 17:08:39.335533 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-2brrb" event={"ID":"d6a71de6-abb9-41ea-84ac-37aba42f4743","Type":"ContainerStarted","Data":"59a812883bbb036319fbf5df0cde23a5d01cccf44033335b2cf4f0592b300cd4"} Jan 23 17:08:39 crc kubenswrapper[4606]: I0123 17:08:39.336709 4606 generic.go:334] "Generic (PLEG): container finished" podID="cf778ec6-6b7b-492e-948c-0314dc6b26ba" containerID="ca273f8425919426be81643ca0ad0cf561569e62c6c2eed8c0ce72dfda52d72c" exitCode=0 Jan 23 17:08:39 crc kubenswrapper[4606]: I0123 17:08:39.336769 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cf778ec6-6b7b-492e-948c-0314dc6b26ba","Type":"ContainerDied","Data":"ca273f8425919426be81643ca0ad0cf561569e62c6c2eed8c0ce72dfda52d72c"} Jan 23 17:08:39 crc kubenswrapper[4606]: I0123 17:08:39.341380 4606 generic.go:334] "Generic (PLEG): container finished" podID="cb9493ec-ca68-4e7c-be90-eb6974f2f0b8" containerID="6e3c2049c1808305c7cda1d8688a4edcd56265f42ed8244a6aa2ea7f306f3d6b" exitCode=0 Jan 23 17:08:39 crc kubenswrapper[4606]: I0123 17:08:39.341468 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8","Type":"ContainerDied","Data":"6e3c2049c1808305c7cda1d8688a4edcd56265f42ed8244a6aa2ea7f306f3d6b"} Jan 23 17:08:39 crc kubenswrapper[4606]: I0123 17:08:39.353492 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xksgt" event={"ID":"4ed41886-68c5-48a1-971f-279081891310","Type":"ContainerStarted","Data":"736cdaa555a5cc9faa3cb9b660727e9466e9f91be9a63c1da22acb0883ab9610"} Jan 23 17:08:39 crc kubenswrapper[4606]: I0123 17:08:39.353547 4606 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/root-account-create-update-xksgt" event={"ID":"4ed41886-68c5-48a1-971f-279081891310","Type":"ContainerStarted","Data":"dddc5128171ced4919582c9ec6c637a08b8883edc5b8dbe221505ba7491a1f50"} Jan 23 17:08:39 crc kubenswrapper[4606]: I0123 17:08:39.354970 4606 generic.go:334] "Generic (PLEG): container finished" podID="278929af-12e7-492a-bbb9-c6abf5499349" containerID="62864eb12d93fc407d20700f7f41cddcd3ba15b91cd6341451be86f4786b03f8" exitCode=0 Jan 23 17:08:39 crc kubenswrapper[4606]: I0123 17:08:39.355051 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-wkkkd" Jan 23 17:08:39 crc kubenswrapper[4606]: I0123 17:08:39.356502 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6f696b9-wkkkd" event={"ID":"278929af-12e7-492a-bbb9-c6abf5499349","Type":"ContainerDied","Data":"62864eb12d93fc407d20700f7f41cddcd3ba15b91cd6341451be86f4786b03f8"} Jan 23 17:08:39 crc kubenswrapper[4606]: I0123 17:08:39.356530 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6f696b9-wkkkd" event={"ID":"278929af-12e7-492a-bbb9-c6abf5499349","Type":"ContainerDied","Data":"c3e4161a29acc40f68f9a7dbe62c1e77a17e86e86b5b7b9d21ed24d020ea1f8c"} Jan 23 17:08:39 crc kubenswrapper[4606]: I0123 17:08:39.356547 4606 scope.go:117] "RemoveContainer" containerID="62864eb12d93fc407d20700f7f41cddcd3ba15b91cd6341451be86f4786b03f8" Jan 23 17:08:39 crc kubenswrapper[4606]: I0123 17:08:39.358287 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-grzgc" event={"ID":"bc6dd1de-854d-4048-84de-c1a639ec37fe","Type":"ContainerStarted","Data":"dac6cd3ff11bb709c1c7dd4de628a63bab67e8414223112891963ed0ffb33d6c"} Jan 23 17:08:39 crc kubenswrapper[4606]: I0123 17:08:39.399225 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-5h5kn" podStartSLOduration=3.661381423 podStartE2EDuration="12.399205117s" podCreationTimestamp="2026-01-23 17:08:27 +0000 UTC" firstStartedPulling="2026-01-23 17:08:29.007161759 +0000 UTC m=+943.007354165" lastFinishedPulling="2026-01-23 17:08:37.744985453 +0000 UTC m=+951.745177859" observedRunningTime="2026-01-23 17:08:39.351207544 +0000 UTC m=+953.351399950" watchObservedRunningTime="2026-01-23 17:08:39.399205117 +0000 UTC m=+953.399397533" Jan 23 17:08:39 crc kubenswrapper[4606]: I0123 17:08:39.477190 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-wkkkd"] Jan 23 17:08:39 crc kubenswrapper[4606]: I0123 17:08:39.488681 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-wkkkd"] Jan 23 17:08:39 crc kubenswrapper[4606]: I0123 17:08:39.496084 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-xksgt" podStartSLOduration=2.496068476 podStartE2EDuration="2.496068476s" podCreationTimestamp="2026-01-23 17:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:08:39.493285848 +0000 UTC m=+953.493478264" watchObservedRunningTime="2026-01-23 17:08:39.496068476 +0000 UTC m=+953.496260882" Jan 23 17:08:39 crc kubenswrapper[4606]: I0123 17:08:39.517901 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/06f54b44-852a-4700-ac26-e9d89a545f28-etc-swift\") pod \"swift-storage-0\" (UID: \"06f54b44-852a-4700-ac26-e9d89a545f28\") " pod="openstack/swift-storage-0" Jan 23 17:08:39 crc kubenswrapper[4606]: E0123 17:08:39.518645 4606 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 23 17:08:39 crc kubenswrapper[4606]: E0123 17:08:39.518672 4606 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 23 17:08:39 crc kubenswrapper[4606]: E0123 17:08:39.518715 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/06f54b44-852a-4700-ac26-e9d89a545f28-etc-swift podName:06f54b44-852a-4700-ac26-e9d89a545f28 nodeName:}" failed. No retries permitted until 2026-01-23 17:08:55.518701309 +0000 UTC m=+969.518893715 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/06f54b44-852a-4700-ac26-e9d89a545f28-etc-swift") pod "swift-storage-0" (UID: "06f54b44-852a-4700-ac26-e9d89a545f28") : configmap "swift-ring-files" not found Jan 23 17:08:39 crc kubenswrapper[4606]: I0123 17:08:39.655686 4606 scope.go:117] "RemoveContainer" containerID="62864eb12d93fc407d20700f7f41cddcd3ba15b91cd6341451be86f4786b03f8" Jan 23 17:08:39 crc kubenswrapper[4606]: E0123 17:08:39.656525 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62864eb12d93fc407d20700f7f41cddcd3ba15b91cd6341451be86f4786b03f8\": container with ID starting with 62864eb12d93fc407d20700f7f41cddcd3ba15b91cd6341451be86f4786b03f8 not found: ID does not exist" containerID="62864eb12d93fc407d20700f7f41cddcd3ba15b91cd6341451be86f4786b03f8" Jan 23 17:08:39 crc kubenswrapper[4606]: I0123 17:08:39.656572 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62864eb12d93fc407d20700f7f41cddcd3ba15b91cd6341451be86f4786b03f8"} err="failed to get container status \"62864eb12d93fc407d20700f7f41cddcd3ba15b91cd6341451be86f4786b03f8\": rpc error: code = NotFound desc = could not find container \"62864eb12d93fc407d20700f7f41cddcd3ba15b91cd6341451be86f4786b03f8\": container with ID starting with 62864eb12d93fc407d20700f7f41cddcd3ba15b91cd6341451be86f4786b03f8 not found: ID does not exist" Jan 23 17:08:40 crc kubenswrapper[4606]: I0123 17:08:40.376916 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="278929af-12e7-492a-bbb9-c6abf5499349" path="/var/lib/kubelet/pods/278929af-12e7-492a-bbb9-c6abf5499349/volumes" Jan 23 17:08:40 crc kubenswrapper[4606]: I0123 17:08:40.377860 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="643935af-0e1e-4d4c-9ffe-f4d8674e0f20" path="/var/lib/kubelet/pods/643935af-0e1e-4d4c-9ffe-f4d8674e0f20/volumes" Jan 23 17:08:40 crc kubenswrapper[4606]: I0123 17:08:40.378528 4606 generic.go:334] "Generic (PLEG): container finished" podID="4ed41886-68c5-48a1-971f-279081891310" containerID="736cdaa555a5cc9faa3cb9b660727e9466e9f91be9a63c1da22acb0883ab9610" exitCode=0 Jan 23 17:08:40 crc kubenswrapper[4606]: I0123 17:08:40.378546 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cf778ec6-6b7b-492e-948c-0314dc6b26ba","Type":"ContainerStarted","Data":"bc62fe43831d0e14eb16795d495bc0e105a0df2a474b9c5a4715aadb4c7a329f"} Jan 23 17:08:40 crc kubenswrapper[4606]: I0123 
17:08:40.378571 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-2brrb" Jan 23 17:08:40 crc kubenswrapper[4606]: I0123 17:08:40.378594 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8","Type":"ContainerStarted","Data":"5ebecab66c396098a97b7c32397c23357b5c125f1bd3f66768ca64601ed3d0f5"} Jan 23 17:08:40 crc kubenswrapper[4606]: I0123 17:08:40.378606 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z5v24" event={"ID":"87401ebd-0396-4ae2-9231-b2dbb28e308e","Type":"ContainerStarted","Data":"c4c1c37d6c5caff466e784866d9f43e3b95a48ec083ddc2d0e910d42b3c9f9b8"} Jan 23 17:08:40 crc kubenswrapper[4606]: I0123 17:08:40.378618 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-2brrb" event={"ID":"d6a71de6-abb9-41ea-84ac-37aba42f4743","Type":"ContainerStarted","Data":"ff7ce9c8744f26e5614493c694422e9a094ab9e5f86648842927cca884f0226e"} Jan 23 17:08:40 crc kubenswrapper[4606]: I0123 17:08:40.378628 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xksgt" event={"ID":"4ed41886-68c5-48a1-971f-279081891310","Type":"ContainerDied","Data":"736cdaa555a5cc9faa3cb9b660727e9466e9f91be9a63c1da22acb0883ab9610"} Jan 23 17:08:40 crc kubenswrapper[4606]: I0123 17:08:40.378725 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:08:40 crc kubenswrapper[4606]: I0123 17:08:40.380367 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"965dca6f-f73d-4ee0-a511-91793448cc67","Type":"ContainerStarted","Data":"a3be9c9b583b84da3be6349ce0ae39e2c654fa65e908b75a5ab8726253f3750b"} Jan 23 17:08:40 crc kubenswrapper[4606]: I0123 17:08:40.491503 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=42.951335236 podStartE2EDuration="54.491482409s" podCreationTimestamp="2026-01-23 17:07:46 +0000 UTC" firstStartedPulling="2026-01-23 17:07:54.231244144 +0000 UTC m=+908.231436550" lastFinishedPulling="2026-01-23 17:08:05.771391317 +0000 UTC m=+919.771583723" observedRunningTime="2026-01-23 17:08:40.483412241 +0000 UTC m=+954.483604657" watchObservedRunningTime="2026-01-23 17:08:40.491482409 +0000 UTC m=+954.491674815" Jan 23 17:08:40 crc kubenswrapper[4606]: I0123 17:08:40.500051 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-2brrb" podStartSLOduration=8.500021117 podStartE2EDuration="8.500021117s" podCreationTimestamp="2026-01-23 17:08:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:08:40.438634956 +0000 UTC m=+954.438827382" watchObservedRunningTime="2026-01-23 17:08:40.500021117 +0000 UTC m=+954.500213523" Jan 23 17:08:40 crc kubenswrapper[4606]: I0123 17:08:40.596368 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=43.081533147 podStartE2EDuration="54.596352283s" podCreationTimestamp="2026-01-23 17:07:46 +0000 UTC" firstStartedPulling="2026-01-23 17:07:54.233659321 +0000 UTC m=+908.233851727" lastFinishedPulling="2026-01-23 17:08:05.748478457 +0000 UTC m=+919.748670863" observedRunningTime="2026-01-23 17:08:40.575014602 +0000 
UTC m=+954.575207008" watchObservedRunningTime="2026-01-23 17:08:40.596352283 +0000 UTC m=+954.596544689" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.262995 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-5vzbb"] Jan 23 17:08:41 crc kubenswrapper[4606]: E0123 17:08:41.263469 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f37c097b-95e1-44fc-970a-a12f009ff813" containerName="mariadb-database-create" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.263482 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="f37c097b-95e1-44fc-970a-a12f009ff813" containerName="mariadb-database-create" Jan 23 17:08:41 crc kubenswrapper[4606]: E0123 17:08:41.263496 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f83912b-7bd4-49dd-9584-4ea66a2405e2" containerName="mariadb-account-create-update" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.263502 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f83912b-7bd4-49dd-9584-4ea66a2405e2" containerName="mariadb-account-create-update" Jan 23 17:08:41 crc kubenswrapper[4606]: E0123 17:08:41.263515 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa4f942-7d08-468f-b85d-da2c3e2f9538" containerName="mariadb-database-create" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.263522 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fa4f942-7d08-468f-b85d-da2c3e2f9538" containerName="mariadb-database-create" Jan 23 17:08:41 crc kubenswrapper[4606]: E0123 17:08:41.263535 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="643935af-0e1e-4d4c-9ffe-f4d8674e0f20" containerName="init" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.263540 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="643935af-0e1e-4d4c-9ffe-f4d8674e0f20" containerName="init" Jan 23 17:08:41 crc kubenswrapper[4606]: E0123 17:08:41.263547 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d5e8555-29bb-44bc-a6ea-57cb81e8df4d" containerName="mariadb-account-create-update" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.263552 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d5e8555-29bb-44bc-a6ea-57cb81e8df4d" containerName="mariadb-account-create-update" Jan 23 17:08:41 crc kubenswrapper[4606]: E0123 17:08:41.263564 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc9bdbca-64c9-4e24-9ca0-79a79ee4d9bf" containerName="mariadb-account-create-update" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.263570 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc9bdbca-64c9-4e24-9ca0-79a79ee4d9bf" containerName="mariadb-account-create-update" Jan 23 17:08:41 crc kubenswrapper[4606]: E0123 17:08:41.263578 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="278929af-12e7-492a-bbb9-c6abf5499349" containerName="init" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.263586 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="278929af-12e7-492a-bbb9-c6abf5499349" containerName="init" Jan 23 17:08:41 crc kubenswrapper[4606]: E0123 17:08:41.263611 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18804b6a-aa2a-48d6-a27d-16e4e55772c8" containerName="mariadb-database-create" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.263617 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="18804b6a-aa2a-48d6-a27d-16e4e55772c8" containerName="mariadb-database-create" Jan 23 17:08:41 crc 
kubenswrapper[4606]: E0123 17:08:41.263637 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="643935af-0e1e-4d4c-9ffe-f4d8674e0f20" containerName="dnsmasq-dns" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.263657 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="643935af-0e1e-4d4c-9ffe-f4d8674e0f20" containerName="dnsmasq-dns" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.263816 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="278929af-12e7-492a-bbb9-c6abf5499349" containerName="init" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.263825 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="f37c097b-95e1-44fc-970a-a12f009ff813" containerName="mariadb-database-create" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.263837 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc9bdbca-64c9-4e24-9ca0-79a79ee4d9bf" containerName="mariadb-account-create-update" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.263846 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fa4f942-7d08-468f-b85d-da2c3e2f9538" containerName="mariadb-database-create" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.263855 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f83912b-7bd4-49dd-9584-4ea66a2405e2" containerName="mariadb-account-create-update" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.263864 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="643935af-0e1e-4d4c-9ffe-f4d8674e0f20" containerName="dnsmasq-dns" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.263879 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="18804b6a-aa2a-48d6-a27d-16e4e55772c8" containerName="mariadb-database-create" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.263888 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d5e8555-29bb-44bc-a6ea-57cb81e8df4d" containerName="mariadb-account-create-update" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.264356 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-5vzbb" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.267611 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wjzzh" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.267647 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.282853 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-5vzbb"] Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.350068 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b1b5e49-bdba-4463-b025-fc30bedd63f0-config-data\") pod \"glance-db-sync-5vzbb\" (UID: \"5b1b5e49-bdba-4463-b025-fc30bedd63f0\") " pod="openstack/glance-db-sync-5vzbb" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.350136 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5b1b5e49-bdba-4463-b025-fc30bedd63f0-db-sync-config-data\") pod \"glance-db-sync-5vzbb\" (UID: \"5b1b5e49-bdba-4463-b025-fc30bedd63f0\") " pod="openstack/glance-db-sync-5vzbb" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.350183 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg228\" (UniqueName: \"kubernetes.io/projected/5b1b5e49-bdba-4463-b025-fc30bedd63f0-kube-api-access-wg228\") pod \"glance-db-sync-5vzbb\" (UID: \"5b1b5e49-bdba-4463-b025-fc30bedd63f0\") " pod="openstack/glance-db-sync-5vzbb" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.350267 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b1b5e49-bdba-4463-b025-fc30bedd63f0-combined-ca-bundle\") pod \"glance-db-sync-5vzbb\" (UID: \"5b1b5e49-bdba-4463-b025-fc30bedd63f0\") " pod="openstack/glance-db-sync-5vzbb" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.389263 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"965dca6f-f73d-4ee0-a511-91793448cc67","Type":"ContainerStarted","Data":"6338571a0d0f16262bf1f75201d3ebe3e235027c7441a0c0b693d7d1533fca83"} Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.389342 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.391150 4606 generic.go:334] "Generic (PLEG): container finished" podID="87401ebd-0396-4ae2-9231-b2dbb28e308e" containerID="c4c1c37d6c5caff466e784866d9f43e3b95a48ec083ddc2d0e910d42b3c9f9b8" exitCode=0 Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.391248 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z5v24" event={"ID":"87401ebd-0396-4ae2-9231-b2dbb28e308e","Type":"ContainerDied","Data":"c4c1c37d6c5caff466e784866d9f43e3b95a48ec083ddc2d0e910d42b3c9f9b8"} Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.413348 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-r4l24"] Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.415145 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r4l24" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.451023 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r4l24"] Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.452186 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b1b5e49-bdba-4463-b025-fc30bedd63f0-combined-ca-bundle\") pod \"glance-db-sync-5vzbb\" (UID: \"5b1b5e49-bdba-4463-b025-fc30bedd63f0\") " pod="openstack/glance-db-sync-5vzbb" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.452828 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b1b5e49-bdba-4463-b025-fc30bedd63f0-config-data\") pod \"glance-db-sync-5vzbb\" (UID: \"5b1b5e49-bdba-4463-b025-fc30bedd63f0\") " pod="openstack/glance-db-sync-5vzbb" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.452854 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5b1b5e49-bdba-4463-b025-fc30bedd63f0-db-sync-config-data\") pod \"glance-db-sync-5vzbb\" (UID: \"5b1b5e49-bdba-4463-b025-fc30bedd63f0\") " pod="openstack/glance-db-sync-5vzbb" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.452876 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg228\" (UniqueName: \"kubernetes.io/projected/5b1b5e49-bdba-4463-b025-fc30bedd63f0-kube-api-access-wg228\") pod \"glance-db-sync-5vzbb\" (UID: \"5b1b5e49-bdba-4463-b025-fc30bedd63f0\") " pod="openstack/glance-db-sync-5vzbb" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.453289 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=7.201134262 podStartE2EDuration="9.453276019s" podCreationTimestamp="2026-01-23 17:08:32 +0000 UTC" firstStartedPulling="2026-01-23 17:08:37.484662156 +0000 UTC m=+951.484854552" lastFinishedPulling="2026-01-23 17:08:39.736803903 +0000 UTC m=+953.736996309" observedRunningTime="2026-01-23 17:08:41.449184289 +0000 UTC m=+955.449376705" watchObservedRunningTime="2026-01-23 17:08:41.453276019 +0000 UTC m=+955.453468415" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.463902 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b1b5e49-bdba-4463-b025-fc30bedd63f0-combined-ca-bundle\") pod \"glance-db-sync-5vzbb\" (UID: \"5b1b5e49-bdba-4463-b025-fc30bedd63f0\") " pod="openstack/glance-db-sync-5vzbb" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.466189 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5b1b5e49-bdba-4463-b025-fc30bedd63f0-db-sync-config-data\") pod \"glance-db-sync-5vzbb\" (UID: \"5b1b5e49-bdba-4463-b025-fc30bedd63f0\") " pod="openstack/glance-db-sync-5vzbb" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.501159 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b1b5e49-bdba-4463-b025-fc30bedd63f0-config-data\") pod \"glance-db-sync-5vzbb\" (UID: \"5b1b5e49-bdba-4463-b025-fc30bedd63f0\") " pod="openstack/glance-db-sync-5vzbb" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.514813 4606 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg228\" (UniqueName: \"kubernetes.io/projected/5b1b5e49-bdba-4463-b025-fc30bedd63f0-kube-api-access-wg228\") pod \"glance-db-sync-5vzbb\" (UID: \"5b1b5e49-bdba-4463-b025-fc30bedd63f0\") " pod="openstack/glance-db-sync-5vzbb" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.554782 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb011d49-3a61-484d-8407-5e31513a3fea-utilities\") pod \"community-operators-r4l24\" (UID: \"bb011d49-3a61-484d-8407-5e31513a3fea\") " pod="openshift-marketplace/community-operators-r4l24" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.554901 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2srb2\" (UniqueName: \"kubernetes.io/projected/bb011d49-3a61-484d-8407-5e31513a3fea-kube-api-access-2srb2\") pod \"community-operators-r4l24\" (UID: \"bb011d49-3a61-484d-8407-5e31513a3fea\") " pod="openshift-marketplace/community-operators-r4l24" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.554936 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb011d49-3a61-484d-8407-5e31513a3fea-catalog-content\") pod \"community-operators-r4l24\" (UID: \"bb011d49-3a61-484d-8407-5e31513a3fea\") " pod="openshift-marketplace/community-operators-r4l24" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.582988 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-5vzbb" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.655907 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2srb2\" (UniqueName: \"kubernetes.io/projected/bb011d49-3a61-484d-8407-5e31513a3fea-kube-api-access-2srb2\") pod \"community-operators-r4l24\" (UID: \"bb011d49-3a61-484d-8407-5e31513a3fea\") " pod="openshift-marketplace/community-operators-r4l24" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.655962 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb011d49-3a61-484d-8407-5e31513a3fea-catalog-content\") pod \"community-operators-r4l24\" (UID: \"bb011d49-3a61-484d-8407-5e31513a3fea\") " pod="openshift-marketplace/community-operators-r4l24" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.656074 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb011d49-3a61-484d-8407-5e31513a3fea-utilities\") pod \"community-operators-r4l24\" (UID: \"bb011d49-3a61-484d-8407-5e31513a3fea\") " pod="openshift-marketplace/community-operators-r4l24" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.656580 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb011d49-3a61-484d-8407-5e31513a3fea-catalog-content\") pod \"community-operators-r4l24\" (UID: \"bb011d49-3a61-484d-8407-5e31513a3fea\") " pod="openshift-marketplace/community-operators-r4l24" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.656629 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb011d49-3a61-484d-8407-5e31513a3fea-utilities\") 
pod \"community-operators-r4l24\" (UID: \"bb011d49-3a61-484d-8407-5e31513a3fea\") " pod="openshift-marketplace/community-operators-r4l24" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.679571 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2srb2\" (UniqueName: \"kubernetes.io/projected/bb011d49-3a61-484d-8407-5e31513a3fea-kube-api-access-2srb2\") pod \"community-operators-r4l24\" (UID: \"bb011d49-3a61-484d-8407-5e31513a3fea\") " pod="openshift-marketplace/community-operators-r4l24" Jan 23 17:08:41 crc kubenswrapper[4606]: I0123 17:08:41.745764 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r4l24" Jan 23 17:08:42 crc kubenswrapper[4606]: I0123 17:08:42.230588 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xksgt" Jan 23 17:08:42 crc kubenswrapper[4606]: I0123 17:08:42.374185 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ed41886-68c5-48a1-971f-279081891310-operator-scripts\") pod \"4ed41886-68c5-48a1-971f-279081891310\" (UID: \"4ed41886-68c5-48a1-971f-279081891310\") " Jan 23 17:08:42 crc kubenswrapper[4606]: I0123 17:08:42.374265 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhzf6\" (UniqueName: \"kubernetes.io/projected/4ed41886-68c5-48a1-971f-279081891310-kube-api-access-rhzf6\") pod \"4ed41886-68c5-48a1-971f-279081891310\" (UID: \"4ed41886-68c5-48a1-971f-279081891310\") " Jan 23 17:08:42 crc kubenswrapper[4606]: I0123 17:08:42.377114 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ed41886-68c5-48a1-971f-279081891310-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4ed41886-68c5-48a1-971f-279081891310" (UID: "4ed41886-68c5-48a1-971f-279081891310"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:08:42 crc kubenswrapper[4606]: I0123 17:08:42.381096 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ed41886-68c5-48a1-971f-279081891310-kube-api-access-rhzf6" (OuterVolumeSpecName: "kube-api-access-rhzf6") pod "4ed41886-68c5-48a1-971f-279081891310" (UID: "4ed41886-68c5-48a1-971f-279081891310"). InnerVolumeSpecName "kube-api-access-rhzf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:08:42 crc kubenswrapper[4606]: I0123 17:08:42.409194 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xksgt" Jan 23 17:08:42 crc kubenswrapper[4606]: I0123 17:08:42.409632 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xksgt" event={"ID":"4ed41886-68c5-48a1-971f-279081891310","Type":"ContainerDied","Data":"dddc5128171ced4919582c9ec6c637a08b8883edc5b8dbe221505ba7491a1f50"} Jan 23 17:08:42 crc kubenswrapper[4606]: I0123 17:08:42.409655 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dddc5128171ced4919582c9ec6c637a08b8883edc5b8dbe221505ba7491a1f50" Jan 23 17:08:42 crc kubenswrapper[4606]: I0123 17:08:42.437292 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r4l24"] Jan 23 17:08:42 crc kubenswrapper[4606]: I0123 17:08:42.477709 4606 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ed41886-68c5-48a1-971f-279081891310-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:42 crc kubenswrapper[4606]: I0123 17:08:42.477752 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhzf6\" (UniqueName: \"kubernetes.io/projected/4ed41886-68c5-48a1-971f-279081891310-kube-api-access-rhzf6\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:42 crc kubenswrapper[4606]: I0123 17:08:42.647840 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-5vzbb"] Jan 23 17:08:42 crc kubenswrapper[4606]: W0123 17:08:42.649778 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b1b5e49_bdba_4463_b025_fc30bedd63f0.slice/crio-da69dd14d0f169169c725c0520effce698bd3506f934a6025cc850139325310f WatchSource:0}: Error finding container da69dd14d0f169169c725c0520effce698bd3506f934a6025cc850139325310f: Status 404 returned error can't find the container with id da69dd14d0f169169c725c0520effce698bd3506f934a6025cc850139325310f Jan 23 17:08:43 crc kubenswrapper[4606]: I0123 17:08:43.415612 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4l24" event={"ID":"bb011d49-3a61-484d-8407-5e31513a3fea","Type":"ContainerStarted","Data":"672dd5ab7793df9d532bae001bf4585deacf73ef0494a1757f8a04638b50f537"} Jan 23 17:08:43 crc kubenswrapper[4606]: I0123 17:08:43.416411 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5vzbb" event={"ID":"5b1b5e49-bdba-4463-b025-fc30bedd63f0","Type":"ContainerStarted","Data":"da69dd14d0f169169c725c0520effce698bd3506f934a6025cc850139325310f"} Jan 23 17:08:43 crc kubenswrapper[4606]: I0123 17:08:43.418382 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z5v24" event={"ID":"87401ebd-0396-4ae2-9231-b2dbb28e308e","Type":"ContainerStarted","Data":"ace9ecaf7977622c67ccdc33d89672abc16e7b7b4b1bfaf608e6ca39d70fb9e3"} Jan 23 17:08:43 crc kubenswrapper[4606]: I0123 17:08:43.438513 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z5v24" podStartSLOduration=8.737849479 podStartE2EDuration="12.438494588s" podCreationTimestamp="2026-01-23 17:08:31 +0000 UTC" firstStartedPulling="2026-01-23 17:08:38.32307259 +0000 UTC m=+952.323265006" lastFinishedPulling="2026-01-23 17:08:42.023717709 +0000 UTC m=+956.023910115" observedRunningTime="2026-01-23 17:08:43.436347525 +0000 UTC m=+957.436539931" 
watchObservedRunningTime="2026-01-23 17:08:43.438494588 +0000 UTC m=+957.438686994" Jan 23 17:08:43 crc kubenswrapper[4606]: I0123 17:08:43.943079 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-xksgt"] Jan 23 17:08:43 crc kubenswrapper[4606]: I0123 17:08:43.951829 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-xksgt"] Jan 23 17:08:44 crc kubenswrapper[4606]: I0123 17:08:44.841674 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ed41886-68c5-48a1-971f-279081891310" path="/var/lib/kubelet/pods/4ed41886-68c5-48a1-971f-279081891310/volumes" Jan 23 17:08:45 crc kubenswrapper[4606]: I0123 17:08:45.845975 4606 generic.go:334] "Generic (PLEG): container finished" podID="bb011d49-3a61-484d-8407-5e31513a3fea" containerID="1bcc8f793115f731b5472b41b812d22d554286c0624128e5714d45c8fe53e6dd" exitCode=0 Jan 23 17:08:45 crc kubenswrapper[4606]: I0123 17:08:45.846016 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4l24" event={"ID":"bb011d49-3a61-484d-8407-5e31513a3fea","Type":"ContainerDied","Data":"1bcc8f793115f731b5472b41b812d22d554286c0624128e5714d45c8fe53e6dd"} Jan 23 17:08:46 crc kubenswrapper[4606]: I0123 17:08:46.876462 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4l24" event={"ID":"bb011d49-3a61-484d-8407-5e31513a3fea","Type":"ContainerStarted","Data":"a27f8a174d97db173f73a067cfa9a99f3b646848c497e48ee059391b0deb60f7"} Jan 23 17:08:47 crc kubenswrapper[4606]: I0123 17:08:47.439929 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4mzqq" Jan 23 17:08:47 crc kubenswrapper[4606]: I0123 17:08:47.530501 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4mzqq" Jan 23 17:08:47 crc kubenswrapper[4606]: I0123 17:08:47.706391 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-w4n8c"] Jan 23 17:08:47 crc kubenswrapper[4606]: E0123 17:08:47.706818 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ed41886-68c5-48a1-971f-279081891310" containerName="mariadb-account-create-update" Jan 23 17:08:47 crc kubenswrapper[4606]: I0123 17:08:47.706840 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ed41886-68c5-48a1-971f-279081891310" containerName="mariadb-account-create-update" Jan 23 17:08:47 crc kubenswrapper[4606]: I0123 17:08:47.707051 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ed41886-68c5-48a1-971f-279081891310" containerName="mariadb-account-create-update" Jan 23 17:08:47 crc kubenswrapper[4606]: I0123 17:08:47.708921 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-w4n8c" Jan 23 17:08:47 crc kubenswrapper[4606]: I0123 17:08:47.711977 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 23 17:08:47 crc kubenswrapper[4606]: I0123 17:08:47.732204 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-w4n8c"] Jan 23 17:08:47 crc kubenswrapper[4606]: I0123 17:08:47.772909 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7c3dc1f-51b3-485e-8be3-35ce389029e9-operator-scripts\") pod \"root-account-create-update-w4n8c\" (UID: \"e7c3dc1f-51b3-485e-8be3-35ce389029e9\") " pod="openstack/root-account-create-update-w4n8c" Jan 23 17:08:47 crc kubenswrapper[4606]: I0123 17:08:47.772942 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7kpk\" (UniqueName: \"kubernetes.io/projected/e7c3dc1f-51b3-485e-8be3-35ce389029e9-kube-api-access-w7kpk\") pod \"root-account-create-update-w4n8c\" (UID: \"e7c3dc1f-51b3-485e-8be3-35ce389029e9\") " pod="openstack/root-account-create-update-w4n8c" Jan 23 17:08:47 crc kubenswrapper[4606]: I0123 17:08:47.815800 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 23 17:08:47 crc kubenswrapper[4606]: I0123 17:08:47.874183 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7c3dc1f-51b3-485e-8be3-35ce389029e9-operator-scripts\") pod \"root-account-create-update-w4n8c\" (UID: \"e7c3dc1f-51b3-485e-8be3-35ce389029e9\") " pod="openstack/root-account-create-update-w4n8c" Jan 23 17:08:47 crc kubenswrapper[4606]: I0123 17:08:47.874222 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7kpk\" (UniqueName: \"kubernetes.io/projected/e7c3dc1f-51b3-485e-8be3-35ce389029e9-kube-api-access-w7kpk\") pod \"root-account-create-update-w4n8c\" (UID: \"e7c3dc1f-51b3-485e-8be3-35ce389029e9\") " pod="openstack/root-account-create-update-w4n8c" Jan 23 17:08:47 crc kubenswrapper[4606]: I0123 17:08:47.876666 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7c3dc1f-51b3-485e-8be3-35ce389029e9-operator-scripts\") pod \"root-account-create-update-w4n8c\" (UID: \"e7c3dc1f-51b3-485e-8be3-35ce389029e9\") " pod="openstack/root-account-create-update-w4n8c" Jan 23 17:08:47 crc kubenswrapper[4606]: I0123 17:08:47.896807 4606 generic.go:334] "Generic (PLEG): container finished" podID="bb011d49-3a61-484d-8407-5e31513a3fea" containerID="a27f8a174d97db173f73a067cfa9a99f3b646848c497e48ee059391b0deb60f7" exitCode=0 Jan 23 17:08:47 crc kubenswrapper[4606]: I0123 17:08:47.897742 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4l24" event={"ID":"bb011d49-3a61-484d-8407-5e31513a3fea","Type":"ContainerDied","Data":"a27f8a174d97db173f73a067cfa9a99f3b646848c497e48ee059391b0deb60f7"} Jan 23 17:08:47 crc kubenswrapper[4606]: I0123 17:08:47.909843 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7kpk\" (UniqueName: \"kubernetes.io/projected/e7c3dc1f-51b3-485e-8be3-35ce389029e9-kube-api-access-w7kpk\") pod \"root-account-create-update-w4n8c\" (UID: 
\"e7c3dc1f-51b3-485e-8be3-35ce389029e9\") " pod="openstack/root-account-create-update-w4n8c" Jan 23 17:08:48 crc kubenswrapper[4606]: I0123 17:08:48.032058 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-w4n8c" Jan 23 17:08:48 crc kubenswrapper[4606]: I0123 17:08:48.335732 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-2brrb" Jan 23 17:08:48 crc kubenswrapper[4606]: I0123 17:08:48.418149 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-kfwtp"] Jan 23 17:08:48 crc kubenswrapper[4606]: I0123 17:08:48.418392 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-kfwtp" podUID="fa0b5879-6896-403b-8565-7399b44e9ee2" containerName="dnsmasq-dns" containerID="cri-o://8c66fa3fce5ee2bffccc14a43bca4ba576c887cd2355d72cee0fa01e40ce5a96" gracePeriod=10 Jan 23 17:08:48 crc kubenswrapper[4606]: I0123 17:08:48.588663 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-w4n8c"] Jan 23 17:08:48 crc kubenswrapper[4606]: I0123 17:08:48.906714 4606 generic.go:334] "Generic (PLEG): container finished" podID="fa0b5879-6896-403b-8565-7399b44e9ee2" containerID="8c66fa3fce5ee2bffccc14a43bca4ba576c887cd2355d72cee0fa01e40ce5a96" exitCode=0 Jan 23 17:08:48 crc kubenswrapper[4606]: I0123 17:08:48.907015 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-kfwtp" event={"ID":"fa0b5879-6896-403b-8565-7399b44e9ee2","Type":"ContainerDied","Data":"8c66fa3fce5ee2bffccc14a43bca4ba576c887cd2355d72cee0fa01e40ce5a96"} Jan 23 17:08:48 crc kubenswrapper[4606]: I0123 17:08:48.908219 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-w4n8c" event={"ID":"e7c3dc1f-51b3-485e-8be3-35ce389029e9","Type":"ContainerStarted","Data":"c8329878e1a94e0509704b2984f0fcf495c9f76aee2d2dae8a49da98fe654c1a"} Jan 23 17:08:48 crc kubenswrapper[4606]: I0123 17:08:48.908239 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-w4n8c" event={"ID":"e7c3dc1f-51b3-485e-8be3-35ce389029e9","Type":"ContainerStarted","Data":"207238cec62c8dd986776a017081f95c06a14e8e801fd85bd48cd05c559dee1b"} Jan 23 17:08:48 crc kubenswrapper[4606]: I0123 17:08:48.945634 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-w4n8c" podStartSLOduration=1.945467001 podStartE2EDuration="1.945467001s" podCreationTimestamp="2026-01-23 17:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:08:48.940650963 +0000 UTC m=+962.940843369" watchObservedRunningTime="2026-01-23 17:08:48.945467001 +0000 UTC m=+962.945659437" Jan 23 17:08:49 crc kubenswrapper[4606]: I0123 17:08:49.420934 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4mzqq"] Jan 23 17:08:49 crc kubenswrapper[4606]: I0123 17:08:49.421364 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4mzqq" podUID="bca3a129-8ee0-4470-a96b-6d4844a0d063" containerName="registry-server" containerID="cri-o://9fa02fcb13bc86cf374223be1095f72aef2f05c0ff3add716e8038ce77bd7df5" gracePeriod=2 Jan 23 17:08:49 crc kubenswrapper[4606]: I0123 17:08:49.722873 4606 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-kfwtp" Jan 23 17:08:49 crc kubenswrapper[4606]: I0123 17:08:49.918580 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-kfwtp" event={"ID":"fa0b5879-6896-403b-8565-7399b44e9ee2","Type":"ContainerDied","Data":"9a246af0f77c2f090f68354b7d9dd96866858e7f6cbd092e0b46ef950a9f63e9"} Jan 23 17:08:49 crc kubenswrapper[4606]: I0123 17:08:49.918631 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-kfwtp" Jan 23 17:08:49 crc kubenswrapper[4606]: I0123 17:08:49.918679 4606 scope.go:117] "RemoveContainer" containerID="8c66fa3fce5ee2bffccc14a43bca4ba576c887cd2355d72cee0fa01e40ce5a96" Jan 23 17:08:49 crc kubenswrapper[4606]: I0123 17:08:49.919356 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa0b5879-6896-403b-8565-7399b44e9ee2-config\") pod \"fa0b5879-6896-403b-8565-7399b44e9ee2\" (UID: \"fa0b5879-6896-403b-8565-7399b44e9ee2\") " Jan 23 17:08:49 crc kubenswrapper[4606]: I0123 17:08:49.919493 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa0b5879-6896-403b-8565-7399b44e9ee2-dns-svc\") pod \"fa0b5879-6896-403b-8565-7399b44e9ee2\" (UID: \"fa0b5879-6896-403b-8565-7399b44e9ee2\") " Jan 23 17:08:49 crc kubenswrapper[4606]: I0123 17:08:49.919565 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8clx\" (UniqueName: \"kubernetes.io/projected/fa0b5879-6896-403b-8565-7399b44e9ee2-kube-api-access-f8clx\") pod \"fa0b5879-6896-403b-8565-7399b44e9ee2\" (UID: \"fa0b5879-6896-403b-8565-7399b44e9ee2\") " Jan 23 17:08:49 crc kubenswrapper[4606]: I0123 17:08:49.929365 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4l24" event={"ID":"bb011d49-3a61-484d-8407-5e31513a3fea","Type":"ContainerStarted","Data":"9b16dad234518e6801d36320cc85c951c0d38d0fd83855d2de4e5bea03eec0cf"} Jan 23 17:08:49 crc kubenswrapper[4606]: I0123 17:08:49.938360 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa0b5879-6896-403b-8565-7399b44e9ee2-kube-api-access-f8clx" (OuterVolumeSpecName: "kube-api-access-f8clx") pod "fa0b5879-6896-403b-8565-7399b44e9ee2" (UID: "fa0b5879-6896-403b-8565-7399b44e9ee2"). InnerVolumeSpecName "kube-api-access-f8clx". 
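[editor's note] The recurring "No sandbox for pod can be found. Need to start a new one" / "No ready sandbox for pod can be found" lines (util.go:30 and util.go:48) are the kubelet's sync loop concluding that a pod needs a fresh CRI pod sandbox, either because one was never created or because the previous one died along with its containers. A minimal sketch of the runtime call this decision leads to, assuming a CRI-O socket at /run/crio/crio.sock and omitting the many config fields a real sandbox request carries:

    package main

    import (
        "context"
        "fmt"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Dial the CRI runtime socket (the path is an assumption for this sketch).
        conn, err := grpc.Dial("unix:///run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)

        // After logging "No sandbox for pod can be found", the kubelet asks the
        // runtime for a brand-new pod sandbox; metadata below is taken from the
        // root-account-create-update-w4n8c entries above.
        resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{
                Metadata: &runtimeapi.PodSandboxMetadata{
                    Name:      "root-account-create-update-w4n8c",
                    Namespace: "openstack",
                    Uid:       "e7c3dc1f-51b3-485e-8be3-35ce389029e9",
                    Attempt:   0,
                },
            },
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("new sandbox ID:", resp.PodSandboxId)
    }

In the real kubelet the sandbox config is derived from the full pod spec; the point is only that this log line is followed by a RunPodSandbox round-trip to the runtime.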
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:08:49 crc kubenswrapper[4606]: I0123 17:08:49.944960 4606 generic.go:334] "Generic (PLEG): container finished" podID="e7c3dc1f-51b3-485e-8be3-35ce389029e9" containerID="c8329878e1a94e0509704b2984f0fcf495c9f76aee2d2dae8a49da98fe654c1a" exitCode=0 Jan 23 17:08:49 crc kubenswrapper[4606]: I0123 17:08:49.945070 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-w4n8c" event={"ID":"e7c3dc1f-51b3-485e-8be3-35ce389029e9","Type":"ContainerDied","Data":"c8329878e1a94e0509704b2984f0fcf495c9f76aee2d2dae8a49da98fe654c1a"} Jan 23 17:08:49 crc kubenswrapper[4606]: I0123 17:08:49.955732 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-r4l24" podStartSLOduration=5.967689204 podStartE2EDuration="8.955712306s" podCreationTimestamp="2026-01-23 17:08:41 +0000 UTC" firstStartedPulling="2026-01-23 17:08:45.847935261 +0000 UTC m=+959.848127667" lastFinishedPulling="2026-01-23 17:08:48.835958363 +0000 UTC m=+962.836150769" observedRunningTime="2026-01-23 17:08:49.951040482 +0000 UTC m=+963.951232908" watchObservedRunningTime="2026-01-23 17:08:49.955712306 +0000 UTC m=+963.955904712" Jan 23 17:08:49 crc kubenswrapper[4606]: I0123 17:08:49.959206 4606 generic.go:334] "Generic (PLEG): container finished" podID="bca3a129-8ee0-4470-a96b-6d4844a0d063" containerID="9fa02fcb13bc86cf374223be1095f72aef2f05c0ff3add716e8038ce77bd7df5" exitCode=0 Jan 23 17:08:49 crc kubenswrapper[4606]: I0123 17:08:49.959252 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4mzqq" event={"ID":"bca3a129-8ee0-4470-a96b-6d4844a0d063","Type":"ContainerDied","Data":"9fa02fcb13bc86cf374223be1095f72aef2f05c0ff3add716e8038ce77bd7df5"} Jan 23 17:08:49 crc kubenswrapper[4606]: I0123 17:08:49.970789 4606 scope.go:117] "RemoveContainer" containerID="d87a3fdbebce3cf7c99dcea7c659025da4ba40f056f52986ddc22dd970fb9120" Jan 23 17:08:50 crc kubenswrapper[4606]: I0123 17:08:50.023756 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa0b5879-6896-403b-8565-7399b44e9ee2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fa0b5879-6896-403b-8565-7399b44e9ee2" (UID: "fa0b5879-6896-403b-8565-7399b44e9ee2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:08:50 crc kubenswrapper[4606]: I0123 17:08:50.024447 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa0b5879-6896-403b-8565-7399b44e9ee2-dns-svc\") pod \"fa0b5879-6896-403b-8565-7399b44e9ee2\" (UID: \"fa0b5879-6896-403b-8565-7399b44e9ee2\") " Jan 23 17:08:50 crc kubenswrapper[4606]: W0123 17:08:50.024742 4606 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/fa0b5879-6896-403b-8565-7399b44e9ee2/volumes/kubernetes.io~configmap/dns-svc Jan 23 17:08:50 crc kubenswrapper[4606]: I0123 17:08:50.024948 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa0b5879-6896-403b-8565-7399b44e9ee2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fa0b5879-6896-403b-8565-7399b44e9ee2" (UID: "fa0b5879-6896-403b-8565-7399b44e9ee2"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:08:50 crc kubenswrapper[4606]: I0123 17:08:50.024861 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8clx\" (UniqueName: \"kubernetes.io/projected/fa0b5879-6896-403b-8565-7399b44e9ee2-kube-api-access-f8clx\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:50 crc kubenswrapper[4606]: I0123 17:08:50.062002 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa0b5879-6896-403b-8565-7399b44e9ee2-config" (OuterVolumeSpecName: "config") pod "fa0b5879-6896-403b-8565-7399b44e9ee2" (UID: "fa0b5879-6896-403b-8565-7399b44e9ee2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:08:50 crc kubenswrapper[4606]: I0123 17:08:50.126362 4606 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa0b5879-6896-403b-8565-7399b44e9ee2-config\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:50 crc kubenswrapper[4606]: I0123 17:08:50.126562 4606 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa0b5879-6896-403b-8565-7399b44e9ee2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:50 crc kubenswrapper[4606]: I0123 17:08:50.244627 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-kfwtp"] Jan 23 17:08:50 crc kubenswrapper[4606]: I0123 17:08:50.249654 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-kfwtp"] Jan 23 17:08:50 crc kubenswrapper[4606]: I0123 17:08:50.379625 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa0b5879-6896-403b-8565-7399b44e9ee2" path="/var/lib/kubelet/pods/fa0b5879-6896-403b-8565-7399b44e9ee2/volumes" Jan 23 17:08:50 crc kubenswrapper[4606]: I0123 17:08:50.514341 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4mzqq" Jan 23 17:08:50 crc kubenswrapper[4606]: I0123 17:08:50.639831 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bca3a129-8ee0-4470-a96b-6d4844a0d063-utilities\") pod \"bca3a129-8ee0-4470-a96b-6d4844a0d063\" (UID: \"bca3a129-8ee0-4470-a96b-6d4844a0d063\") " Jan 23 17:08:50 crc kubenswrapper[4606]: I0123 17:08:50.639898 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgkrg\" (UniqueName: \"kubernetes.io/projected/bca3a129-8ee0-4470-a96b-6d4844a0d063-kube-api-access-wgkrg\") pod \"bca3a129-8ee0-4470-a96b-6d4844a0d063\" (UID: \"bca3a129-8ee0-4470-a96b-6d4844a0d063\") " Jan 23 17:08:50 crc kubenswrapper[4606]: I0123 17:08:50.640021 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bca3a129-8ee0-4470-a96b-6d4844a0d063-catalog-content\") pod \"bca3a129-8ee0-4470-a96b-6d4844a0d063\" (UID: \"bca3a129-8ee0-4470-a96b-6d4844a0d063\") " Jan 23 17:08:50 crc kubenswrapper[4606]: I0123 17:08:50.640690 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bca3a129-8ee0-4470-a96b-6d4844a0d063-utilities" (OuterVolumeSpecName: "utilities") pod "bca3a129-8ee0-4470-a96b-6d4844a0d063" (UID: "bca3a129-8ee0-4470-a96b-6d4844a0d063"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:08:50 crc kubenswrapper[4606]: I0123 17:08:50.645496 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bca3a129-8ee0-4470-a96b-6d4844a0d063-kube-api-access-wgkrg" (OuterVolumeSpecName: "kube-api-access-wgkrg") pod "bca3a129-8ee0-4470-a96b-6d4844a0d063" (UID: "bca3a129-8ee0-4470-a96b-6d4844a0d063"). InnerVolumeSpecName "kube-api-access-wgkrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:08:50 crc kubenswrapper[4606]: I0123 17:08:50.742909 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bca3a129-8ee0-4470-a96b-6d4844a0d063-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:50 crc kubenswrapper[4606]: I0123 17:08:50.743158 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgkrg\" (UniqueName: \"kubernetes.io/projected/bca3a129-8ee0-4470-a96b-6d4844a0d063-kube-api-access-wgkrg\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:50 crc kubenswrapper[4606]: I0123 17:08:50.787801 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bca3a129-8ee0-4470-a96b-6d4844a0d063-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bca3a129-8ee0-4470-a96b-6d4844a0d063" (UID: "bca3a129-8ee0-4470-a96b-6d4844a0d063"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:08:50 crc kubenswrapper[4606]: I0123 17:08:50.845691 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bca3a129-8ee0-4470-a96b-6d4844a0d063-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:50 crc kubenswrapper[4606]: I0123 17:08:50.972690 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4mzqq" event={"ID":"bca3a129-8ee0-4470-a96b-6d4844a0d063","Type":"ContainerDied","Data":"5e7368bd215f72334c157d221fa3134cd0fc9a300de2981fe8dc92dc58680aa6"} Jan 23 17:08:50 crc kubenswrapper[4606]: I0123 17:08:50.972751 4606 scope.go:117] "RemoveContainer" containerID="9fa02fcb13bc86cf374223be1095f72aef2f05c0ff3add716e8038ce77bd7df5" Jan 23 17:08:50 crc kubenswrapper[4606]: I0123 17:08:50.972788 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4mzqq" Jan 23 17:08:51 crc kubenswrapper[4606]: I0123 17:08:51.007372 4606 scope.go:117] "RemoveContainer" containerID="4d9cef0aaaac2469acefaccff1968512013bb76d241430d13daf872b9a84e5b7" Jan 23 17:08:51 crc kubenswrapper[4606]: I0123 17:08:51.010579 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4mzqq"] Jan 23 17:08:51 crc kubenswrapper[4606]: I0123 17:08:51.034126 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4mzqq"] Jan 23 17:08:51 crc kubenswrapper[4606]: I0123 17:08:51.035306 4606 scope.go:117] "RemoveContainer" containerID="5b3427d4b655c9e959f6c6f7ae0b949c181423c1fa0731b549d51f876d2cd80e" Jan 23 17:08:51 crc kubenswrapper[4606]: I0123 17:08:51.380831 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-w4n8c" Jan 23 17:08:51 crc kubenswrapper[4606]: I0123 17:08:51.461024 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7c3dc1f-51b3-485e-8be3-35ce389029e9-operator-scripts\") pod \"e7c3dc1f-51b3-485e-8be3-35ce389029e9\" (UID: \"e7c3dc1f-51b3-485e-8be3-35ce389029e9\") " Jan 23 17:08:51 crc kubenswrapper[4606]: I0123 17:08:51.461069 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7kpk\" (UniqueName: \"kubernetes.io/projected/e7c3dc1f-51b3-485e-8be3-35ce389029e9-kube-api-access-w7kpk\") pod \"e7c3dc1f-51b3-485e-8be3-35ce389029e9\" (UID: \"e7c3dc1f-51b3-485e-8be3-35ce389029e9\") " Jan 23 17:08:51 crc kubenswrapper[4606]: I0123 17:08:51.462234 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7c3dc1f-51b3-485e-8be3-35ce389029e9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e7c3dc1f-51b3-485e-8be3-35ce389029e9" (UID: "e7c3dc1f-51b3-485e-8be3-35ce389029e9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:08:51 crc kubenswrapper[4606]: I0123 17:08:51.465653 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7c3dc1f-51b3-485e-8be3-35ce389029e9-kube-api-access-w7kpk" (OuterVolumeSpecName: "kube-api-access-w7kpk") pod "e7c3dc1f-51b3-485e-8be3-35ce389029e9" (UID: "e7c3dc1f-51b3-485e-8be3-35ce389029e9"). InnerVolumeSpecName "kube-api-access-w7kpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:08:51 crc kubenswrapper[4606]: I0123 17:08:51.563376 4606 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7c3dc1f-51b3-485e-8be3-35ce389029e9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:51 crc kubenswrapper[4606]: I0123 17:08:51.563414 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7kpk\" (UniqueName: \"kubernetes.io/projected/e7c3dc1f-51b3-485e-8be3-35ce389029e9-kube-api-access-w7kpk\") on node \"crc\" DevicePath \"\"" Jan 23 17:08:51 crc kubenswrapper[4606]: I0123 17:08:51.642502 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-5sjkh" podUID="7063b991-5662-4335-9e9e-73f057367708" containerName="ovn-controller" probeResult="failure" output=< Jan 23 17:08:51 crc kubenswrapper[4606]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 23 17:08:51 crc kubenswrapper[4606]: > Jan 23 17:08:51 crc kubenswrapper[4606]: I0123 17:08:51.746660 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-r4l24" Jan 23 17:08:51 crc kubenswrapper[4606]: I0123 17:08:51.746720 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-r4l24" Jan 23 17:08:51 crc kubenswrapper[4606]: I0123 17:08:51.964831 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z5v24" Jan 23 17:08:51 crc kubenswrapper[4606]: I0123 17:08:51.964878 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z5v24" Jan 23 17:08:51 crc kubenswrapper[4606]: I0123 17:08:51.988511 4606 generic.go:334] "Generic 
(PLEG): container finished" podID="f26de21c-1da5-42d4-9dcd-0e6415109f94" containerID="359d52a2fca1616da952813ece6f3fe4c9244ec7895f724563b3d52e6626e1f0" exitCode=0 Jan 23 17:08:51 crc kubenswrapper[4606]: I0123 17:08:51.988611 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-5h5kn" event={"ID":"f26de21c-1da5-42d4-9dcd-0e6415109f94","Type":"ContainerDied","Data":"359d52a2fca1616da952813ece6f3fe4c9244ec7895f724563b3d52e6626e1f0"} Jan 23 17:08:51 crc kubenswrapper[4606]: I0123 17:08:51.990313 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-w4n8c" event={"ID":"e7c3dc1f-51b3-485e-8be3-35ce389029e9","Type":"ContainerDied","Data":"207238cec62c8dd986776a017081f95c06a14e8e801fd85bd48cd05c559dee1b"} Jan 23 17:08:51 crc kubenswrapper[4606]: I0123 17:08:51.990325 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-w4n8c" Jan 23 17:08:51 crc kubenswrapper[4606]: I0123 17:08:51.990339 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="207238cec62c8dd986776a017081f95c06a14e8e801fd85bd48cd05c559dee1b" Jan 23 17:08:52 crc kubenswrapper[4606]: I0123 17:08:52.021681 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z5v24" Jan 23 17:08:52 crc kubenswrapper[4606]: I0123 17:08:52.075998 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z5v24" Jan 23 17:08:52 crc kubenswrapper[4606]: I0123 17:08:52.377736 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bca3a129-8ee0-4470-a96b-6d4844a0d063" path="/var/lib/kubelet/pods/bca3a129-8ee0-4470-a96b-6d4844a0d063/volumes" Jan 23 17:08:52 crc kubenswrapper[4606]: I0123 17:08:52.799160 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-r4l24" podUID="bb011d49-3a61-484d-8407-5e31513a3fea" containerName="registry-server" probeResult="failure" output=< Jan 23 17:08:52 crc kubenswrapper[4606]: timeout: failed to connect service ":50051" within 1s Jan 23 17:08:52 crc kubenswrapper[4606]: > Jan 23 17:08:52 crc kubenswrapper[4606]: I0123 17:08:52.962751 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-tmz28" Jan 23 17:08:52 crc kubenswrapper[4606]: I0123 17:08:52.963961 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-tmz28" Jan 23 17:08:53 crc kubenswrapper[4606]: I0123 17:08:53.215513 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-5sjkh-config-v4kxw"] Jan 23 17:08:53 crc kubenswrapper[4606]: E0123 17:08:53.215934 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bca3a129-8ee0-4470-a96b-6d4844a0d063" containerName="extract-utilities" Jan 23 17:08:53 crc kubenswrapper[4606]: I0123 17:08:53.215949 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="bca3a129-8ee0-4470-a96b-6d4844a0d063" containerName="extract-utilities" Jan 23 17:08:53 crc kubenswrapper[4606]: E0123 17:08:53.215960 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bca3a129-8ee0-4470-a96b-6d4844a0d063" containerName="registry-server" Jan 23 17:08:53 crc kubenswrapper[4606]: I0123 17:08:53.215968 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="bca3a129-8ee0-4470-a96b-6d4844a0d063" 
containerName="registry-server" Jan 23 17:08:53 crc kubenswrapper[4606]: E0123 17:08:53.215978 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bca3a129-8ee0-4470-a96b-6d4844a0d063" containerName="extract-content" Jan 23 17:08:53 crc kubenswrapper[4606]: I0123 17:08:53.215984 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="bca3a129-8ee0-4470-a96b-6d4844a0d063" containerName="extract-content" Jan 23 17:08:53 crc kubenswrapper[4606]: E0123 17:08:53.215995 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa0b5879-6896-403b-8565-7399b44e9ee2" containerName="init" Jan 23 17:08:53 crc kubenswrapper[4606]: I0123 17:08:53.216000 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa0b5879-6896-403b-8565-7399b44e9ee2" containerName="init" Jan 23 17:08:53 crc kubenswrapper[4606]: E0123 17:08:53.216011 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7c3dc1f-51b3-485e-8be3-35ce389029e9" containerName="mariadb-account-create-update" Jan 23 17:08:53 crc kubenswrapper[4606]: I0123 17:08:53.216017 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7c3dc1f-51b3-485e-8be3-35ce389029e9" containerName="mariadb-account-create-update" Jan 23 17:08:53 crc kubenswrapper[4606]: E0123 17:08:53.216029 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa0b5879-6896-403b-8565-7399b44e9ee2" containerName="dnsmasq-dns" Jan 23 17:08:53 crc kubenswrapper[4606]: I0123 17:08:53.216034 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa0b5879-6896-403b-8565-7399b44e9ee2" containerName="dnsmasq-dns" Jan 23 17:08:53 crc kubenswrapper[4606]: I0123 17:08:53.216174 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa0b5879-6896-403b-8565-7399b44e9ee2" containerName="dnsmasq-dns" Jan 23 17:08:53 crc kubenswrapper[4606]: I0123 17:08:53.216188 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7c3dc1f-51b3-485e-8be3-35ce389029e9" containerName="mariadb-account-create-update" Jan 23 17:08:53 crc kubenswrapper[4606]: I0123 17:08:53.216199 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="bca3a129-8ee0-4470-a96b-6d4844a0d063" containerName="registry-server" Jan 23 17:08:53 crc kubenswrapper[4606]: I0123 17:08:53.216851 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-5sjkh-config-v4kxw" Jan 23 17:08:53 crc kubenswrapper[4606]: I0123 17:08:53.222071 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 23 17:08:53 crc kubenswrapper[4606]: I0123 17:08:53.226283 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5sjkh-config-v4kxw"] Jan 23 17:08:53 crc kubenswrapper[4606]: I0123 17:08:53.309760 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f88968ca-e167-49a5-91ed-ee4f091c38bb-var-run\") pod \"ovn-controller-5sjkh-config-v4kxw\" (UID: \"f88968ca-e167-49a5-91ed-ee4f091c38bb\") " pod="openstack/ovn-controller-5sjkh-config-v4kxw" Jan 23 17:08:53 crc kubenswrapper[4606]: I0123 17:08:53.309809 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjs6b\" (UniqueName: \"kubernetes.io/projected/f88968ca-e167-49a5-91ed-ee4f091c38bb-kube-api-access-pjs6b\") pod \"ovn-controller-5sjkh-config-v4kxw\" (UID: \"f88968ca-e167-49a5-91ed-ee4f091c38bb\") " pod="openstack/ovn-controller-5sjkh-config-v4kxw" Jan 23 17:08:53 crc kubenswrapper[4606]: I0123 17:08:53.309943 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f88968ca-e167-49a5-91ed-ee4f091c38bb-var-run-ovn\") pod \"ovn-controller-5sjkh-config-v4kxw\" (UID: \"f88968ca-e167-49a5-91ed-ee4f091c38bb\") " pod="openstack/ovn-controller-5sjkh-config-v4kxw" Jan 23 17:08:53 crc kubenswrapper[4606]: I0123 17:08:53.309973 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f88968ca-e167-49a5-91ed-ee4f091c38bb-scripts\") pod \"ovn-controller-5sjkh-config-v4kxw\" (UID: \"f88968ca-e167-49a5-91ed-ee4f091c38bb\") " pod="openstack/ovn-controller-5sjkh-config-v4kxw" Jan 23 17:08:53 crc kubenswrapper[4606]: I0123 17:08:53.309993 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f88968ca-e167-49a5-91ed-ee4f091c38bb-var-log-ovn\") pod \"ovn-controller-5sjkh-config-v4kxw\" (UID: \"f88968ca-e167-49a5-91ed-ee4f091c38bb\") " pod="openstack/ovn-controller-5sjkh-config-v4kxw" Jan 23 17:08:53 crc kubenswrapper[4606]: I0123 17:08:53.310031 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f88968ca-e167-49a5-91ed-ee4f091c38bb-additional-scripts\") pod \"ovn-controller-5sjkh-config-v4kxw\" (UID: \"f88968ca-e167-49a5-91ed-ee4f091c38bb\") " pod="openstack/ovn-controller-5sjkh-config-v4kxw" Jan 23 17:08:53 crc kubenswrapper[4606]: I0123 17:08:53.335906 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 23 17:08:53 crc kubenswrapper[4606]: I0123 17:08:53.416815 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f88968ca-e167-49a5-91ed-ee4f091c38bb-var-run-ovn\") pod \"ovn-controller-5sjkh-config-v4kxw\" (UID: \"f88968ca-e167-49a5-91ed-ee4f091c38bb\") " pod="openstack/ovn-controller-5sjkh-config-v4kxw" Jan 23 17:08:53 crc kubenswrapper[4606]: I0123 17:08:53.416878 4606 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f88968ca-e167-49a5-91ed-ee4f091c38bb-scripts\") pod \"ovn-controller-5sjkh-config-v4kxw\" (UID: \"f88968ca-e167-49a5-91ed-ee4f091c38bb\") " pod="openstack/ovn-controller-5sjkh-config-v4kxw" Jan 23 17:08:53 crc kubenswrapper[4606]: I0123 17:08:53.416904 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f88968ca-e167-49a5-91ed-ee4f091c38bb-var-log-ovn\") pod \"ovn-controller-5sjkh-config-v4kxw\" (UID: \"f88968ca-e167-49a5-91ed-ee4f091c38bb\") " pod="openstack/ovn-controller-5sjkh-config-v4kxw" Jan 23 17:08:53 crc kubenswrapper[4606]: I0123 17:08:53.416962 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f88968ca-e167-49a5-91ed-ee4f091c38bb-additional-scripts\") pod \"ovn-controller-5sjkh-config-v4kxw\" (UID: \"f88968ca-e167-49a5-91ed-ee4f091c38bb\") " pod="openstack/ovn-controller-5sjkh-config-v4kxw" Jan 23 17:08:53 crc kubenswrapper[4606]: I0123 17:08:53.417023 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f88968ca-e167-49a5-91ed-ee4f091c38bb-var-run\") pod \"ovn-controller-5sjkh-config-v4kxw\" (UID: \"f88968ca-e167-49a5-91ed-ee4f091c38bb\") " pod="openstack/ovn-controller-5sjkh-config-v4kxw" Jan 23 17:08:53 crc kubenswrapper[4606]: I0123 17:08:53.417046 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjs6b\" (UniqueName: \"kubernetes.io/projected/f88968ca-e167-49a5-91ed-ee4f091c38bb-kube-api-access-pjs6b\") pod \"ovn-controller-5sjkh-config-v4kxw\" (UID: \"f88968ca-e167-49a5-91ed-ee4f091c38bb\") " pod="openstack/ovn-controller-5sjkh-config-v4kxw" Jan 23 17:08:53 crc kubenswrapper[4606]: I0123 17:08:53.417613 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f88968ca-e167-49a5-91ed-ee4f091c38bb-var-run-ovn\") pod \"ovn-controller-5sjkh-config-v4kxw\" (UID: \"f88968ca-e167-49a5-91ed-ee4f091c38bb\") " pod="openstack/ovn-controller-5sjkh-config-v4kxw" Jan 23 17:08:53 crc kubenswrapper[4606]: I0123 17:08:53.418274 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f88968ca-e167-49a5-91ed-ee4f091c38bb-var-log-ovn\") pod \"ovn-controller-5sjkh-config-v4kxw\" (UID: \"f88968ca-e167-49a5-91ed-ee4f091c38bb\") " pod="openstack/ovn-controller-5sjkh-config-v4kxw" Jan 23 17:08:53 crc kubenswrapper[4606]: I0123 17:08:53.421430 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f88968ca-e167-49a5-91ed-ee4f091c38bb-additional-scripts\") pod \"ovn-controller-5sjkh-config-v4kxw\" (UID: \"f88968ca-e167-49a5-91ed-ee4f091c38bb\") " pod="openstack/ovn-controller-5sjkh-config-v4kxw" Jan 23 17:08:53 crc kubenswrapper[4606]: I0123 17:08:53.422866 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f88968ca-e167-49a5-91ed-ee4f091c38bb-var-run\") pod \"ovn-controller-5sjkh-config-v4kxw\" (UID: \"f88968ca-e167-49a5-91ed-ee4f091c38bb\") " pod="openstack/ovn-controller-5sjkh-config-v4kxw" Jan 23 17:08:53 crc kubenswrapper[4606]: I0123 17:08:53.422951 4606 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f88968ca-e167-49a5-91ed-ee4f091c38bb-scripts\") pod \"ovn-controller-5sjkh-config-v4kxw\" (UID: \"f88968ca-e167-49a5-91ed-ee4f091c38bb\") " pod="openstack/ovn-controller-5sjkh-config-v4kxw" Jan 23 17:08:53 crc kubenswrapper[4606]: I0123 17:08:53.437608 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjs6b\" (UniqueName: \"kubernetes.io/projected/f88968ca-e167-49a5-91ed-ee4f091c38bb-kube-api-access-pjs6b\") pod \"ovn-controller-5sjkh-config-v4kxw\" (UID: \"f88968ca-e167-49a5-91ed-ee4f091c38bb\") " pod="openstack/ovn-controller-5sjkh-config-v4kxw" Jan 23 17:08:53 crc kubenswrapper[4606]: I0123 17:08:53.560968 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5sjkh-config-v4kxw" Jan 23 17:08:54 crc kubenswrapper[4606]: I0123 17:08:54.031565 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z5v24"] Jan 23 17:08:54 crc kubenswrapper[4606]: I0123 17:08:54.032145 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z5v24" podUID="87401ebd-0396-4ae2-9231-b2dbb28e308e" containerName="registry-server" containerID="cri-o://ace9ecaf7977622c67ccdc33d89672abc16e7b7b4b1bfaf608e6ca39d70fb9e3" gracePeriod=2 Jan 23 17:08:54 crc kubenswrapper[4606]: I0123 17:08:54.902610 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-w4n8c"] Jan 23 17:08:54 crc kubenswrapper[4606]: I0123 17:08:54.912177 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-w4n8c"] Jan 23 17:08:55 crc kubenswrapper[4606]: I0123 17:08:55.027436 4606 generic.go:334] "Generic (PLEG): container finished" podID="87401ebd-0396-4ae2-9231-b2dbb28e308e" containerID="ace9ecaf7977622c67ccdc33d89672abc16e7b7b4b1bfaf608e6ca39d70fb9e3" exitCode=0 Jan 23 17:08:55 crc kubenswrapper[4606]: I0123 17:08:55.027484 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z5v24" event={"ID":"87401ebd-0396-4ae2-9231-b2dbb28e308e","Type":"ContainerDied","Data":"ace9ecaf7977622c67ccdc33d89672abc16e7b7b4b1bfaf608e6ca39d70fb9e3"} Jan 23 17:08:55 crc kubenswrapper[4606]: I0123 17:08:55.560959 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/06f54b44-852a-4700-ac26-e9d89a545f28-etc-swift\") pod \"swift-storage-0\" (UID: \"06f54b44-852a-4700-ac26-e9d89a545f28\") " pod="openstack/swift-storage-0" Jan 23 17:08:55 crc kubenswrapper[4606]: I0123 17:08:55.577132 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/06f54b44-852a-4700-ac26-e9d89a545f28-etc-swift\") pod \"swift-storage-0\" (UID: \"06f54b44-852a-4700-ac26-e9d89a545f28\") " pod="openstack/swift-storage-0" Jan 23 17:08:55 crc kubenswrapper[4606]: I0123 17:08:55.805678 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 23 17:08:56 crc kubenswrapper[4606]: I0123 17:08:56.383912 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7c3dc1f-51b3-485e-8be3-35ce389029e9" path="/var/lib/kubelet/pods/e7c3dc1f-51b3-485e-8be3-35ce389029e9/volumes" Jan 23 17:08:56 crc kubenswrapper[4606]: I0123 17:08:56.640453 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-5sjkh" podUID="7063b991-5662-4335-9e9e-73f057367708" containerName="ovn-controller" probeResult="failure" output=< Jan 23 17:08:56 crc kubenswrapper[4606]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 23 17:08:56 crc kubenswrapper[4606]: > Jan 23 17:08:57 crc kubenswrapper[4606]: I0123 17:08:57.822229 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 23 17:08:57 crc kubenswrapper[4606]: I0123 17:08:57.973773 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.273316 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-72r74"] Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.274306 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-72r74" Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.320048 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-fa10-account-create-update-xmwcv"] Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.322105 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-fa10-account-create-update-xmwcv" Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.351841 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.407648 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-72r74"] Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.418900 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-fa10-account-create-update-xmwcv"] Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.438276 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/224fe191-a024-4e50-8a8a-8b46aef66847-operator-scripts\") pod \"barbican-fa10-account-create-update-xmwcv\" (UID: \"224fe191-a024-4e50-8a8a-8b46aef66847\") " pod="openstack/barbican-fa10-account-create-update-xmwcv" Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.438321 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e3ad36a-2eab-4985-beea-6483c3faae7d-operator-scripts\") pod \"barbican-db-create-72r74\" (UID: \"4e3ad36a-2eab-4985-beea-6483c3faae7d\") " pod="openstack/barbican-db-create-72r74" Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.438378 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5kxz\" (UniqueName: \"kubernetes.io/projected/4e3ad36a-2eab-4985-beea-6483c3faae7d-kube-api-access-f5kxz\") pod \"barbican-db-create-72r74\" (UID: \"4e3ad36a-2eab-4985-beea-6483c3faae7d\") " 
pod="openstack/barbican-db-create-72r74" Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.438464 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sccs9\" (UniqueName: \"kubernetes.io/projected/224fe191-a024-4e50-8a8a-8b46aef66847-kube-api-access-sccs9\") pod \"barbican-fa10-account-create-update-xmwcv\" (UID: \"224fe191-a024-4e50-8a8a-8b46aef66847\") " pod="openstack/barbican-fa10-account-create-update-xmwcv" Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.539516 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5kxz\" (UniqueName: \"kubernetes.io/projected/4e3ad36a-2eab-4985-beea-6483c3faae7d-kube-api-access-f5kxz\") pod \"barbican-db-create-72r74\" (UID: \"4e3ad36a-2eab-4985-beea-6483c3faae7d\") " pod="openstack/barbican-db-create-72r74" Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.539900 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sccs9\" (UniqueName: \"kubernetes.io/projected/224fe191-a024-4e50-8a8a-8b46aef66847-kube-api-access-sccs9\") pod \"barbican-fa10-account-create-update-xmwcv\" (UID: \"224fe191-a024-4e50-8a8a-8b46aef66847\") " pod="openstack/barbican-fa10-account-create-update-xmwcv" Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.540041 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/224fe191-a024-4e50-8a8a-8b46aef66847-operator-scripts\") pod \"barbican-fa10-account-create-update-xmwcv\" (UID: \"224fe191-a024-4e50-8a8a-8b46aef66847\") " pod="openstack/barbican-fa10-account-create-update-xmwcv" Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.540120 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e3ad36a-2eab-4985-beea-6483c3faae7d-operator-scripts\") pod \"barbican-db-create-72r74\" (UID: \"4e3ad36a-2eab-4985-beea-6483c3faae7d\") " pod="openstack/barbican-db-create-72r74" Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.541371 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e3ad36a-2eab-4985-beea-6483c3faae7d-operator-scripts\") pod \"barbican-db-create-72r74\" (UID: \"4e3ad36a-2eab-4985-beea-6483c3faae7d\") " pod="openstack/barbican-db-create-72r74" Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.543970 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/224fe191-a024-4e50-8a8a-8b46aef66847-operator-scripts\") pod \"barbican-fa10-account-create-update-xmwcv\" (UID: \"224fe191-a024-4e50-8a8a-8b46aef66847\") " pod="openstack/barbican-fa10-account-create-update-xmwcv" Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.556531 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-2hhhx"] Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.557753 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-2hhhx" Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.597316 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-2hhhx"] Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.614306 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5kxz\" (UniqueName: \"kubernetes.io/projected/4e3ad36a-2eab-4985-beea-6483c3faae7d-kube-api-access-f5kxz\") pod \"barbican-db-create-72r74\" (UID: \"4e3ad36a-2eab-4985-beea-6483c3faae7d\") " pod="openstack/barbican-db-create-72r74" Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.646498 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sccs9\" (UniqueName: \"kubernetes.io/projected/224fe191-a024-4e50-8a8a-8b46aef66847-kube-api-access-sccs9\") pod \"barbican-fa10-account-create-update-xmwcv\" (UID: \"224fe191-a024-4e50-8a8a-8b46aef66847\") " pod="openstack/barbican-fa10-account-create-update-xmwcv" Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.686920 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-a05b-account-create-update-s2mk9"] Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.688161 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-a05b-account-create-update-s2mk9" Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.692881 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.709420 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-a05b-account-create-update-s2mk9"] Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.743728 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qvmw\" (UniqueName: \"kubernetes.io/projected/414e99fb-9e21-431e-8296-e4f89723f459-kube-api-access-8qvmw\") pod \"cinder-db-create-2hhhx\" (UID: \"414e99fb-9e21-431e-8296-e4f89723f459\") " pod="openstack/cinder-db-create-2hhhx" Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.743800 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/414e99fb-9e21-431e-8296-e4f89723f459-operator-scripts\") pod \"cinder-db-create-2hhhx\" (UID: \"414e99fb-9e21-431e-8296-e4f89723f459\") " pod="openstack/cinder-db-create-2hhhx" Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.846444 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph9r9\" (UniqueName: \"kubernetes.io/projected/b67c3811-645f-4d09-b577-56a82395ec4c-kube-api-access-ph9r9\") pod \"cinder-a05b-account-create-update-s2mk9\" (UID: \"b67c3811-645f-4d09-b577-56a82395ec4c\") " pod="openstack/cinder-a05b-account-create-update-s2mk9" Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.846510 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b67c3811-645f-4d09-b577-56a82395ec4c-operator-scripts\") pod \"cinder-a05b-account-create-update-s2mk9\" (UID: \"b67c3811-645f-4d09-b577-56a82395ec4c\") " pod="openstack/cinder-a05b-account-create-update-s2mk9" Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.846548 4606 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-8qvmw\" (UniqueName: \"kubernetes.io/projected/414e99fb-9e21-431e-8296-e4f89723f459-kube-api-access-8qvmw\") pod \"cinder-db-create-2hhhx\" (UID: \"414e99fb-9e21-431e-8296-e4f89723f459\") " pod="openstack/cinder-db-create-2hhhx" Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.846609 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/414e99fb-9e21-431e-8296-e4f89723f459-operator-scripts\") pod \"cinder-db-create-2hhhx\" (UID: \"414e99fb-9e21-431e-8296-e4f89723f459\") " pod="openstack/cinder-db-create-2hhhx" Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.847464 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/414e99fb-9e21-431e-8296-e4f89723f459-operator-scripts\") pod \"cinder-db-create-2hhhx\" (UID: \"414e99fb-9e21-431e-8296-e4f89723f459\") " pod="openstack/cinder-db-create-2hhhx" Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.879268 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qvmw\" (UniqueName: \"kubernetes.io/projected/414e99fb-9e21-431e-8296-e4f89723f459-kube-api-access-8qvmw\") pod \"cinder-db-create-2hhhx\" (UID: \"414e99fb-9e21-431e-8296-e4f89723f459\") " pod="openstack/cinder-db-create-2hhhx" Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.915959 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-72r74" Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.927782 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-mp29k"] Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.928706 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mp29k" Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.937601 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-fa10-account-create-update-xmwcv" Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.947757 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b67c3811-645f-4d09-b577-56a82395ec4c-operator-scripts\") pod \"cinder-a05b-account-create-update-s2mk9\" (UID: \"b67c3811-645f-4d09-b577-56a82395ec4c\") " pod="openstack/cinder-a05b-account-create-update-s2mk9" Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.947930 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph9r9\" (UniqueName: \"kubernetes.io/projected/b67c3811-645f-4d09-b577-56a82395ec4c-kube-api-access-ph9r9\") pod \"cinder-a05b-account-create-update-s2mk9\" (UID: \"b67c3811-645f-4d09-b577-56a82395ec4c\") " pod="openstack/cinder-a05b-account-create-update-s2mk9" Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.948946 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b67c3811-645f-4d09-b577-56a82395ec4c-operator-scripts\") pod \"cinder-a05b-account-create-update-s2mk9\" (UID: \"b67c3811-645f-4d09-b577-56a82395ec4c\") " pod="openstack/cinder-a05b-account-create-update-s2mk9" Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.953653 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-qhnq4"] Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.955098 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-qhnq4" Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.958855 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-2hhhx" Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.964096 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.964306 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.964402 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 17:08:58 crc kubenswrapper[4606]: I0123 17:08:58.964519 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-cxp6x" Jan 23 17:08:59 crc kubenswrapper[4606]: I0123 17:08:58.999220 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-mp29k"] Jan 23 17:08:59 crc kubenswrapper[4606]: I0123 17:08:59.007249 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-qhnq4"] Jan 23 17:08:59 crc kubenswrapper[4606]: I0123 17:08:59.022320 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph9r9\" (UniqueName: \"kubernetes.io/projected/b67c3811-645f-4d09-b577-56a82395ec4c-kube-api-access-ph9r9\") pod \"cinder-a05b-account-create-update-s2mk9\" (UID: \"b67c3811-645f-4d09-b577-56a82395ec4c\") " pod="openstack/cinder-a05b-account-create-update-s2mk9" Jan 23 17:08:59 crc kubenswrapper[4606]: I0123 17:08:59.034426 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-a05b-account-create-update-s2mk9" Jan 23 17:08:59 crc kubenswrapper[4606]: I0123 17:08:59.050401 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhwcs\" (UniqueName: \"kubernetes.io/projected/21c1c7ff-8437-4c18-ab38-7b6908fbe35c-kube-api-access-hhwcs\") pod \"keystone-db-sync-qhnq4\" (UID: \"21c1c7ff-8437-4c18-ab38-7b6908fbe35c\") " pod="openstack/keystone-db-sync-qhnq4" Jan 23 17:08:59 crc kubenswrapper[4606]: I0123 17:08:59.050513 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6gc5\" (UniqueName: \"kubernetes.io/projected/f79e7e45-faf1-4f97-a135-ce69d2bb7529-kube-api-access-x6gc5\") pod \"neutron-db-create-mp29k\" (UID: \"f79e7e45-faf1-4f97-a135-ce69d2bb7529\") " pod="openstack/neutron-db-create-mp29k" Jan 23 17:08:59 crc kubenswrapper[4606]: I0123 17:08:59.050624 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21c1c7ff-8437-4c18-ab38-7b6908fbe35c-config-data\") pod \"keystone-db-sync-qhnq4\" (UID: \"21c1c7ff-8437-4c18-ab38-7b6908fbe35c\") " pod="openstack/keystone-db-sync-qhnq4" Jan 23 17:08:59 crc kubenswrapper[4606]: I0123 17:08:59.050664 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21c1c7ff-8437-4c18-ab38-7b6908fbe35c-combined-ca-bundle\") pod \"keystone-db-sync-qhnq4\" (UID: \"21c1c7ff-8437-4c18-ab38-7b6908fbe35c\") " pod="openstack/keystone-db-sync-qhnq4" Jan 23 17:08:59 crc kubenswrapper[4606]: I0123 17:08:59.050692 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f79e7e45-faf1-4f97-a135-ce69d2bb7529-operator-scripts\") pod \"neutron-db-create-mp29k\" (UID: \"f79e7e45-faf1-4f97-a135-ce69d2bb7529\") " pod="openstack/neutron-db-create-mp29k" Jan 23 17:08:59 crc kubenswrapper[4606]: I0123 17:08:59.124383 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-979e-account-create-update-xlx74"] Jan 23 17:08:59 crc kubenswrapper[4606]: I0123 17:08:59.125640 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-979e-account-create-update-xlx74" Jan 23 17:08:59 crc kubenswrapper[4606]: I0123 17:08:59.129748 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 23 17:08:59 crc kubenswrapper[4606]: I0123 17:08:59.135687 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-979e-account-create-update-xlx74"] Jan 23 17:08:59 crc kubenswrapper[4606]: I0123 17:08:59.157103 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21c1c7ff-8437-4c18-ab38-7b6908fbe35c-config-data\") pod \"keystone-db-sync-qhnq4\" (UID: \"21c1c7ff-8437-4c18-ab38-7b6908fbe35c\") " pod="openstack/keystone-db-sync-qhnq4" Jan 23 17:08:59 crc kubenswrapper[4606]: I0123 17:08:59.157157 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21c1c7ff-8437-4c18-ab38-7b6908fbe35c-combined-ca-bundle\") pod \"keystone-db-sync-qhnq4\" (UID: \"21c1c7ff-8437-4c18-ab38-7b6908fbe35c\") " pod="openstack/keystone-db-sync-qhnq4" Jan 23 17:08:59 crc kubenswrapper[4606]: I0123 17:08:59.157187 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f79e7e45-faf1-4f97-a135-ce69d2bb7529-operator-scripts\") pod \"neutron-db-create-mp29k\" (UID: \"f79e7e45-faf1-4f97-a135-ce69d2bb7529\") " pod="openstack/neutron-db-create-mp29k" Jan 23 17:08:59 crc kubenswrapper[4606]: I0123 17:08:59.157331 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhwcs\" (UniqueName: \"kubernetes.io/projected/21c1c7ff-8437-4c18-ab38-7b6908fbe35c-kube-api-access-hhwcs\") pod \"keystone-db-sync-qhnq4\" (UID: \"21c1c7ff-8437-4c18-ab38-7b6908fbe35c\") " pod="openstack/keystone-db-sync-qhnq4" Jan 23 17:08:59 crc kubenswrapper[4606]: I0123 17:08:59.157424 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6gc5\" (UniqueName: \"kubernetes.io/projected/f79e7e45-faf1-4f97-a135-ce69d2bb7529-kube-api-access-x6gc5\") pod \"neutron-db-create-mp29k\" (UID: \"f79e7e45-faf1-4f97-a135-ce69d2bb7529\") " pod="openstack/neutron-db-create-mp29k" Jan 23 17:08:59 crc kubenswrapper[4606]: I0123 17:08:59.161782 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f79e7e45-faf1-4f97-a135-ce69d2bb7529-operator-scripts\") pod \"neutron-db-create-mp29k\" (UID: \"f79e7e45-faf1-4f97-a135-ce69d2bb7529\") " pod="openstack/neutron-db-create-mp29k" Jan 23 17:08:59 crc kubenswrapper[4606]: I0123 17:08:59.184930 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21c1c7ff-8437-4c18-ab38-7b6908fbe35c-combined-ca-bundle\") pod \"keystone-db-sync-qhnq4\" (UID: \"21c1c7ff-8437-4c18-ab38-7b6908fbe35c\") " pod="openstack/keystone-db-sync-qhnq4" Jan 23 17:08:59 crc kubenswrapper[4606]: I0123 17:08:59.188362 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21c1c7ff-8437-4c18-ab38-7b6908fbe35c-config-data\") pod \"keystone-db-sync-qhnq4\" (UID: \"21c1c7ff-8437-4c18-ab38-7b6908fbe35c\") " pod="openstack/keystone-db-sync-qhnq4" Jan 23 17:08:59 crc kubenswrapper[4606]: I0123 17:08:59.194494 4606 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hhwcs\" (UniqueName: \"kubernetes.io/projected/21c1c7ff-8437-4c18-ab38-7b6908fbe35c-kube-api-access-hhwcs\") pod \"keystone-db-sync-qhnq4\" (UID: \"21c1c7ff-8437-4c18-ab38-7b6908fbe35c\") " pod="openstack/keystone-db-sync-qhnq4" Jan 23 17:08:59 crc kubenswrapper[4606]: I0123 17:08:59.202075 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6gc5\" (UniqueName: \"kubernetes.io/projected/f79e7e45-faf1-4f97-a135-ce69d2bb7529-kube-api-access-x6gc5\") pod \"neutron-db-create-mp29k\" (UID: \"f79e7e45-faf1-4f97-a135-ce69d2bb7529\") " pod="openstack/neutron-db-create-mp29k" Jan 23 17:08:59 crc kubenswrapper[4606]: I0123 17:08:59.258751 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqfw7\" (UniqueName: \"kubernetes.io/projected/c6039461-4f70-4ddd-992f-778a1e7e6fb8-kube-api-access-nqfw7\") pod \"neutron-979e-account-create-update-xlx74\" (UID: \"c6039461-4f70-4ddd-992f-778a1e7e6fb8\") " pod="openstack/neutron-979e-account-create-update-xlx74" Jan 23 17:08:59 crc kubenswrapper[4606]: I0123 17:08:59.258788 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mp29k" Jan 23 17:08:59 crc kubenswrapper[4606]: I0123 17:08:59.259032 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6039461-4f70-4ddd-992f-778a1e7e6fb8-operator-scripts\") pod \"neutron-979e-account-create-update-xlx74\" (UID: \"c6039461-4f70-4ddd-992f-778a1e7e6fb8\") " pod="openstack/neutron-979e-account-create-update-xlx74" Jan 23 17:08:59 crc kubenswrapper[4606]: I0123 17:08:59.291603 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-qhnq4" Jan 23 17:08:59 crc kubenswrapper[4606]: I0123 17:08:59.360067 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6039461-4f70-4ddd-992f-778a1e7e6fb8-operator-scripts\") pod \"neutron-979e-account-create-update-xlx74\" (UID: \"c6039461-4f70-4ddd-992f-778a1e7e6fb8\") " pod="openstack/neutron-979e-account-create-update-xlx74" Jan 23 17:08:59 crc kubenswrapper[4606]: I0123 17:08:59.360148 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqfw7\" (UniqueName: \"kubernetes.io/projected/c6039461-4f70-4ddd-992f-778a1e7e6fb8-kube-api-access-nqfw7\") pod \"neutron-979e-account-create-update-xlx74\" (UID: \"c6039461-4f70-4ddd-992f-778a1e7e6fb8\") " pod="openstack/neutron-979e-account-create-update-xlx74" Jan 23 17:08:59 crc kubenswrapper[4606]: I0123 17:08:59.363122 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6039461-4f70-4ddd-992f-778a1e7e6fb8-operator-scripts\") pod \"neutron-979e-account-create-update-xlx74\" (UID: \"c6039461-4f70-4ddd-992f-778a1e7e6fb8\") " pod="openstack/neutron-979e-account-create-update-xlx74" Jan 23 17:08:59 crc kubenswrapper[4606]: I0123 17:08:59.388918 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqfw7\" (UniqueName: \"kubernetes.io/projected/c6039461-4f70-4ddd-992f-778a1e7e6fb8-kube-api-access-nqfw7\") pod \"neutron-979e-account-create-update-xlx74\" (UID: \"c6039461-4f70-4ddd-992f-778a1e7e6fb8\") " pod="openstack/neutron-979e-account-create-update-xlx74" Jan 23 17:08:59 crc kubenswrapper[4606]: I0123 17:08:59.461360 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-979e-account-create-update-xlx74" Jan 23 17:08:59 crc kubenswrapper[4606]: I0123 17:08:59.951336 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-zhbb7"] Jan 23 17:08:59 crc kubenswrapper[4606]: I0123 17:08:59.952796 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zhbb7" Jan 23 17:08:59 crc kubenswrapper[4606]: I0123 17:08:59.967435 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 23 17:08:59 crc kubenswrapper[4606]: I0123 17:08:59.977021 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zhbb7"] Jan 23 17:09:00 crc kubenswrapper[4606]: I0123 17:09:00.070365 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccfds\" (UniqueName: \"kubernetes.io/projected/d613fd23-da43-482f-945b-e72c58d9b691-kube-api-access-ccfds\") pod \"root-account-create-update-zhbb7\" (UID: \"d613fd23-da43-482f-945b-e72c58d9b691\") " pod="openstack/root-account-create-update-zhbb7" Jan 23 17:09:00 crc kubenswrapper[4606]: I0123 17:09:00.070426 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d613fd23-da43-482f-945b-e72c58d9b691-operator-scripts\") pod \"root-account-create-update-zhbb7\" (UID: \"d613fd23-da43-482f-945b-e72c58d9b691\") " pod="openstack/root-account-create-update-zhbb7" Jan 23 17:09:00 crc kubenswrapper[4606]: I0123 17:09:00.173958 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccfds\" (UniqueName: \"kubernetes.io/projected/d613fd23-da43-482f-945b-e72c58d9b691-kube-api-access-ccfds\") pod \"root-account-create-update-zhbb7\" (UID: \"d613fd23-da43-482f-945b-e72c58d9b691\") " pod="openstack/root-account-create-update-zhbb7" Jan 23 17:09:00 crc kubenswrapper[4606]: I0123 17:09:00.174036 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d613fd23-da43-482f-945b-e72c58d9b691-operator-scripts\") pod \"root-account-create-update-zhbb7\" (UID: \"d613fd23-da43-482f-945b-e72c58d9b691\") " pod="openstack/root-account-create-update-zhbb7" Jan 23 17:09:00 crc kubenswrapper[4606]: I0123 17:09:00.174985 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d613fd23-da43-482f-945b-e72c58d9b691-operator-scripts\") pod \"root-account-create-update-zhbb7\" (UID: \"d613fd23-da43-482f-945b-e72c58d9b691\") " pod="openstack/root-account-create-update-zhbb7" Jan 23 17:09:00 crc kubenswrapper[4606]: I0123 17:09:00.204479 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccfds\" (UniqueName: \"kubernetes.io/projected/d613fd23-da43-482f-945b-e72c58d9b691-kube-api-access-ccfds\") pod \"root-account-create-update-zhbb7\" (UID: \"d613fd23-da43-482f-945b-e72c58d9b691\") " pod="openstack/root-account-create-update-zhbb7" Jan 23 17:09:00 crc kubenswrapper[4606]: I0123 17:09:00.267039 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zhbb7" Jan 23 17:09:01 crc kubenswrapper[4606]: I0123 17:09:01.707088 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-5sjkh" podUID="7063b991-5662-4335-9e9e-73f057367708" containerName="ovn-controller" probeResult="failure" output=< Jan 23 17:09:01 crc kubenswrapper[4606]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 23 17:09:01 crc kubenswrapper[4606]: > Jan 23 17:09:01 crc kubenswrapper[4606]: I0123 17:09:01.799142 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-r4l24" Jan 23 17:09:01 crc kubenswrapper[4606]: I0123 17:09:01.918294 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-r4l24" Jan 23 17:09:01 crc kubenswrapper[4606]: E0123 17:09:01.987826 4606 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ace9ecaf7977622c67ccdc33d89672abc16e7b7b4b1bfaf608e6ca39d70fb9e3 is running failed: container process not found" containerID="ace9ecaf7977622c67ccdc33d89672abc16e7b7b4b1bfaf608e6ca39d70fb9e3" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 17:09:01 crc kubenswrapper[4606]: E0123 17:09:01.988610 4606 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ace9ecaf7977622c67ccdc33d89672abc16e7b7b4b1bfaf608e6ca39d70fb9e3 is running failed: container process not found" containerID="ace9ecaf7977622c67ccdc33d89672abc16e7b7b4b1bfaf608e6ca39d70fb9e3" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 17:09:01 crc kubenswrapper[4606]: E0123 17:09:01.989084 4606 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ace9ecaf7977622c67ccdc33d89672abc16e7b7b4b1bfaf608e6ca39d70fb9e3 is running failed: container process not found" containerID="ace9ecaf7977622c67ccdc33d89672abc16e7b7b4b1bfaf608e6ca39d70fb9e3" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 17:09:01 crc kubenswrapper[4606]: E0123 17:09:01.989178 4606 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ace9ecaf7977622c67ccdc33d89672abc16e7b7b4b1bfaf608e6ca39d70fb9e3 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-z5v24" podUID="87401ebd-0396-4ae2-9231-b2dbb28e308e" containerName="registry-server" Jan 23 17:09:02 crc kubenswrapper[4606]: E0123 17:09:02.801649 4606 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Jan 23 17:09:02 crc kubenswrapper[4606]: E0123 17:09:02.801907 4606 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wg228,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-5vzbb_openstack(5b1b5e49-bdba-4463-b025-fc30bedd63f0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 17:09:02 crc kubenswrapper[4606]: E0123 17:09:02.803311 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-5vzbb" podUID="5b1b5e49-bdba-4463-b025-fc30bedd63f0" Jan 23 17:09:02 crc kubenswrapper[4606]: I0123 17:09:02.885988 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r4l24"] Jan 23 17:09:03 crc kubenswrapper[4606]: I0123 17:09:03.154777 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-5h5kn" event={"ID":"f26de21c-1da5-42d4-9dcd-0e6415109f94","Type":"ContainerDied","Data":"bc67454d5e888972dc252cfbc8299046125fa1f525a98c7c21ff40a1c5cb6c8f"} Jan 23 17:09:03 crc kubenswrapper[4606]: I0123 17:09:03.154996 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc67454d5e888972dc252cfbc8299046125fa1f525a98c7c21ff40a1c5cb6c8f" Jan 23 17:09:03 crc kubenswrapper[4606]: I0123 17:09:03.155128 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-r4l24" podUID="bb011d49-3a61-484d-8407-5e31513a3fea" containerName="registry-server" containerID="cri-o://9b16dad234518e6801d36320cc85c951c0d38d0fd83855d2de4e5bea03eec0cf" gracePeriod=2 Jan 23 17:09:03 crc kubenswrapper[4606]: E0123 17:09:03.155668 4606 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-5vzbb" podUID="5b1b5e49-bdba-4463-b025-fc30bedd63f0" Jan 23 17:09:03 crc kubenswrapper[4606]: I0123 17:09:03.188048 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-5h5kn" Jan 23 17:09:03 crc kubenswrapper[4606]: I0123 17:09:03.318047 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z5v24" Jan 23 17:09:03 crc kubenswrapper[4606]: I0123 17:09:03.341421 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26de21c-1da5-42d4-9dcd-0e6415109f94-combined-ca-bundle\") pod \"f26de21c-1da5-42d4-9dcd-0e6415109f94\" (UID: \"f26de21c-1da5-42d4-9dcd-0e6415109f94\") " Jan 23 17:09:03 crc kubenswrapper[4606]: I0123 17:09:03.342901 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gclkv\" (UniqueName: \"kubernetes.io/projected/f26de21c-1da5-42d4-9dcd-0e6415109f94-kube-api-access-gclkv\") pod \"f26de21c-1da5-42d4-9dcd-0e6415109f94\" (UID: \"f26de21c-1da5-42d4-9dcd-0e6415109f94\") " Jan 23 17:09:03 crc kubenswrapper[4606]: I0123 17:09:03.343113 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f26de21c-1da5-42d4-9dcd-0e6415109f94-scripts\") pod \"f26de21c-1da5-42d4-9dcd-0e6415109f94\" (UID: \"f26de21c-1da5-42d4-9dcd-0e6415109f94\") " Jan 23 17:09:03 crc kubenswrapper[4606]: I0123 17:09:03.343324 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f26de21c-1da5-42d4-9dcd-0e6415109f94-etc-swift\") pod \"f26de21c-1da5-42d4-9dcd-0e6415109f94\" (UID: \"f26de21c-1da5-42d4-9dcd-0e6415109f94\") " Jan 23 17:09:03 crc kubenswrapper[4606]: I0123 17:09:03.343483 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f26de21c-1da5-42d4-9dcd-0e6415109f94-ring-data-devices\") pod \"f26de21c-1da5-42d4-9dcd-0e6415109f94\" (UID: \"f26de21c-1da5-42d4-9dcd-0e6415109f94\") " Jan 23 17:09:03 crc kubenswrapper[4606]: I0123 17:09:03.343680 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f26de21c-1da5-42d4-9dcd-0e6415109f94-swiftconf\") pod \"f26de21c-1da5-42d4-9dcd-0e6415109f94\" (UID: \"f26de21c-1da5-42d4-9dcd-0e6415109f94\") " Jan 23 17:09:03 crc kubenswrapper[4606]: I0123 17:09:03.344997 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f26de21c-1da5-42d4-9dcd-0e6415109f94-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "f26de21c-1da5-42d4-9dcd-0e6415109f94" (UID: "f26de21c-1da5-42d4-9dcd-0e6415109f94"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:09:03 crc kubenswrapper[4606]: I0123 17:09:03.345283 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f26de21c-1da5-42d4-9dcd-0e6415109f94-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "f26de21c-1da5-42d4-9dcd-0e6415109f94" (UID: "f26de21c-1da5-42d4-9dcd-0e6415109f94"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:09:03 crc kubenswrapper[4606]: I0123 17:09:03.345748 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f26de21c-1da5-42d4-9dcd-0e6415109f94-dispersionconf\") pod \"f26de21c-1da5-42d4-9dcd-0e6415109f94\" (UID: \"f26de21c-1da5-42d4-9dcd-0e6415109f94\") " Jan 23 17:09:03 crc kubenswrapper[4606]: I0123 17:09:03.353241 4606 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f26de21c-1da5-42d4-9dcd-0e6415109f94-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:03 crc kubenswrapper[4606]: I0123 17:09:03.353565 4606 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f26de21c-1da5-42d4-9dcd-0e6415109f94-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:03 crc kubenswrapper[4606]: I0123 17:09:03.367333 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f26de21c-1da5-42d4-9dcd-0e6415109f94-kube-api-access-gclkv" (OuterVolumeSpecName: "kube-api-access-gclkv") pod "f26de21c-1da5-42d4-9dcd-0e6415109f94" (UID: "f26de21c-1da5-42d4-9dcd-0e6415109f94"). InnerVolumeSpecName "kube-api-access-gclkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:09:03 crc kubenswrapper[4606]: I0123 17:09:03.395916 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f26de21c-1da5-42d4-9dcd-0e6415109f94-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "f26de21c-1da5-42d4-9dcd-0e6415109f94" (UID: "f26de21c-1da5-42d4-9dcd-0e6415109f94"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:09:03 crc kubenswrapper[4606]: I0123 17:09:03.397334 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f26de21c-1da5-42d4-9dcd-0e6415109f94-scripts" (OuterVolumeSpecName: "scripts") pod "f26de21c-1da5-42d4-9dcd-0e6415109f94" (UID: "f26de21c-1da5-42d4-9dcd-0e6415109f94"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:09:03 crc kubenswrapper[4606]: I0123 17:09:03.399550 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f26de21c-1da5-42d4-9dcd-0e6415109f94-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f26de21c-1da5-42d4-9dcd-0e6415109f94" (UID: "f26de21c-1da5-42d4-9dcd-0e6415109f94"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:09:03 crc kubenswrapper[4606]: I0123 17:09:03.412093 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f26de21c-1da5-42d4-9dcd-0e6415109f94-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "f26de21c-1da5-42d4-9dcd-0e6415109f94" (UID: "f26de21c-1da5-42d4-9dcd-0e6415109f94"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:09:03 crc kubenswrapper[4606]: I0123 17:09:03.454488 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87401ebd-0396-4ae2-9231-b2dbb28e308e-utilities\") pod \"87401ebd-0396-4ae2-9231-b2dbb28e308e\" (UID: \"87401ebd-0396-4ae2-9231-b2dbb28e308e\") " Jan 23 17:09:03 crc kubenswrapper[4606]: I0123 17:09:03.454602 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87401ebd-0396-4ae2-9231-b2dbb28e308e-catalog-content\") pod \"87401ebd-0396-4ae2-9231-b2dbb28e308e\" (UID: \"87401ebd-0396-4ae2-9231-b2dbb28e308e\") " Jan 23 17:09:03 crc kubenswrapper[4606]: I0123 17:09:03.454765 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnj2q\" (UniqueName: \"kubernetes.io/projected/87401ebd-0396-4ae2-9231-b2dbb28e308e-kube-api-access-mnj2q\") pod \"87401ebd-0396-4ae2-9231-b2dbb28e308e\" (UID: \"87401ebd-0396-4ae2-9231-b2dbb28e308e\") " Jan 23 17:09:03 crc kubenswrapper[4606]: I0123 17:09:03.455277 4606 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f26de21c-1da5-42d4-9dcd-0e6415109f94-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:03 crc kubenswrapper[4606]: I0123 17:09:03.455294 4606 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f26de21c-1da5-42d4-9dcd-0e6415109f94-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:03 crc kubenswrapper[4606]: I0123 17:09:03.455308 4606 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26de21c-1da5-42d4-9dcd-0e6415109f94-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:03 crc kubenswrapper[4606]: I0123 17:09:03.455320 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gclkv\" (UniqueName: \"kubernetes.io/projected/f26de21c-1da5-42d4-9dcd-0e6415109f94-kube-api-access-gclkv\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:03 crc kubenswrapper[4606]: I0123 17:09:03.455332 4606 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f26de21c-1da5-42d4-9dcd-0e6415109f94-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:03 crc kubenswrapper[4606]: I0123 17:09:03.457441 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87401ebd-0396-4ae2-9231-b2dbb28e308e-utilities" (OuterVolumeSpecName: "utilities") pod "87401ebd-0396-4ae2-9231-b2dbb28e308e" (UID: "87401ebd-0396-4ae2-9231-b2dbb28e308e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:09:03 crc kubenswrapper[4606]: I0123 17:09:03.460528 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87401ebd-0396-4ae2-9231-b2dbb28e308e-kube-api-access-mnj2q" (OuterVolumeSpecName: "kube-api-access-mnj2q") pod "87401ebd-0396-4ae2-9231-b2dbb28e308e" (UID: "87401ebd-0396-4ae2-9231-b2dbb28e308e"). InnerVolumeSpecName "kube-api-access-mnj2q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:09:03 crc kubenswrapper[4606]: I0123 17:09:03.547517 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87401ebd-0396-4ae2-9231-b2dbb28e308e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87401ebd-0396-4ae2-9231-b2dbb28e308e" (UID: "87401ebd-0396-4ae2-9231-b2dbb28e308e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:09:03 crc kubenswrapper[4606]: I0123 17:09:03.560058 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnj2q\" (UniqueName: \"kubernetes.io/projected/87401ebd-0396-4ae2-9231-b2dbb28e308e-kube-api-access-mnj2q\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:03 crc kubenswrapper[4606]: I0123 17:09:03.560098 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87401ebd-0396-4ae2-9231-b2dbb28e308e-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:03 crc kubenswrapper[4606]: I0123 17:09:03.560116 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87401ebd-0396-4ae2-9231-b2dbb28e308e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:03 crc kubenswrapper[4606]: I0123 17:09:03.830081 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-mp29k"] Jan 23 17:09:03 crc kubenswrapper[4606]: I0123 17:09:03.914434 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-qhnq4"] Jan 23 17:09:03 crc kubenswrapper[4606]: I0123 17:09:03.947801 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-a05b-account-create-update-s2mk9"] Jan 23 17:09:03 crc kubenswrapper[4606]: W0123 17:09:03.947799 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb67c3811_645f_4d09_b577_56a82395ec4c.slice/crio-96326b7861378800fe17d13973ecbc32d37a7a6bef1937b0d3ba5554767bf326 WatchSource:0}: Error finding container 96326b7861378800fe17d13973ecbc32d37a7a6bef1937b0d3ba5554767bf326: Status 404 returned error can't find the container with id 96326b7861378800fe17d13973ecbc32d37a7a6bef1937b0d3ba5554767bf326 Jan 23 17:09:03 crc kubenswrapper[4606]: I0123 17:09:03.957564 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5sjkh-config-v4kxw"] Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.140929 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r4l24" Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.180323 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qhnq4" event={"ID":"21c1c7ff-8437-4c18-ab38-7b6908fbe35c","Type":"ContainerStarted","Data":"77b27e905f3761d3ccf505e5516bfe2fd040b0682339e2954100b41ed06ecc7e"} Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.181680 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mp29k" event={"ID":"f79e7e45-faf1-4f97-a135-ce69d2bb7529","Type":"ContainerStarted","Data":"10d1c06d413dd8152210e7712d1f6b9d4232263a1f5bf1b5ac82f6214025e63b"} Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.182870 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5sjkh-config-v4kxw" event={"ID":"f88968ca-e167-49a5-91ed-ee4f091c38bb","Type":"ContainerStarted","Data":"765d8f060d5796a01fcdfd259633d779976facefa22b61e92bfd40e2e330dc91"} Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.185763 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a05b-account-create-update-s2mk9" event={"ID":"b67c3811-645f-4d09-b577-56a82395ec4c","Type":"ContainerStarted","Data":"96326b7861378800fe17d13973ecbc32d37a7a6bef1937b0d3ba5554767bf326"} Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.190906 4606 generic.go:334] "Generic (PLEG): container finished" podID="bb011d49-3a61-484d-8407-5e31513a3fea" containerID="9b16dad234518e6801d36320cc85c951c0d38d0fd83855d2de4e5bea03eec0cf" exitCode=0 Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.190964 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4l24" event={"ID":"bb011d49-3a61-484d-8407-5e31513a3fea","Type":"ContainerDied","Data":"9b16dad234518e6801d36320cc85c951c0d38d0fd83855d2de4e5bea03eec0cf"} Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.190989 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4l24" event={"ID":"bb011d49-3a61-484d-8407-5e31513a3fea","Type":"ContainerDied","Data":"672dd5ab7793df9d532bae001bf4585deacf73ef0494a1757f8a04638b50f537"} Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.191004 4606 scope.go:117] "RemoveContainer" containerID="9b16dad234518e6801d36320cc85c951c0d38d0fd83855d2de4e5bea03eec0cf" Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.191124 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r4l24" Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.195788 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-5h5kn" Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.199225 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z5v24" Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.199241 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z5v24" event={"ID":"87401ebd-0396-4ae2-9231-b2dbb28e308e","Type":"ContainerDied","Data":"63923ac91c7ce34130e726f6f7045ba25bd609d73232d813db91db8e07f72244"} Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.216313 4606 scope.go:117] "RemoveContainer" containerID="a27f8a174d97db173f73a067cfa9a99f3b646848c497e48ee059391b0deb60f7" Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.251086 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z5v24"] Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.259159 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z5v24"] Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.284276 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2srb2\" (UniqueName: \"kubernetes.io/projected/bb011d49-3a61-484d-8407-5e31513a3fea-kube-api-access-2srb2\") pod \"bb011d49-3a61-484d-8407-5e31513a3fea\" (UID: \"bb011d49-3a61-484d-8407-5e31513a3fea\") " Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.284395 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb011d49-3a61-484d-8407-5e31513a3fea-utilities\") pod \"bb011d49-3a61-484d-8407-5e31513a3fea\" (UID: \"bb011d49-3a61-484d-8407-5e31513a3fea\") " Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.284613 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb011d49-3a61-484d-8407-5e31513a3fea-catalog-content\") pod \"bb011d49-3a61-484d-8407-5e31513a3fea\" (UID: \"bb011d49-3a61-484d-8407-5e31513a3fea\") " Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.287957 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb011d49-3a61-484d-8407-5e31513a3fea-utilities" (OuterVolumeSpecName: "utilities") pod "bb011d49-3a61-484d-8407-5e31513a3fea" (UID: "bb011d49-3a61-484d-8407-5e31513a3fea"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.288533 4606 scope.go:117] "RemoveContainer" containerID="1bcc8f793115f731b5472b41b812d22d554286c0624128e5714d45c8fe53e6dd" Jan 23 17:09:04 crc kubenswrapper[4606]: W0123 17:09:04.317309 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6039461_4f70_4ddd_992f_778a1e7e6fb8.slice/crio-3536d90ebb389a6eb07fa0924b1a0bbb6c5f76dadbc383cf6b3974603c0a9ac2 WatchSource:0}: Error finding container 3536d90ebb389a6eb07fa0924b1a0bbb6c5f76dadbc383cf6b3974603c0a9ac2: Status 404 returned error can't find the container with id 3536d90ebb389a6eb07fa0924b1a0bbb6c5f76dadbc383cf6b3974603c0a9ac2 Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.331799 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-979e-account-create-update-xlx74"] Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.358140 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb011d49-3a61-484d-8407-5e31513a3fea-kube-api-access-2srb2" (OuterVolumeSpecName: "kube-api-access-2srb2") pod "bb011d49-3a61-484d-8407-5e31513a3fea" (UID: "bb011d49-3a61-484d-8407-5e31513a3fea"). InnerVolumeSpecName "kube-api-access-2srb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.360986 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-2hhhx"] Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.386916 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2srb2\" (UniqueName: \"kubernetes.io/projected/bb011d49-3a61-484d-8407-5e31513a3fea-kube-api-access-2srb2\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.387243 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb011d49-3a61-484d-8407-5e31513a3fea-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.395212 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87401ebd-0396-4ae2-9231-b2dbb28e308e" path="/var/lib/kubelet/pods/87401ebd-0396-4ae2-9231-b2dbb28e308e/volumes" Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.399779 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zhbb7"] Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.399819 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-72r74"] Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.401609 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb011d49-3a61-484d-8407-5e31513a3fea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb011d49-3a61-484d-8407-5e31513a3fea" (UID: "bb011d49-3a61-484d-8407-5e31513a3fea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.403109 4606 scope.go:117] "RemoveContainer" containerID="9b16dad234518e6801d36320cc85c951c0d38d0fd83855d2de4e5bea03eec0cf" Jan 23 17:09:04 crc kubenswrapper[4606]: E0123 17:09:04.406489 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b16dad234518e6801d36320cc85c951c0d38d0fd83855d2de4e5bea03eec0cf\": container with ID starting with 9b16dad234518e6801d36320cc85c951c0d38d0fd83855d2de4e5bea03eec0cf not found: ID does not exist" containerID="9b16dad234518e6801d36320cc85c951c0d38d0fd83855d2de4e5bea03eec0cf" Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.406635 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b16dad234518e6801d36320cc85c951c0d38d0fd83855d2de4e5bea03eec0cf"} err="failed to get container status \"9b16dad234518e6801d36320cc85c951c0d38d0fd83855d2de4e5bea03eec0cf\": rpc error: code = NotFound desc = could not find container \"9b16dad234518e6801d36320cc85c951c0d38d0fd83855d2de4e5bea03eec0cf\": container with ID starting with 9b16dad234518e6801d36320cc85c951c0d38d0fd83855d2de4e5bea03eec0cf not found: ID does not exist" Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.406729 4606 scope.go:117] "RemoveContainer" containerID="a27f8a174d97db173f73a067cfa9a99f3b646848c497e48ee059391b0deb60f7" Jan 23 17:09:04 crc kubenswrapper[4606]: E0123 17:09:04.407139 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a27f8a174d97db173f73a067cfa9a99f3b646848c497e48ee059391b0deb60f7\": container with ID starting with a27f8a174d97db173f73a067cfa9a99f3b646848c497e48ee059391b0deb60f7 not found: ID does not exist" containerID="a27f8a174d97db173f73a067cfa9a99f3b646848c497e48ee059391b0deb60f7" Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.407171 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a27f8a174d97db173f73a067cfa9a99f3b646848c497e48ee059391b0deb60f7"} err="failed to get container status \"a27f8a174d97db173f73a067cfa9a99f3b646848c497e48ee059391b0deb60f7\": rpc error: code = NotFound desc = could not find container \"a27f8a174d97db173f73a067cfa9a99f3b646848c497e48ee059391b0deb60f7\": container with ID starting with a27f8a174d97db173f73a067cfa9a99f3b646848c497e48ee059391b0deb60f7 not found: ID does not exist" Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.407189 4606 scope.go:117] "RemoveContainer" containerID="1bcc8f793115f731b5472b41b812d22d554286c0624128e5714d45c8fe53e6dd" Jan 23 17:09:04 crc kubenswrapper[4606]: E0123 17:09:04.407480 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bcc8f793115f731b5472b41b812d22d554286c0624128e5714d45c8fe53e6dd\": container with ID starting with 1bcc8f793115f731b5472b41b812d22d554286c0624128e5714d45c8fe53e6dd not found: ID does not exist" containerID="1bcc8f793115f731b5472b41b812d22d554286c0624128e5714d45c8fe53e6dd" Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.407565 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bcc8f793115f731b5472b41b812d22d554286c0624128e5714d45c8fe53e6dd"} err="failed to get container status \"1bcc8f793115f731b5472b41b812d22d554286c0624128e5714d45c8fe53e6dd\": rpc error: code = NotFound desc = could not 
find container \"1bcc8f793115f731b5472b41b812d22d554286c0624128e5714d45c8fe53e6dd\": container with ID starting with 1bcc8f793115f731b5472b41b812d22d554286c0624128e5714d45c8fe53e6dd not found: ID does not exist" Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.407697 4606 scope.go:117] "RemoveContainer" containerID="ace9ecaf7977622c67ccdc33d89672abc16e7b7b4b1bfaf608e6ca39d70fb9e3" Jan 23 17:09:04 crc kubenswrapper[4606]: W0123 17:09:04.417317 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod224fe191_a024_4e50_8a8a_8b46aef66847.slice/crio-86d63d673c66450432226d806446d90e09d33969562d98815203959b2051080f WatchSource:0}: Error finding container 86d63d673c66450432226d806446d90e09d33969562d98815203959b2051080f: Status 404 returned error can't find the container with id 86d63d673c66450432226d806446d90e09d33969562d98815203959b2051080f Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.424441 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-fa10-account-create-update-xmwcv"] Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.444481 4606 scope.go:117] "RemoveContainer" containerID="c4c1c37d6c5caff466e784866d9f43e3b95a48ec083ddc2d0e910d42b3c9f9b8" Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.495815 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb011d49-3a61-484d-8407-5e31513a3fea-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.511356 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.513904 4606 scope.go:117] "RemoveContainer" containerID="c7714d8c96081fabb3e6bd24ca88749febf5c2777e2b5c604fc2715501708804" Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.554926 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r4l24"] Jan 23 17:09:04 crc kubenswrapper[4606]: I0123 17:09:04.570832 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-r4l24"] Jan 23 17:09:05 crc kubenswrapper[4606]: I0123 17:09:05.208219 4606 generic.go:334] "Generic (PLEG): container finished" podID="d613fd23-da43-482f-945b-e72c58d9b691" containerID="7b6728a5d45196df7500a0fec2d6ea8cb55ab4fc05dfcb2a8369dfdde6af8743" exitCode=0 Jan 23 17:09:05 crc kubenswrapper[4606]: I0123 17:09:05.208271 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zhbb7" event={"ID":"d613fd23-da43-482f-945b-e72c58d9b691","Type":"ContainerDied","Data":"7b6728a5d45196df7500a0fec2d6ea8cb55ab4fc05dfcb2a8369dfdde6af8743"} Jan 23 17:09:05 crc kubenswrapper[4606]: I0123 17:09:05.208542 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zhbb7" event={"ID":"d613fd23-da43-482f-945b-e72c58d9b691","Type":"ContainerStarted","Data":"291eddcb13565e4994e17ca5292684668f8185a0926b18b4222ad7c55fbbd3c4"} Jan 23 17:09:05 crc kubenswrapper[4606]: I0123 17:09:05.216144 4606 generic.go:334] "Generic (PLEG): container finished" podID="b67c3811-645f-4d09-b577-56a82395ec4c" containerID="bf4537c8d80fc80349cb42fba3a80597830cf9272e8d6ccad67ac6585ff13c8e" exitCode=0 Jan 23 17:09:05 crc kubenswrapper[4606]: I0123 17:09:05.216220 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a05b-account-create-update-s2mk9" 
event={"ID":"b67c3811-645f-4d09-b577-56a82395ec4c","Type":"ContainerDied","Data":"bf4537c8d80fc80349cb42fba3a80597830cf9272e8d6ccad67ac6585ff13c8e"} Jan 23 17:09:05 crc kubenswrapper[4606]: I0123 17:09:05.217685 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"06f54b44-852a-4700-ac26-e9d89a545f28","Type":"ContainerStarted","Data":"0de2f5d0d9d8f5bc5a6fc363c6a7018c789d9a6bba05e6b841fb8381d4bbffb8"} Jan 23 17:09:05 crc kubenswrapper[4606]: I0123 17:09:05.219534 4606 generic.go:334] "Generic (PLEG): container finished" podID="c6039461-4f70-4ddd-992f-778a1e7e6fb8" containerID="724b55a2f44e36d87e571bb11a83d51f056013d9dbca6eedee2c0b16cf4e41da" exitCode=0 Jan 23 17:09:05 crc kubenswrapper[4606]: I0123 17:09:05.219612 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-979e-account-create-update-xlx74" event={"ID":"c6039461-4f70-4ddd-992f-778a1e7e6fb8","Type":"ContainerDied","Data":"724b55a2f44e36d87e571bb11a83d51f056013d9dbca6eedee2c0b16cf4e41da"} Jan 23 17:09:05 crc kubenswrapper[4606]: I0123 17:09:05.219640 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-979e-account-create-update-xlx74" event={"ID":"c6039461-4f70-4ddd-992f-778a1e7e6fb8","Type":"ContainerStarted","Data":"3536d90ebb389a6eb07fa0924b1a0bbb6c5f76dadbc383cf6b3974603c0a9ac2"} Jan 23 17:09:05 crc kubenswrapper[4606]: I0123 17:09:05.244428 4606 generic.go:334] "Generic (PLEG): container finished" podID="414e99fb-9e21-431e-8296-e4f89723f459" containerID="4976c32fda720e49b1364a258f2e7fb82b748a5315e6a3549f23d8ac3a6333a2" exitCode=0 Jan 23 17:09:05 crc kubenswrapper[4606]: I0123 17:09:05.244760 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-2hhhx" event={"ID":"414e99fb-9e21-431e-8296-e4f89723f459","Type":"ContainerDied","Data":"4976c32fda720e49b1364a258f2e7fb82b748a5315e6a3549f23d8ac3a6333a2"} Jan 23 17:09:05 crc kubenswrapper[4606]: I0123 17:09:05.244788 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-2hhhx" event={"ID":"414e99fb-9e21-431e-8296-e4f89723f459","Type":"ContainerStarted","Data":"bc471f9da844b03367a1b17f986af2a0bfe7b4f16cc9826603f7ee0038015e1e"} Jan 23 17:09:05 crc kubenswrapper[4606]: I0123 17:09:05.267526 4606 generic.go:334] "Generic (PLEG): container finished" podID="224fe191-a024-4e50-8a8a-8b46aef66847" containerID="02b7431c789b147be7ec6dfadae75a2b1b3016107f2a2b2dc530d465d61f68ef" exitCode=0 Jan 23 17:09:05 crc kubenswrapper[4606]: I0123 17:09:05.267602 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fa10-account-create-update-xmwcv" event={"ID":"224fe191-a024-4e50-8a8a-8b46aef66847","Type":"ContainerDied","Data":"02b7431c789b147be7ec6dfadae75a2b1b3016107f2a2b2dc530d465d61f68ef"} Jan 23 17:09:05 crc kubenswrapper[4606]: I0123 17:09:05.267738 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fa10-account-create-update-xmwcv" event={"ID":"224fe191-a024-4e50-8a8a-8b46aef66847","Type":"ContainerStarted","Data":"86d63d673c66450432226d806446d90e09d33969562d98815203959b2051080f"} Jan 23 17:09:05 crc kubenswrapper[4606]: I0123 17:09:05.270380 4606 generic.go:334] "Generic (PLEG): container finished" podID="4e3ad36a-2eab-4985-beea-6483c3faae7d" containerID="559f2bb5f722470fe194d641485850589f46a17a6cdcea4eecb03e939cbc8c97" exitCode=0 Jan 23 17:09:05 crc kubenswrapper[4606]: I0123 17:09:05.270444 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-72r74" 
event={"ID":"4e3ad36a-2eab-4985-beea-6483c3faae7d","Type":"ContainerDied","Data":"559f2bb5f722470fe194d641485850589f46a17a6cdcea4eecb03e939cbc8c97"} Jan 23 17:09:05 crc kubenswrapper[4606]: I0123 17:09:05.270464 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-72r74" event={"ID":"4e3ad36a-2eab-4985-beea-6483c3faae7d","Type":"ContainerStarted","Data":"497ff07d122774d89969b324c431da7073ead65ad381b91437b24735c8df2241"} Jan 23 17:09:05 crc kubenswrapper[4606]: I0123 17:09:05.273554 4606 generic.go:334] "Generic (PLEG): container finished" podID="f79e7e45-faf1-4f97-a135-ce69d2bb7529" containerID="8a79ce64d8335aaa5997dac66e74b427527b879b31f0d97aef6f310c0cafb834" exitCode=0 Jan 23 17:09:05 crc kubenswrapper[4606]: I0123 17:09:05.273620 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mp29k" event={"ID":"f79e7e45-faf1-4f97-a135-ce69d2bb7529","Type":"ContainerDied","Data":"8a79ce64d8335aaa5997dac66e74b427527b879b31f0d97aef6f310c0cafb834"} Jan 23 17:09:05 crc kubenswrapper[4606]: I0123 17:09:05.283189 4606 generic.go:334] "Generic (PLEG): container finished" podID="f88968ca-e167-49a5-91ed-ee4f091c38bb" containerID="fabbf875f263e69f9bbbb036f5aa4cebc8984f405c2b1b881e914d40fcc02082" exitCode=0 Jan 23 17:09:05 crc kubenswrapper[4606]: I0123 17:09:05.283236 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5sjkh-config-v4kxw" event={"ID":"f88968ca-e167-49a5-91ed-ee4f091c38bb","Type":"ContainerDied","Data":"fabbf875f263e69f9bbbb036f5aa4cebc8984f405c2b1b881e914d40fcc02082"} Jan 23 17:09:06 crc kubenswrapper[4606]: I0123 17:09:06.426041 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb011d49-3a61-484d-8407-5e31513a3fea" path="/var/lib/kubelet/pods/bb011d49-3a61-484d-8407-5e31513a3fea/volumes" Jan 23 17:09:06 crc kubenswrapper[4606]: I0123 17:09:06.658990 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-5sjkh" Jan 23 17:09:06 crc kubenswrapper[4606]: I0123 17:09:06.851906 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-72r74" Jan 23 17:09:06 crc kubenswrapper[4606]: I0123 17:09:06.952442 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5kxz\" (UniqueName: \"kubernetes.io/projected/4e3ad36a-2eab-4985-beea-6483c3faae7d-kube-api-access-f5kxz\") pod \"4e3ad36a-2eab-4985-beea-6483c3faae7d\" (UID: \"4e3ad36a-2eab-4985-beea-6483c3faae7d\") " Jan 23 17:09:06 crc kubenswrapper[4606]: I0123 17:09:06.952574 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e3ad36a-2eab-4985-beea-6483c3faae7d-operator-scripts\") pod \"4e3ad36a-2eab-4985-beea-6483c3faae7d\" (UID: \"4e3ad36a-2eab-4985-beea-6483c3faae7d\") " Jan 23 17:09:06 crc kubenswrapper[4606]: I0123 17:09:06.953288 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e3ad36a-2eab-4985-beea-6483c3faae7d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4e3ad36a-2eab-4985-beea-6483c3faae7d" (UID: "4e3ad36a-2eab-4985-beea-6483c3faae7d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:09:06 crc kubenswrapper[4606]: I0123 17:09:06.959229 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e3ad36a-2eab-4985-beea-6483c3faae7d-kube-api-access-f5kxz" (OuterVolumeSpecName: "kube-api-access-f5kxz") pod "4e3ad36a-2eab-4985-beea-6483c3faae7d" (UID: "4e3ad36a-2eab-4985-beea-6483c3faae7d"). InnerVolumeSpecName "kube-api-access-f5kxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.054112 4606 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e3ad36a-2eab-4985-beea-6483c3faae7d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.054151 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5kxz\" (UniqueName: \"kubernetes.io/projected/4e3ad36a-2eab-4985-beea-6483c3faae7d-kube-api-access-f5kxz\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.113406 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5sjkh-config-v4kxw" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.140229 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-979e-account-create-update-xlx74" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.155701 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-a05b-account-create-update-s2mk9" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.177168 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mp29k" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.181891 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-fa10-account-create-update-xmwcv" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.184721 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-2hhhx" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.206340 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zhbb7" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.259150 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f88968ca-e167-49a5-91ed-ee4f091c38bb-var-run\") pod \"f88968ca-e167-49a5-91ed-ee4f091c38bb\" (UID: \"f88968ca-e167-49a5-91ed-ee4f091c38bb\") " Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.259196 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f88968ca-e167-49a5-91ed-ee4f091c38bb-var-log-ovn\") pod \"f88968ca-e167-49a5-91ed-ee4f091c38bb\" (UID: \"f88968ca-e167-49a5-91ed-ee4f091c38bb\") " Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.259248 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqfw7\" (UniqueName: \"kubernetes.io/projected/c6039461-4f70-4ddd-992f-778a1e7e6fb8-kube-api-access-nqfw7\") pod \"c6039461-4f70-4ddd-992f-778a1e7e6fb8\" (UID: \"c6039461-4f70-4ddd-992f-778a1e7e6fb8\") " Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.259273 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ph9r9\" (UniqueName: \"kubernetes.io/projected/b67c3811-645f-4d09-b577-56a82395ec4c-kube-api-access-ph9r9\") pod \"b67c3811-645f-4d09-b577-56a82395ec4c\" (UID: \"b67c3811-645f-4d09-b577-56a82395ec4c\") " Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.259292 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f88968ca-e167-49a5-91ed-ee4f091c38bb-additional-scripts\") pod \"f88968ca-e167-49a5-91ed-ee4f091c38bb\" (UID: \"f88968ca-e167-49a5-91ed-ee4f091c38bb\") " Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.259317 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f88968ca-e167-49a5-91ed-ee4f091c38bb-scripts\") pod \"f88968ca-e167-49a5-91ed-ee4f091c38bb\" (UID: \"f88968ca-e167-49a5-91ed-ee4f091c38bb\") " Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.259400 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6039461-4f70-4ddd-992f-778a1e7e6fb8-operator-scripts\") pod \"c6039461-4f70-4ddd-992f-778a1e7e6fb8\" (UID: \"c6039461-4f70-4ddd-992f-778a1e7e6fb8\") " Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.259426 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b67c3811-645f-4d09-b577-56a82395ec4c-operator-scripts\") pod \"b67c3811-645f-4d09-b577-56a82395ec4c\" (UID: \"b67c3811-645f-4d09-b577-56a82395ec4c\") " Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.259458 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjs6b\" (UniqueName: \"kubernetes.io/projected/f88968ca-e167-49a5-91ed-ee4f091c38bb-kube-api-access-pjs6b\") pod \"f88968ca-e167-49a5-91ed-ee4f091c38bb\" (UID: \"f88968ca-e167-49a5-91ed-ee4f091c38bb\") " Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.259507 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f88968ca-e167-49a5-91ed-ee4f091c38bb-var-run-ovn\") pod 
\"f88968ca-e167-49a5-91ed-ee4f091c38bb\" (UID: \"f88968ca-e167-49a5-91ed-ee4f091c38bb\") " Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.259904 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f88968ca-e167-49a5-91ed-ee4f091c38bb-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "f88968ca-e167-49a5-91ed-ee4f091c38bb" (UID: "f88968ca-e167-49a5-91ed-ee4f091c38bb"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.260341 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b67c3811-645f-4d09-b577-56a82395ec4c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b67c3811-645f-4d09-b577-56a82395ec4c" (UID: "b67c3811-645f-4d09-b577-56a82395ec4c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.260377 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f88968ca-e167-49a5-91ed-ee4f091c38bb-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "f88968ca-e167-49a5-91ed-ee4f091c38bb" (UID: "f88968ca-e167-49a5-91ed-ee4f091c38bb"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.260396 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f88968ca-e167-49a5-91ed-ee4f091c38bb-var-run" (OuterVolumeSpecName: "var-run") pod "f88968ca-e167-49a5-91ed-ee4f091c38bb" (UID: "f88968ca-e167-49a5-91ed-ee4f091c38bb"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.260803 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6039461-4f70-4ddd-992f-778a1e7e6fb8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c6039461-4f70-4ddd-992f-778a1e7e6fb8" (UID: "c6039461-4f70-4ddd-992f-778a1e7e6fb8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.261313 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f88968ca-e167-49a5-91ed-ee4f091c38bb-scripts" (OuterVolumeSpecName: "scripts") pod "f88968ca-e167-49a5-91ed-ee4f091c38bb" (UID: "f88968ca-e167-49a5-91ed-ee4f091c38bb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.262987 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f88968ca-e167-49a5-91ed-ee4f091c38bb-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "f88968ca-e167-49a5-91ed-ee4f091c38bb" (UID: "f88968ca-e167-49a5-91ed-ee4f091c38bb"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.267353 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6039461-4f70-4ddd-992f-778a1e7e6fb8-kube-api-access-nqfw7" (OuterVolumeSpecName: "kube-api-access-nqfw7") pod "c6039461-4f70-4ddd-992f-778a1e7e6fb8" (UID: "c6039461-4f70-4ddd-992f-778a1e7e6fb8"). 
InnerVolumeSpecName "kube-api-access-nqfw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.267786 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88968ca-e167-49a5-91ed-ee4f091c38bb-kube-api-access-pjs6b" (OuterVolumeSpecName: "kube-api-access-pjs6b") pod "f88968ca-e167-49a5-91ed-ee4f091c38bb" (UID: "f88968ca-e167-49a5-91ed-ee4f091c38bb"). InnerVolumeSpecName "kube-api-access-pjs6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.268807 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b67c3811-645f-4d09-b577-56a82395ec4c-kube-api-access-ph9r9" (OuterVolumeSpecName: "kube-api-access-ph9r9") pod "b67c3811-645f-4d09-b577-56a82395ec4c" (UID: "b67c3811-645f-4d09-b577-56a82395ec4c"). InnerVolumeSpecName "kube-api-access-ph9r9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.306692 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mp29k" event={"ID":"f79e7e45-faf1-4f97-a135-ce69d2bb7529","Type":"ContainerDied","Data":"10d1c06d413dd8152210e7712d1f6b9d4232263a1f5bf1b5ac82f6214025e63b"} Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.306730 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10d1c06d413dd8152210e7712d1f6b9d4232263a1f5bf1b5ac82f6214025e63b" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.306778 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mp29k" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.311080 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5sjkh-config-v4kxw" event={"ID":"f88968ca-e167-49a5-91ed-ee4f091c38bb","Type":"ContainerDied","Data":"765d8f060d5796a01fcdfd259633d779976facefa22b61e92bfd40e2e330dc91"} Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.311112 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="765d8f060d5796a01fcdfd259633d779976facefa22b61e92bfd40e2e330dc91" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.311146 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5sjkh-config-v4kxw" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.316220 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zhbb7" event={"ID":"d613fd23-da43-482f-945b-e72c58d9b691","Type":"ContainerDied","Data":"291eddcb13565e4994e17ca5292684668f8185a0926b18b4222ad7c55fbbd3c4"} Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.316264 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="291eddcb13565e4994e17ca5292684668f8185a0926b18b4222ad7c55fbbd3c4" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.316319 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zhbb7" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.318213 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a05b-account-create-update-s2mk9" event={"ID":"b67c3811-645f-4d09-b577-56a82395ec4c","Type":"ContainerDied","Data":"96326b7861378800fe17d13973ecbc32d37a7a6bef1937b0d3ba5554767bf326"} Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.318239 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96326b7861378800fe17d13973ecbc32d37a7a6bef1937b0d3ba5554767bf326" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.318284 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-a05b-account-create-update-s2mk9" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.325006 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-979e-account-create-update-xlx74" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.324999 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-979e-account-create-update-xlx74" event={"ID":"c6039461-4f70-4ddd-992f-778a1e7e6fb8","Type":"ContainerDied","Data":"3536d90ebb389a6eb07fa0924b1a0bbb6c5f76dadbc383cf6b3974603c0a9ac2"} Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.325139 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3536d90ebb389a6eb07fa0924b1a0bbb6c5f76dadbc383cf6b3974603c0a9ac2" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.326669 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-2hhhx" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.326943 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-2hhhx" event={"ID":"414e99fb-9e21-431e-8296-e4f89723f459","Type":"ContainerDied","Data":"bc471f9da844b03367a1b17f986af2a0bfe7b4f16cc9826603f7ee0038015e1e"} Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.327038 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc471f9da844b03367a1b17f986af2a0bfe7b4f16cc9826603f7ee0038015e1e" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.328959 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fa10-account-create-update-xmwcv" event={"ID":"224fe191-a024-4e50-8a8a-8b46aef66847","Type":"ContainerDied","Data":"86d63d673c66450432226d806446d90e09d33969562d98815203959b2051080f"} Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.328977 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86d63d673c66450432226d806446d90e09d33969562d98815203959b2051080f" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.329093 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-fa10-account-create-update-xmwcv" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.330829 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-72r74" event={"ID":"4e3ad36a-2eab-4985-beea-6483c3faae7d","Type":"ContainerDied","Data":"497ff07d122774d89969b324c431da7073ead65ad381b91437b24735c8df2241"} Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.330855 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="497ff07d122774d89969b324c431da7073ead65ad381b91437b24735c8df2241" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.330988 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-72r74" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.360627 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccfds\" (UniqueName: \"kubernetes.io/projected/d613fd23-da43-482f-945b-e72c58d9b691-kube-api-access-ccfds\") pod \"d613fd23-da43-482f-945b-e72c58d9b691\" (UID: \"d613fd23-da43-482f-945b-e72c58d9b691\") " Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.360703 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/224fe191-a024-4e50-8a8a-8b46aef66847-operator-scripts\") pod \"224fe191-a024-4e50-8a8a-8b46aef66847\" (UID: \"224fe191-a024-4e50-8a8a-8b46aef66847\") " Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.360742 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d613fd23-da43-482f-945b-e72c58d9b691-operator-scripts\") pod \"d613fd23-da43-482f-945b-e72c58d9b691\" (UID: \"d613fd23-da43-482f-945b-e72c58d9b691\") " Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.361172 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d613fd23-da43-482f-945b-e72c58d9b691-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d613fd23-da43-482f-945b-e72c58d9b691" (UID: "d613fd23-da43-482f-945b-e72c58d9b691"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.361273 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6gc5\" (UniqueName: \"kubernetes.io/projected/f79e7e45-faf1-4f97-a135-ce69d2bb7529-kube-api-access-x6gc5\") pod \"f79e7e45-faf1-4f97-a135-ce69d2bb7529\" (UID: \"f79e7e45-faf1-4f97-a135-ce69d2bb7529\") " Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.361320 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qvmw\" (UniqueName: \"kubernetes.io/projected/414e99fb-9e21-431e-8296-e4f89723f459-kube-api-access-8qvmw\") pod \"414e99fb-9e21-431e-8296-e4f89723f459\" (UID: \"414e99fb-9e21-431e-8296-e4f89723f459\") " Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.361813 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/414e99fb-9e21-431e-8296-e4f89723f459-operator-scripts\") pod \"414e99fb-9e21-431e-8296-e4f89723f459\" (UID: \"414e99fb-9e21-431e-8296-e4f89723f459\") " Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.361886 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f79e7e45-faf1-4f97-a135-ce69d2bb7529-operator-scripts\") pod \"f79e7e45-faf1-4f97-a135-ce69d2bb7529\" (UID: \"f79e7e45-faf1-4f97-a135-ce69d2bb7529\") " Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.361909 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sccs9\" (UniqueName: \"kubernetes.io/projected/224fe191-a024-4e50-8a8a-8b46aef66847-kube-api-access-sccs9\") pod \"224fe191-a024-4e50-8a8a-8b46aef66847\" (UID: \"224fe191-a024-4e50-8a8a-8b46aef66847\") " Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.362288 4606 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f88968ca-e167-49a5-91ed-ee4f091c38bb-var-run\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.362299 4606 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f88968ca-e167-49a5-91ed-ee4f091c38bb-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.362308 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqfw7\" (UniqueName: \"kubernetes.io/projected/c6039461-4f70-4ddd-992f-778a1e7e6fb8-kube-api-access-nqfw7\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.362317 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ph9r9\" (UniqueName: \"kubernetes.io/projected/b67c3811-645f-4d09-b577-56a82395ec4c-kube-api-access-ph9r9\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.362326 4606 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f88968ca-e167-49a5-91ed-ee4f091c38bb-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.362334 4606 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f88968ca-e167-49a5-91ed-ee4f091c38bb-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.362342 4606 
reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d613fd23-da43-482f-945b-e72c58d9b691-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.362350 4606 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6039461-4f70-4ddd-992f-778a1e7e6fb8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.362358 4606 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b67c3811-645f-4d09-b577-56a82395ec4c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.362366 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjs6b\" (UniqueName: \"kubernetes.io/projected/f88968ca-e167-49a5-91ed-ee4f091c38bb-kube-api-access-pjs6b\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.362373 4606 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f88968ca-e167-49a5-91ed-ee4f091c38bb-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.363047 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/224fe191-a024-4e50-8a8a-8b46aef66847-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "224fe191-a024-4e50-8a8a-8b46aef66847" (UID: "224fe191-a024-4e50-8a8a-8b46aef66847"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.363067 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/414e99fb-9e21-431e-8296-e4f89723f459-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "414e99fb-9e21-431e-8296-e4f89723f459" (UID: "414e99fb-9e21-431e-8296-e4f89723f459"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.363475 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f79e7e45-faf1-4f97-a135-ce69d2bb7529-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f79e7e45-faf1-4f97-a135-ce69d2bb7529" (UID: "f79e7e45-faf1-4f97-a135-ce69d2bb7529"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.368673 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/414e99fb-9e21-431e-8296-e4f89723f459-kube-api-access-8qvmw" (OuterVolumeSpecName: "kube-api-access-8qvmw") pod "414e99fb-9e21-431e-8296-e4f89723f459" (UID: "414e99fb-9e21-431e-8296-e4f89723f459"). InnerVolumeSpecName "kube-api-access-8qvmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.368723 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d613fd23-da43-482f-945b-e72c58d9b691-kube-api-access-ccfds" (OuterVolumeSpecName: "kube-api-access-ccfds") pod "d613fd23-da43-482f-945b-e72c58d9b691" (UID: "d613fd23-da43-482f-945b-e72c58d9b691"). InnerVolumeSpecName "kube-api-access-ccfds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.368746 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/224fe191-a024-4e50-8a8a-8b46aef66847-kube-api-access-sccs9" (OuterVolumeSpecName: "kube-api-access-sccs9") pod "224fe191-a024-4e50-8a8a-8b46aef66847" (UID: "224fe191-a024-4e50-8a8a-8b46aef66847"). InnerVolumeSpecName "kube-api-access-sccs9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.369565 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f79e7e45-faf1-4f97-a135-ce69d2bb7529-kube-api-access-x6gc5" (OuterVolumeSpecName: "kube-api-access-x6gc5") pod "f79e7e45-faf1-4f97-a135-ce69d2bb7529" (UID: "f79e7e45-faf1-4f97-a135-ce69d2bb7529"). InnerVolumeSpecName "kube-api-access-x6gc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.464530 4606 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f79e7e45-faf1-4f97-a135-ce69d2bb7529-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.464570 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sccs9\" (UniqueName: \"kubernetes.io/projected/224fe191-a024-4e50-8a8a-8b46aef66847-kube-api-access-sccs9\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.464613 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccfds\" (UniqueName: \"kubernetes.io/projected/d613fd23-da43-482f-945b-e72c58d9b691-kube-api-access-ccfds\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.464628 4606 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/224fe191-a024-4e50-8a8a-8b46aef66847-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.464640 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6gc5\" (UniqueName: \"kubernetes.io/projected/f79e7e45-faf1-4f97-a135-ce69d2bb7529-kube-api-access-x6gc5\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.464652 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qvmw\" (UniqueName: \"kubernetes.io/projected/414e99fb-9e21-431e-8296-e4f89723f459-kube-api-access-8qvmw\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:07 crc kubenswrapper[4606]: I0123 17:09:07.464662 4606 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/414e99fb-9e21-431e-8296-e4f89723f459-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:08 crc kubenswrapper[4606]: I0123 17:09:08.242332 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-5sjkh-config-v4kxw"] Jan 23 17:09:08 crc kubenswrapper[4606]: I0123 17:09:08.269555 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-5sjkh-config-v4kxw"] Jan 23 17:09:08 crc kubenswrapper[4606]: I0123 17:09:08.397799 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88968ca-e167-49a5-91ed-ee4f091c38bb" path="/var/lib/kubelet/pods/f88968ca-e167-49a5-91ed-ee4f091c38bb/volumes" Jan 23 17:09:08 crc kubenswrapper[4606]: 
I0123 17:09:08.398408 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"06f54b44-852a-4700-ac26-e9d89a545f28","Type":"ContainerStarted","Data":"5e8372b538d0b153f17a37e707a25787d5bb16f70ca44229786d1ceb2818bb6c"} Jan 23 17:09:08 crc kubenswrapper[4606]: I0123 17:09:08.398428 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"06f54b44-852a-4700-ac26-e9d89a545f28","Type":"ContainerStarted","Data":"5a15e35e978fad637a262e90c127679a4466646c616c8a001c93f89daaa7ee8e"} Jan 23 17:09:12 crc kubenswrapper[4606]: I0123 17:09:12.455850 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qhnq4" event={"ID":"21c1c7ff-8437-4c18-ab38-7b6908fbe35c","Type":"ContainerStarted","Data":"b41d4e5cce1a1b43643708e0920cfc05a42b495fbe4dd38acf3882001122df28"} Jan 23 17:09:12 crc kubenswrapper[4606]: I0123 17:09:12.477722 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"06f54b44-852a-4700-ac26-e9d89a545f28","Type":"ContainerStarted","Data":"6de20abaacaea215ac53a4fdc50adb2f07934bf193a8d54a424718e57753901c"} Jan 23 17:09:12 crc kubenswrapper[4606]: I0123 17:09:12.477762 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"06f54b44-852a-4700-ac26-e9d89a545f28","Type":"ContainerStarted","Data":"4671b1d657cf53a6e6256a4be68bda45abe3d8ed661ff859857547e2d271ed01"} Jan 23 17:09:12 crc kubenswrapper[4606]: I0123 17:09:12.481658 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-qhnq4" podStartSLOduration=6.973757617 podStartE2EDuration="14.481639629s" podCreationTimestamp="2026-01-23 17:08:58 +0000 UTC" firstStartedPulling="2026-01-23 17:09:03.933741266 +0000 UTC m=+977.933933672" lastFinishedPulling="2026-01-23 17:09:11.441623268 +0000 UTC m=+985.441815684" observedRunningTime="2026-01-23 17:09:12.477379403 +0000 UTC m=+986.477571829" watchObservedRunningTime="2026-01-23 17:09:12.481639629 +0000 UTC m=+986.481832035" Jan 23 17:09:13 crc kubenswrapper[4606]: I0123 17:09:13.511034 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"06f54b44-852a-4700-ac26-e9d89a545f28","Type":"ContainerStarted","Data":"09524644e6fa69f4dde58ece576207f968b26ee255bb01ff50045b72339b0970"} Jan 23 17:09:14 crc kubenswrapper[4606]: I0123 17:09:14.520269 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"06f54b44-852a-4700-ac26-e9d89a545f28","Type":"ContainerStarted","Data":"861351818d07c8c15cc9b04c6942aafc67fed479c196f2849f6336151fe1a9b5"} Jan 23 17:09:14 crc kubenswrapper[4606]: I0123 17:09:14.520628 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"06f54b44-852a-4700-ac26-e9d89a545f28","Type":"ContainerStarted","Data":"fbbd5c0e12ba61d052dc6c207d412444a9c7bc4a6a946427715b1164b75e2929"} Jan 23 17:09:14 crc kubenswrapper[4606]: I0123 17:09:14.520645 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"06f54b44-852a-4700-ac26-e9d89a545f28","Type":"ContainerStarted","Data":"ecec422de1acde6d5697c266e199a2523681e71f7c5fa9d21e35e8017dbf4111"} Jan 23 17:09:15 crc kubenswrapper[4606]: I0123 17:09:15.533135 4606 generic.go:334] "Generic (PLEG): container finished" podID="21c1c7ff-8437-4c18-ab38-7b6908fbe35c" containerID="b41d4e5cce1a1b43643708e0920cfc05a42b495fbe4dd38acf3882001122df28" 
exitCode=0 Jan 23 17:09:15 crc kubenswrapper[4606]: I0123 17:09:15.534182 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qhnq4" event={"ID":"21c1c7ff-8437-4c18-ab38-7b6908fbe35c","Type":"ContainerDied","Data":"b41d4e5cce1a1b43643708e0920cfc05a42b495fbe4dd38acf3882001122df28"} Jan 23 17:09:16 crc kubenswrapper[4606]: I0123 17:09:16.550431 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"06f54b44-852a-4700-ac26-e9d89a545f28","Type":"ContainerStarted","Data":"1ea283170bab35679d7e15b54a04c08136e0a2c6a3fb7a97bd082dc683fa90da"} Jan 23 17:09:16 crc kubenswrapper[4606]: I0123 17:09:16.550476 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"06f54b44-852a-4700-ac26-e9d89a545f28","Type":"ContainerStarted","Data":"2bc80b0a11e180b46294ec5585a08a31db81bd4ca49891db22c4167676627656"} Jan 23 17:09:16 crc kubenswrapper[4606]: I0123 17:09:16.550486 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"06f54b44-852a-4700-ac26-e9d89a545f28","Type":"ContainerStarted","Data":"a029916e3b6dd42a757f77a67ae36a250d080a59a287df8a16085f905e2743c6"} Jan 23 17:09:16 crc kubenswrapper[4606]: I0123 17:09:16.550495 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"06f54b44-852a-4700-ac26-e9d89a545f28","Type":"ContainerStarted","Data":"7d5e7bb876288bf1e8be1aee002e2e364b5b55fc56748a4f1e7db9d4afa15e5a"} Jan 23 17:09:16 crc kubenswrapper[4606]: I0123 17:09:16.550505 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"06f54b44-852a-4700-ac26-e9d89a545f28","Type":"ContainerStarted","Data":"618c3fb71828835e2974dc45030f0ea223f7482e15a7be7c8f9a74fed43fd407"} Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.124336 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-qhnq4" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.137782 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p8hwf"] Jan 23 17:09:17 crc kubenswrapper[4606]: E0123 17:09:17.138097 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb011d49-3a61-484d-8407-5e31513a3fea" containerName="registry-server" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.138113 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb011d49-3a61-484d-8407-5e31513a3fea" containerName="registry-server" Jan 23 17:09:17 crc kubenswrapper[4606]: E0123 17:09:17.138122 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="224fe191-a024-4e50-8a8a-8b46aef66847" containerName="mariadb-account-create-update" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.138130 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="224fe191-a024-4e50-8a8a-8b46aef66847" containerName="mariadb-account-create-update" Jan 23 17:09:17 crc kubenswrapper[4606]: E0123 17:09:17.138142 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21c1c7ff-8437-4c18-ab38-7b6908fbe35c" containerName="keystone-db-sync" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.138149 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="21c1c7ff-8437-4c18-ab38-7b6908fbe35c" containerName="keystone-db-sync" Jan 23 17:09:17 crc kubenswrapper[4606]: E0123 17:09:17.138158 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb011d49-3a61-484d-8407-5e31513a3fea" containerName="extract-content" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.138165 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb011d49-3a61-484d-8407-5e31513a3fea" containerName="extract-content" Jan 23 17:09:17 crc kubenswrapper[4606]: E0123 17:09:17.138171 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87401ebd-0396-4ae2-9231-b2dbb28e308e" containerName="extract-content" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.138177 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="87401ebd-0396-4ae2-9231-b2dbb28e308e" containerName="extract-content" Jan 23 17:09:17 crc kubenswrapper[4606]: E0123 17:09:17.138189 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb011d49-3a61-484d-8407-5e31513a3fea" containerName="extract-utilities" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.138194 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb011d49-3a61-484d-8407-5e31513a3fea" containerName="extract-utilities" Jan 23 17:09:17 crc kubenswrapper[4606]: E0123 17:09:17.138206 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b67c3811-645f-4d09-b577-56a82395ec4c" containerName="mariadb-account-create-update" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.138212 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="b67c3811-645f-4d09-b577-56a82395ec4c" containerName="mariadb-account-create-update" Jan 23 17:09:17 crc kubenswrapper[4606]: E0123 17:09:17.138223 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f88968ca-e167-49a5-91ed-ee4f091c38bb" containerName="ovn-config" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.138228 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="f88968ca-e167-49a5-91ed-ee4f091c38bb" containerName="ovn-config" Jan 23 17:09:17 crc kubenswrapper[4606]: E0123 17:09:17.138241 4606 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f79e7e45-faf1-4f97-a135-ce69d2bb7529" containerName="mariadb-database-create" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.138247 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="f79e7e45-faf1-4f97-a135-ce69d2bb7529" containerName="mariadb-database-create" Jan 23 17:09:17 crc kubenswrapper[4606]: E0123 17:09:17.138255 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d613fd23-da43-482f-945b-e72c58d9b691" containerName="mariadb-account-create-update" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.138260 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="d613fd23-da43-482f-945b-e72c58d9b691" containerName="mariadb-account-create-update" Jan 23 17:09:17 crc kubenswrapper[4606]: E0123 17:09:17.138273 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f26de21c-1da5-42d4-9dcd-0e6415109f94" containerName="swift-ring-rebalance" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.138280 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="f26de21c-1da5-42d4-9dcd-0e6415109f94" containerName="swift-ring-rebalance" Jan 23 17:09:17 crc kubenswrapper[4606]: E0123 17:09:17.138290 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87401ebd-0396-4ae2-9231-b2dbb28e308e" containerName="registry-server" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.138296 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="87401ebd-0396-4ae2-9231-b2dbb28e308e" containerName="registry-server" Jan 23 17:09:17 crc kubenswrapper[4606]: E0123 17:09:17.138310 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e3ad36a-2eab-4985-beea-6483c3faae7d" containerName="mariadb-database-create" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.138317 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e3ad36a-2eab-4985-beea-6483c3faae7d" containerName="mariadb-database-create" Jan 23 17:09:17 crc kubenswrapper[4606]: E0123 17:09:17.138327 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="414e99fb-9e21-431e-8296-e4f89723f459" containerName="mariadb-database-create" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.138334 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="414e99fb-9e21-431e-8296-e4f89723f459" containerName="mariadb-database-create" Jan 23 17:09:17 crc kubenswrapper[4606]: E0123 17:09:17.138344 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6039461-4f70-4ddd-992f-778a1e7e6fb8" containerName="mariadb-account-create-update" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.138350 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6039461-4f70-4ddd-992f-778a1e7e6fb8" containerName="mariadb-account-create-update" Jan 23 17:09:17 crc kubenswrapper[4606]: E0123 17:09:17.138360 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87401ebd-0396-4ae2-9231-b2dbb28e308e" containerName="extract-utilities" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.138366 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="87401ebd-0396-4ae2-9231-b2dbb28e308e" containerName="extract-utilities" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.138496 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="b67c3811-645f-4d09-b577-56a82395ec4c" containerName="mariadb-account-create-update" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.138511 4606 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c6039461-4f70-4ddd-992f-778a1e7e6fb8" containerName="mariadb-account-create-update" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.138518 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="224fe191-a024-4e50-8a8a-8b46aef66847" containerName="mariadb-account-create-update" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.138526 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="21c1c7ff-8437-4c18-ab38-7b6908fbe35c" containerName="keystone-db-sync" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.138534 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="f79e7e45-faf1-4f97-a135-ce69d2bb7529" containerName="mariadb-database-create" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.138539 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="d613fd23-da43-482f-945b-e72c58d9b691" containerName="mariadb-account-create-update" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.138547 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb011d49-3a61-484d-8407-5e31513a3fea" containerName="registry-server" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.138553 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="f88968ca-e167-49a5-91ed-ee4f091c38bb" containerName="ovn-config" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.138559 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="414e99fb-9e21-431e-8296-e4f89723f459" containerName="mariadb-database-create" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.138568 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e3ad36a-2eab-4985-beea-6483c3faae7d" containerName="mariadb-database-create" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.138577 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="f26de21c-1da5-42d4-9dcd-0e6415109f94" containerName="swift-ring-rebalance" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.138608 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="87401ebd-0396-4ae2-9231-b2dbb28e308e" containerName="registry-server" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.143803 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p8hwf" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.150839 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p8hwf"] Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.182067 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhwcs\" (UniqueName: \"kubernetes.io/projected/21c1c7ff-8437-4c18-ab38-7b6908fbe35c-kube-api-access-hhwcs\") pod \"21c1c7ff-8437-4c18-ab38-7b6908fbe35c\" (UID: \"21c1c7ff-8437-4c18-ab38-7b6908fbe35c\") " Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.182147 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21c1c7ff-8437-4c18-ab38-7b6908fbe35c-config-data\") pod \"21c1c7ff-8437-4c18-ab38-7b6908fbe35c\" (UID: \"21c1c7ff-8437-4c18-ab38-7b6908fbe35c\") " Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.182292 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21c1c7ff-8437-4c18-ab38-7b6908fbe35c-combined-ca-bundle\") pod \"21c1c7ff-8437-4c18-ab38-7b6908fbe35c\" (UID: \"21c1c7ff-8437-4c18-ab38-7b6908fbe35c\") " Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.193794 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21c1c7ff-8437-4c18-ab38-7b6908fbe35c-kube-api-access-hhwcs" (OuterVolumeSpecName: "kube-api-access-hhwcs") pod "21c1c7ff-8437-4c18-ab38-7b6908fbe35c" (UID: "21c1c7ff-8437-4c18-ab38-7b6908fbe35c"). InnerVolumeSpecName "kube-api-access-hhwcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.211706 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21c1c7ff-8437-4c18-ab38-7b6908fbe35c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "21c1c7ff-8437-4c18-ab38-7b6908fbe35c" (UID: "21c1c7ff-8437-4c18-ab38-7b6908fbe35c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.234849 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21c1c7ff-8437-4c18-ab38-7b6908fbe35c-config-data" (OuterVolumeSpecName: "config-data") pod "21c1c7ff-8437-4c18-ab38-7b6908fbe35c" (UID: "21c1c7ff-8437-4c18-ab38-7b6908fbe35c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.284161 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vd2b\" (UniqueName: \"kubernetes.io/projected/fd529aec-5f6a-46dd-a04e-f97e47afc754-kube-api-access-6vd2b\") pod \"redhat-marketplace-p8hwf\" (UID: \"fd529aec-5f6a-46dd-a04e-f97e47afc754\") " pod="openshift-marketplace/redhat-marketplace-p8hwf" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.284259 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd529aec-5f6a-46dd-a04e-f97e47afc754-catalog-content\") pod \"redhat-marketplace-p8hwf\" (UID: \"fd529aec-5f6a-46dd-a04e-f97e47afc754\") " pod="openshift-marketplace/redhat-marketplace-p8hwf" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.284305 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd529aec-5f6a-46dd-a04e-f97e47afc754-utilities\") pod \"redhat-marketplace-p8hwf\" (UID: \"fd529aec-5f6a-46dd-a04e-f97e47afc754\") " pod="openshift-marketplace/redhat-marketplace-p8hwf" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.284422 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhwcs\" (UniqueName: \"kubernetes.io/projected/21c1c7ff-8437-4c18-ab38-7b6908fbe35c-kube-api-access-hhwcs\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.284437 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21c1c7ff-8437-4c18-ab38-7b6908fbe35c-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.284448 4606 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21c1c7ff-8437-4c18-ab38-7b6908fbe35c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.386265 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vd2b\" (UniqueName: \"kubernetes.io/projected/fd529aec-5f6a-46dd-a04e-f97e47afc754-kube-api-access-6vd2b\") pod \"redhat-marketplace-p8hwf\" (UID: \"fd529aec-5f6a-46dd-a04e-f97e47afc754\") " pod="openshift-marketplace/redhat-marketplace-p8hwf" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.386347 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd529aec-5f6a-46dd-a04e-f97e47afc754-catalog-content\") pod \"redhat-marketplace-p8hwf\" (UID: \"fd529aec-5f6a-46dd-a04e-f97e47afc754\") " pod="openshift-marketplace/redhat-marketplace-p8hwf" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.386383 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd529aec-5f6a-46dd-a04e-f97e47afc754-utilities\") pod \"redhat-marketplace-p8hwf\" (UID: \"fd529aec-5f6a-46dd-a04e-f97e47afc754\") " pod="openshift-marketplace/redhat-marketplace-p8hwf" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.387037 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd529aec-5f6a-46dd-a04e-f97e47afc754-catalog-content\") pod 
\"redhat-marketplace-p8hwf\" (UID: \"fd529aec-5f6a-46dd-a04e-f97e47afc754\") " pod="openshift-marketplace/redhat-marketplace-p8hwf" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.387365 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd529aec-5f6a-46dd-a04e-f97e47afc754-utilities\") pod \"redhat-marketplace-p8hwf\" (UID: \"fd529aec-5f6a-46dd-a04e-f97e47afc754\") " pod="openshift-marketplace/redhat-marketplace-p8hwf" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.411505 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vd2b\" (UniqueName: \"kubernetes.io/projected/fd529aec-5f6a-46dd-a04e-f97e47afc754-kube-api-access-6vd2b\") pod \"redhat-marketplace-p8hwf\" (UID: \"fd529aec-5f6a-46dd-a04e-f97e47afc754\") " pod="openshift-marketplace/redhat-marketplace-p8hwf" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.466124 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p8hwf" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.560533 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qhnq4" event={"ID":"21c1c7ff-8437-4c18-ab38-7b6908fbe35c","Type":"ContainerDied","Data":"77b27e905f3761d3ccf505e5516bfe2fd040b0682339e2954100b41ed06ecc7e"} Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.560573 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77b27e905f3761d3ccf505e5516bfe2fd040b0682339e2954100b41ed06ecc7e" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.560676 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-qhnq4" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.571043 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"06f54b44-852a-4700-ac26-e9d89a545f28","Type":"ContainerStarted","Data":"ae946fa0a7320e4e788d8c0132add28728a057e5b80dd531322eb2b663ad454e"} Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.571083 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"06f54b44-852a-4700-ac26-e9d89a545f28","Type":"ContainerStarted","Data":"7ac2d0758e6d452895532006e09c484738a4c8d7f42af1cca3d7032fc868f85c"} Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.649750 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=44.800777378 podStartE2EDuration="55.649727403s" podCreationTimestamp="2026-01-23 17:08:22 +0000 UTC" firstStartedPulling="2026-01-23 17:09:04.579867895 +0000 UTC m=+978.580060301" lastFinishedPulling="2026-01-23 17:09:15.4288179 +0000 UTC m=+989.429010326" observedRunningTime="2026-01-23 17:09:17.637055767 +0000 UTC m=+991.637248423" watchObservedRunningTime="2026-01-23 17:09:17.649727403 +0000 UTC m=+991.649919809" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.874426 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-khq64"] Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.881453 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-khq64" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.931901 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-khq64"] Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.966648 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-v5qb8"] Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.968048 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-v5qb8" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.978151 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.978664 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-cxp6x" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.979924 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.983805 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 17:09:17 crc kubenswrapper[4606]: I0123 17:09:17.984006 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.001113 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-v5qb8"] Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.016499 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6wpp\" (UniqueName: \"kubernetes.io/projected/3d822de6-0f77-4a88-9538-965bf2079ed9-kube-api-access-t6wpp\") pod \"keystone-bootstrap-v5qb8\" (UID: \"3d822de6-0f77-4a88-9538-965bf2079ed9\") " pod="openstack/keystone-bootstrap-v5qb8" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.016564 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2661fe67-d059-4581-a333-8957be8402e8-config\") pod \"dnsmasq-dns-f877ddd87-khq64\" (UID: \"2661fe67-d059-4581-a333-8957be8402e8\") " pod="openstack/dnsmasq-dns-f877ddd87-khq64" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.016646 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm69j\" (UniqueName: \"kubernetes.io/projected/2661fe67-d059-4581-a333-8957be8402e8-kube-api-access-vm69j\") pod \"dnsmasq-dns-f877ddd87-khq64\" (UID: \"2661fe67-d059-4581-a333-8957be8402e8\") " pod="openstack/dnsmasq-dns-f877ddd87-khq64" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.016679 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3d822de6-0f77-4a88-9538-965bf2079ed9-fernet-keys\") pod \"keystone-bootstrap-v5qb8\" (UID: \"3d822de6-0f77-4a88-9538-965bf2079ed9\") " pod="openstack/keystone-bootstrap-v5qb8" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.016702 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d822de6-0f77-4a88-9538-965bf2079ed9-combined-ca-bundle\") pod \"keystone-bootstrap-v5qb8\" (UID: \"3d822de6-0f77-4a88-9538-965bf2079ed9\") " 
pod="openstack/keystone-bootstrap-v5qb8" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.016718 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2661fe67-d059-4581-a333-8957be8402e8-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-khq64\" (UID: \"2661fe67-d059-4581-a333-8957be8402e8\") " pod="openstack/dnsmasq-dns-f877ddd87-khq64" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.016735 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2661fe67-d059-4581-a333-8957be8402e8-dns-svc\") pod \"dnsmasq-dns-f877ddd87-khq64\" (UID: \"2661fe67-d059-4581-a333-8957be8402e8\") " pod="openstack/dnsmasq-dns-f877ddd87-khq64" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.016763 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d822de6-0f77-4a88-9538-965bf2079ed9-config-data\") pod \"keystone-bootstrap-v5qb8\" (UID: \"3d822de6-0f77-4a88-9538-965bf2079ed9\") " pod="openstack/keystone-bootstrap-v5qb8" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.016788 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2661fe67-d059-4581-a333-8957be8402e8-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-khq64\" (UID: \"2661fe67-d059-4581-a333-8957be8402e8\") " pod="openstack/dnsmasq-dns-f877ddd87-khq64" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.016809 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d822de6-0f77-4a88-9538-965bf2079ed9-scripts\") pod \"keystone-bootstrap-v5qb8\" (UID: \"3d822de6-0f77-4a88-9538-965bf2079ed9\") " pod="openstack/keystone-bootstrap-v5qb8" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.016824 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3d822de6-0f77-4a88-9538-965bf2079ed9-credential-keys\") pod \"keystone-bootstrap-v5qb8\" (UID: \"3d822de6-0f77-4a88-9538-965bf2079ed9\") " pod="openstack/keystone-bootstrap-v5qb8" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.126430 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3d822de6-0f77-4a88-9538-965bf2079ed9-fernet-keys\") pod \"keystone-bootstrap-v5qb8\" (UID: \"3d822de6-0f77-4a88-9538-965bf2079ed9\") " pod="openstack/keystone-bootstrap-v5qb8" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.126481 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d822de6-0f77-4a88-9538-965bf2079ed9-combined-ca-bundle\") pod \"keystone-bootstrap-v5qb8\" (UID: \"3d822de6-0f77-4a88-9538-965bf2079ed9\") " pod="openstack/keystone-bootstrap-v5qb8" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.126501 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2661fe67-d059-4581-a333-8957be8402e8-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-khq64\" (UID: \"2661fe67-d059-4581-a333-8957be8402e8\") " 
pod="openstack/dnsmasq-dns-f877ddd87-khq64" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.126523 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2661fe67-d059-4581-a333-8957be8402e8-dns-svc\") pod \"dnsmasq-dns-f877ddd87-khq64\" (UID: \"2661fe67-d059-4581-a333-8957be8402e8\") " pod="openstack/dnsmasq-dns-f877ddd87-khq64" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.126554 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d822de6-0f77-4a88-9538-965bf2079ed9-config-data\") pod \"keystone-bootstrap-v5qb8\" (UID: \"3d822de6-0f77-4a88-9538-965bf2079ed9\") " pod="openstack/keystone-bootstrap-v5qb8" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.126597 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2661fe67-d059-4581-a333-8957be8402e8-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-khq64\" (UID: \"2661fe67-d059-4581-a333-8957be8402e8\") " pod="openstack/dnsmasq-dns-f877ddd87-khq64" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.126620 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d822de6-0f77-4a88-9538-965bf2079ed9-scripts\") pod \"keystone-bootstrap-v5qb8\" (UID: \"3d822de6-0f77-4a88-9538-965bf2079ed9\") " pod="openstack/keystone-bootstrap-v5qb8" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.126656 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3d822de6-0f77-4a88-9538-965bf2079ed9-credential-keys\") pod \"keystone-bootstrap-v5qb8\" (UID: \"3d822de6-0f77-4a88-9538-965bf2079ed9\") " pod="openstack/keystone-bootstrap-v5qb8" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.126676 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6wpp\" (UniqueName: \"kubernetes.io/projected/3d822de6-0f77-4a88-9538-965bf2079ed9-kube-api-access-t6wpp\") pod \"keystone-bootstrap-v5qb8\" (UID: \"3d822de6-0f77-4a88-9538-965bf2079ed9\") " pod="openstack/keystone-bootstrap-v5qb8" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.126702 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2661fe67-d059-4581-a333-8957be8402e8-config\") pod \"dnsmasq-dns-f877ddd87-khq64\" (UID: \"2661fe67-d059-4581-a333-8957be8402e8\") " pod="openstack/dnsmasq-dns-f877ddd87-khq64" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.126752 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm69j\" (UniqueName: \"kubernetes.io/projected/2661fe67-d059-4581-a333-8957be8402e8-kube-api-access-vm69j\") pod \"dnsmasq-dns-f877ddd87-khq64\" (UID: \"2661fe67-d059-4581-a333-8957be8402e8\") " pod="openstack/dnsmasq-dns-f877ddd87-khq64" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.128170 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2661fe67-d059-4581-a333-8957be8402e8-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-khq64\" (UID: \"2661fe67-d059-4581-a333-8957be8402e8\") " pod="openstack/dnsmasq-dns-f877ddd87-khq64" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.129580 4606 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2661fe67-d059-4581-a333-8957be8402e8-dns-svc\") pod \"dnsmasq-dns-f877ddd87-khq64\" (UID: \"2661fe67-d059-4581-a333-8957be8402e8\") " pod="openstack/dnsmasq-dns-f877ddd87-khq64" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.130124 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2661fe67-d059-4581-a333-8957be8402e8-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-khq64\" (UID: \"2661fe67-d059-4581-a333-8957be8402e8\") " pod="openstack/dnsmasq-dns-f877ddd87-khq64" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.135457 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2661fe67-d059-4581-a333-8957be8402e8-config\") pod \"dnsmasq-dns-f877ddd87-khq64\" (UID: \"2661fe67-d059-4581-a333-8957be8402e8\") " pod="openstack/dnsmasq-dns-f877ddd87-khq64" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.137979 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p8hwf"] Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.139155 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d822de6-0f77-4a88-9538-965bf2079ed9-scripts\") pod \"keystone-bootstrap-v5qb8\" (UID: \"3d822de6-0f77-4a88-9538-965bf2079ed9\") " pod="openstack/keystone-bootstrap-v5qb8" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.142384 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3d822de6-0f77-4a88-9538-965bf2079ed9-fernet-keys\") pod \"keystone-bootstrap-v5qb8\" (UID: \"3d822de6-0f77-4a88-9538-965bf2079ed9\") " pod="openstack/keystone-bootstrap-v5qb8" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.142879 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d822de6-0f77-4a88-9538-965bf2079ed9-combined-ca-bundle\") pod \"keystone-bootstrap-v5qb8\" (UID: \"3d822de6-0f77-4a88-9538-965bf2079ed9\") " pod="openstack/keystone-bootstrap-v5qb8" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.152123 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d822de6-0f77-4a88-9538-965bf2079ed9-config-data\") pod \"keystone-bootstrap-v5qb8\" (UID: \"3d822de6-0f77-4a88-9538-965bf2079ed9\") " pod="openstack/keystone-bootstrap-v5qb8" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.152486 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3d822de6-0f77-4a88-9538-965bf2079ed9-credential-keys\") pod \"keystone-bootstrap-v5qb8\" (UID: \"3d822de6-0f77-4a88-9538-965bf2079ed9\") " pod="openstack/keystone-bootstrap-v5qb8" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.164806 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm69j\" (UniqueName: \"kubernetes.io/projected/2661fe67-d059-4581-a333-8957be8402e8-kube-api-access-vm69j\") pod \"dnsmasq-dns-f877ddd87-khq64\" (UID: \"2661fe67-d059-4581-a333-8957be8402e8\") " pod="openstack/dnsmasq-dns-f877ddd87-khq64" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.180735 4606 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-t6wpp\" (UniqueName: \"kubernetes.io/projected/3d822de6-0f77-4a88-9538-965bf2079ed9-kube-api-access-t6wpp\") pod \"keystone-bootstrap-v5qb8\" (UID: \"3d822de6-0f77-4a88-9538-965bf2079ed9\") " pod="openstack/keystone-bootstrap-v5qb8" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.249073 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-khq64" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.274669 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-khq64"] Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.308654 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7dc4f4847f-hcqjj"] Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.309223 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-v5qb8" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.310013 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7dc4f4847f-hcqjj" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.316438 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.326084 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.329920 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-tjf95"] Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.337170 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.337336 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-7b9bg" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.366602 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.370803 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-tjf95" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.415222 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-5rrnn" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.415977 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.416882 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.433066 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.447615 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.450247 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.562819 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66b867e7-9d48-4c81-b32e-4a8fb26b9f3a-logs\") pod \"horizon-7dc4f4847f-hcqjj\" (UID: \"66b867e7-9d48-4c81-b32e-4a8fb26b9f3a\") " pod="openstack/horizon-7dc4f4847f-hcqjj" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.562863 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3f5144cd-47f2-4ce5-81d6-68884dbfc739-db-sync-config-data\") pod \"cinder-db-sync-tjf95\" (UID: \"3f5144cd-47f2-4ce5-81d6-68884dbfc739\") " pod="openstack/cinder-db-sync-tjf95" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.562887 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/66b867e7-9d48-4c81-b32e-4a8fb26b9f3a-horizon-secret-key\") pod \"horizon-7dc4f4847f-hcqjj\" (UID: \"66b867e7-9d48-4c81-b32e-4a8fb26b9f3a\") " pod="openstack/horizon-7dc4f4847f-hcqjj" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.562910 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5feb913-77c6-425c-9018-de42010da64e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a5feb913-77c6-425c-9018-de42010da64e\") " pod="openstack/ceilometer-0" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.562941 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbkwm\" (UniqueName: \"kubernetes.io/projected/66b867e7-9d48-4c81-b32e-4a8fb26b9f3a-kube-api-access-vbkwm\") pod \"horizon-7dc4f4847f-hcqjj\" (UID: \"66b867e7-9d48-4c81-b32e-4a8fb26b9f3a\") " pod="openstack/horizon-7dc4f4847f-hcqjj" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.562958 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5feb913-77c6-425c-9018-de42010da64e-scripts\") pod \"ceilometer-0\" (UID: \"a5feb913-77c6-425c-9018-de42010da64e\") " pod="openstack/ceilometer-0" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.562981 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f5144cd-47f2-4ce5-81d6-68884dbfc739-scripts\") pod \"cinder-db-sync-tjf95\" (UID: \"3f5144cd-47f2-4ce5-81d6-68884dbfc739\") " pod="openstack/cinder-db-sync-tjf95" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.562998 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2xhr\" (UniqueName: \"kubernetes.io/projected/a5feb913-77c6-425c-9018-de42010da64e-kube-api-access-m2xhr\") pod \"ceilometer-0\" (UID: \"a5feb913-77c6-425c-9018-de42010da64e\") " pod="openstack/ceilometer-0" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.563019 4606 
Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.563019 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfdgz\" (UniqueName: \"kubernetes.io/projected/3f5144cd-47f2-4ce5-81d6-68884dbfc739-kube-api-access-cfdgz\") pod \"cinder-db-sync-tjf95\" (UID: \"3f5144cd-47f2-4ce5-81d6-68884dbfc739\") " pod="openstack/cinder-db-sync-tjf95"
Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.563042 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66b867e7-9d48-4c81-b32e-4a8fb26b9f3a-scripts\") pod \"horizon-7dc4f4847f-hcqjj\" (UID: \"66b867e7-9d48-4c81-b32e-4a8fb26b9f3a\") " pod="openstack/horizon-7dc4f4847f-hcqjj"
Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.563098 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f5144cd-47f2-4ce5-81d6-68884dbfc739-combined-ca-bundle\") pod \"cinder-db-sync-tjf95\" (UID: \"3f5144cd-47f2-4ce5-81d6-68884dbfc739\") " pod="openstack/cinder-db-sync-tjf95"
Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.563116 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/66b867e7-9d48-4c81-b32e-4a8fb26b9f3a-config-data\") pod \"horizon-7dc4f4847f-hcqjj\" (UID: \"66b867e7-9d48-4c81-b32e-4a8fb26b9f3a\") " pod="openstack/horizon-7dc4f4847f-hcqjj"
Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.563135 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a5feb913-77c6-425c-9018-de42010da64e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a5feb913-77c6-425c-9018-de42010da64e\") " pod="openstack/ceilometer-0"
Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.563154 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f5144cd-47f2-4ce5-81d6-68884dbfc739-etc-machine-id\") pod \"cinder-db-sync-tjf95\" (UID: \"3f5144cd-47f2-4ce5-81d6-68884dbfc739\") " pod="openstack/cinder-db-sync-tjf95"
Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.563175 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5feb913-77c6-425c-9018-de42010da64e-log-httpd\") pod \"ceilometer-0\" (UID: \"a5feb913-77c6-425c-9018-de42010da64e\") " pod="openstack/ceilometer-0"
Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.563200 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5feb913-77c6-425c-9018-de42010da64e-run-httpd\") pod \"ceilometer-0\" (UID: \"a5feb913-77c6-425c-9018-de42010da64e\") " pod="openstack/ceilometer-0"
Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.563252 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5feb913-77c6-425c-9018-de42010da64e-config-data\") pod \"ceilometer-0\" (UID: \"a5feb913-77c6-425c-9018-de42010da64e\") " pod="openstack/ceilometer-0"
Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.563282 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/3f5144cd-47f2-4ce5-81d6-68884dbfc739-config-data\") pod \"cinder-db-sync-tjf95\" (UID: \"3f5144cd-47f2-4ce5-81d6-68884dbfc739\") " pod="openstack/cinder-db-sync-tjf95" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.572331 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-7h69g"] Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.582750 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7dc4f4847f-hcqjj"] Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.582795 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-tjf95"] Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.582890 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-7h69g" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.599405 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.621169 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.666185 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p8hwf" event={"ID":"fd529aec-5f6a-46dd-a04e-f97e47afc754","Type":"ContainerStarted","Data":"eddab6abc7567c0bb72346f8b002371a92a724021461f4c02f01e087e0c133f0"} Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.667225 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f5144cd-47f2-4ce5-81d6-68884dbfc739-scripts\") pod \"cinder-db-sync-tjf95\" (UID: \"3f5144cd-47f2-4ce5-81d6-68884dbfc739\") " pod="openstack/cinder-db-sync-tjf95" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.667255 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2xhr\" (UniqueName: \"kubernetes.io/projected/a5feb913-77c6-425c-9018-de42010da64e-kube-api-access-m2xhr\") pod \"ceilometer-0\" (UID: \"a5feb913-77c6-425c-9018-de42010da64e\") " pod="openstack/ceilometer-0" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.667276 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfdgz\" (UniqueName: \"kubernetes.io/projected/3f5144cd-47f2-4ce5-81d6-68884dbfc739-kube-api-access-cfdgz\") pod \"cinder-db-sync-tjf95\" (UID: \"3f5144cd-47f2-4ce5-81d6-68884dbfc739\") " pod="openstack/cinder-db-sync-tjf95" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.667298 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66b867e7-9d48-4c81-b32e-4a8fb26b9f3a-scripts\") pod \"horizon-7dc4f4847f-hcqjj\" (UID: \"66b867e7-9d48-4c81-b32e-4a8fb26b9f3a\") " pod="openstack/horizon-7dc4f4847f-hcqjj" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.667348 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f5144cd-47f2-4ce5-81d6-68884dbfc739-combined-ca-bundle\") pod \"cinder-db-sync-tjf95\" (UID: \"3f5144cd-47f2-4ce5-81d6-68884dbfc739\") " pod="openstack/cinder-db-sync-tjf95" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.667366 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/66b867e7-9d48-4c81-b32e-4a8fb26b9f3a-config-data\") pod \"horizon-7dc4f4847f-hcqjj\" (UID: \"66b867e7-9d48-4c81-b32e-4a8fb26b9f3a\") " pod="openstack/horizon-7dc4f4847f-hcqjj" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.667385 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a5feb913-77c6-425c-9018-de42010da64e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a5feb913-77c6-425c-9018-de42010da64e\") " pod="openstack/ceilometer-0" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.667406 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f5144cd-47f2-4ce5-81d6-68884dbfc739-etc-machine-id\") pod \"cinder-db-sync-tjf95\" (UID: \"3f5144cd-47f2-4ce5-81d6-68884dbfc739\") " pod="openstack/cinder-db-sync-tjf95" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.667424 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5feb913-77c6-425c-9018-de42010da64e-log-httpd\") pod \"ceilometer-0\" (UID: \"a5feb913-77c6-425c-9018-de42010da64e\") " pod="openstack/ceilometer-0" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.667445 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5feb913-77c6-425c-9018-de42010da64e-run-httpd\") pod \"ceilometer-0\" (UID: \"a5feb913-77c6-425c-9018-de42010da64e\") " pod="openstack/ceilometer-0" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.667473 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5feb913-77c6-425c-9018-de42010da64e-config-data\") pod \"ceilometer-0\" (UID: \"a5feb913-77c6-425c-9018-de42010da64e\") " pod="openstack/ceilometer-0" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.667492 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f5144cd-47f2-4ce5-81d6-68884dbfc739-config-data\") pod \"cinder-db-sync-tjf95\" (UID: \"3f5144cd-47f2-4ce5-81d6-68884dbfc739\") " pod="openstack/cinder-db-sync-tjf95" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.667511 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66b867e7-9d48-4c81-b32e-4a8fb26b9f3a-logs\") pod \"horizon-7dc4f4847f-hcqjj\" (UID: \"66b867e7-9d48-4c81-b32e-4a8fb26b9f3a\") " pod="openstack/horizon-7dc4f4847f-hcqjj" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.667531 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3f5144cd-47f2-4ce5-81d6-68884dbfc739-db-sync-config-data\") pod \"cinder-db-sync-tjf95\" (UID: \"3f5144cd-47f2-4ce5-81d6-68884dbfc739\") " pod="openstack/cinder-db-sync-tjf95" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.667547 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/66b867e7-9d48-4c81-b32e-4a8fb26b9f3a-horizon-secret-key\") pod \"horizon-7dc4f4847f-hcqjj\" (UID: \"66b867e7-9d48-4c81-b32e-4a8fb26b9f3a\") " pod="openstack/horizon-7dc4f4847f-hcqjj" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.667568 4606 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5feb913-77c6-425c-9018-de42010da64e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a5feb913-77c6-425c-9018-de42010da64e\") " pod="openstack/ceilometer-0" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.667612 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbkwm\" (UniqueName: \"kubernetes.io/projected/66b867e7-9d48-4c81-b32e-4a8fb26b9f3a-kube-api-access-vbkwm\") pod \"horizon-7dc4f4847f-hcqjj\" (UID: \"66b867e7-9d48-4c81-b32e-4a8fb26b9f3a\") " pod="openstack/horizon-7dc4f4847f-hcqjj" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.667630 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5feb913-77c6-425c-9018-de42010da64e-scripts\") pod \"ceilometer-0\" (UID: \"a5feb913-77c6-425c-9018-de42010da64e\") " pod="openstack/ceilometer-0" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.672419 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66b867e7-9d48-4c81-b32e-4a8fb26b9f3a-scripts\") pod \"horizon-7dc4f4847f-hcqjj\" (UID: \"66b867e7-9d48-4c81-b32e-4a8fb26b9f3a\") " pod="openstack/horizon-7dc4f4847f-hcqjj" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.688466 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f5144cd-47f2-4ce5-81d6-68884dbfc739-etc-machine-id\") pod \"cinder-db-sync-tjf95\" (UID: \"3f5144cd-47f2-4ce5-81d6-68884dbfc739\") " pod="openstack/cinder-db-sync-tjf95" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.689515 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/66b867e7-9d48-4c81-b32e-4a8fb26b9f3a-config-data\") pod \"horizon-7dc4f4847f-hcqjj\" (UID: \"66b867e7-9d48-4c81-b32e-4a8fb26b9f3a\") " pod="openstack/horizon-7dc4f4847f-hcqjj" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.691094 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66b867e7-9d48-4c81-b32e-4a8fb26b9f3a-logs\") pod \"horizon-7dc4f4847f-hcqjj\" (UID: \"66b867e7-9d48-4c81-b32e-4a8fb26b9f3a\") " pod="openstack/horizon-7dc4f4847f-hcqjj" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.694761 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5feb913-77c6-425c-9018-de42010da64e-scripts\") pod \"ceilometer-0\" (UID: \"a5feb913-77c6-425c-9018-de42010da64e\") " pod="openstack/ceilometer-0" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.715668 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5feb913-77c6-425c-9018-de42010da64e-log-httpd\") pod \"ceilometer-0\" (UID: \"a5feb913-77c6-425c-9018-de42010da64e\") " pod="openstack/ceilometer-0" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.721938 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5feb913-77c6-425c-9018-de42010da64e-run-httpd\") pod \"ceilometer-0\" (UID: \"a5feb913-77c6-425c-9018-de42010da64e\") " pod="openstack/ceilometer-0" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.759233 4606 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3f5144cd-47f2-4ce5-81d6-68884dbfc739-db-sync-config-data\") pod \"cinder-db-sync-tjf95\" (UID: \"3f5144cd-47f2-4ce5-81d6-68884dbfc739\") " pod="openstack/cinder-db-sync-tjf95" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.761825 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f5144cd-47f2-4ce5-81d6-68884dbfc739-scripts\") pod \"cinder-db-sync-tjf95\" (UID: \"3f5144cd-47f2-4ce5-81d6-68884dbfc739\") " pod="openstack/cinder-db-sync-tjf95" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.762940 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-7h69g"] Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.769495 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87f25\" (UniqueName: \"kubernetes.io/projected/8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe-kube-api-access-87f25\") pod \"dnsmasq-dns-5959f8865f-7h69g\" (UID: \"8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe\") " pod="openstack/dnsmasq-dns-5959f8865f-7h69g" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.769673 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-7h69g\" (UID: \"8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe\") " pod="openstack/dnsmasq-dns-5959f8865f-7h69g" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.769767 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe-config\") pod \"dnsmasq-dns-5959f8865f-7h69g\" (UID: \"8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe\") " pod="openstack/dnsmasq-dns-5959f8865f-7h69g" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.769803 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe-dns-svc\") pod \"dnsmasq-dns-5959f8865f-7h69g\" (UID: \"8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe\") " pod="openstack/dnsmasq-dns-5959f8865f-7h69g" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.769893 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-7h69g\" (UID: \"8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe\") " pod="openstack/dnsmasq-dns-5959f8865f-7h69g" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.769917 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-7h69g\" (UID: \"8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe\") " pod="openstack/dnsmasq-dns-5959f8865f-7h69g" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.776479 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2xhr\" (UniqueName: \"kubernetes.io/projected/a5feb913-77c6-425c-9018-de42010da64e-kube-api-access-m2xhr\") pod \"ceilometer-0\" (UID: 
\"a5feb913-77c6-425c-9018-de42010da64e\") " pod="openstack/ceilometer-0" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.789331 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/66b867e7-9d48-4c81-b32e-4a8fb26b9f3a-horizon-secret-key\") pod \"horizon-7dc4f4847f-hcqjj\" (UID: \"66b867e7-9d48-4c81-b32e-4a8fb26b9f3a\") " pod="openstack/horizon-7dc4f4847f-hcqjj" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.790403 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a5feb913-77c6-425c-9018-de42010da64e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a5feb913-77c6-425c-9018-de42010da64e\") " pod="openstack/ceilometer-0" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.790965 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5feb913-77c6-425c-9018-de42010da64e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a5feb913-77c6-425c-9018-de42010da64e\") " pod="openstack/ceilometer-0" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.796417 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5feb913-77c6-425c-9018-de42010da64e-config-data\") pod \"ceilometer-0\" (UID: \"a5feb913-77c6-425c-9018-de42010da64e\") " pod="openstack/ceilometer-0" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.799563 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f5144cd-47f2-4ce5-81d6-68884dbfc739-config-data\") pod \"cinder-db-sync-tjf95\" (UID: \"3f5144cd-47f2-4ce5-81d6-68884dbfc739\") " pod="openstack/cinder-db-sync-tjf95" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.801364 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfdgz\" (UniqueName: \"kubernetes.io/projected/3f5144cd-47f2-4ce5-81d6-68884dbfc739-kube-api-access-cfdgz\") pod \"cinder-db-sync-tjf95\" (UID: \"3f5144cd-47f2-4ce5-81d6-68884dbfc739\") " pod="openstack/cinder-db-sync-tjf95" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.801951 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbkwm\" (UniqueName: \"kubernetes.io/projected/66b867e7-9d48-4c81-b32e-4a8fb26b9f3a-kube-api-access-vbkwm\") pod \"horizon-7dc4f4847f-hcqjj\" (UID: \"66b867e7-9d48-4c81-b32e-4a8fb26b9f3a\") " pod="openstack/horizon-7dc4f4847f-hcqjj" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.813662 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-bmv5b"] Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.814815 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-bmv5b" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.816289 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f5144cd-47f2-4ce5-81d6-68884dbfc739-combined-ca-bundle\") pod \"cinder-db-sync-tjf95\" (UID: \"3f5144cd-47f2-4ce5-81d6-68884dbfc739\") " pod="openstack/cinder-db-sync-tjf95" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.829494 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-bmv5b"] Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.837969 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-ffzkl" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.838174 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.854080 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.855009 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.878804 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-lzz5x"] Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.880156 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-lzz5x" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.880627 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb6d384e-69de-4bd3-bb7b-bf9346614fce-combined-ca-bundle\") pod \"neutron-db-sync-bmv5b\" (UID: \"fb6d384e-69de-4bd3-bb7b-bf9346614fce\") " pod="openstack/neutron-db-sync-bmv5b" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.880702 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-7h69g\" (UID: \"8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe\") " pod="openstack/dnsmasq-dns-5959f8865f-7h69g" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.880745 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2b8w\" (UniqueName: \"kubernetes.io/projected/fb6d384e-69de-4bd3-bb7b-bf9346614fce-kube-api-access-f2b8w\") pod \"neutron-db-sync-bmv5b\" (UID: \"fb6d384e-69de-4bd3-bb7b-bf9346614fce\") " pod="openstack/neutron-db-sync-bmv5b" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.880783 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe-config\") pod \"dnsmasq-dns-5959f8865f-7h69g\" (UID: \"8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe\") " pod="openstack/dnsmasq-dns-5959f8865f-7h69g" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.880808 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe-dns-svc\") pod \"dnsmasq-dns-5959f8865f-7h69g\" (UID: \"8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe\") " pod="openstack/dnsmasq-dns-5959f8865f-7h69g" Jan 23 17:09:18 crc 
kubenswrapper[4606]: I0123 17:09:18.880833 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb6d384e-69de-4bd3-bb7b-bf9346614fce-config\") pod \"neutron-db-sync-bmv5b\" (UID: \"fb6d384e-69de-4bd3-bb7b-bf9346614fce\") " pod="openstack/neutron-db-sync-bmv5b" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.880883 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-7h69g\" (UID: \"8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe\") " pod="openstack/dnsmasq-dns-5959f8865f-7h69g" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.880902 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-7h69g\" (UID: \"8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe\") " pod="openstack/dnsmasq-dns-5959f8865f-7h69g" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.880922 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87f25\" (UniqueName: \"kubernetes.io/projected/8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe-kube-api-access-87f25\") pod \"dnsmasq-dns-5959f8865f-7h69g\" (UID: \"8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe\") " pod="openstack/dnsmasq-dns-5959f8865f-7h69g" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.882694 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe-dns-svc\") pod \"dnsmasq-dns-5959f8865f-7h69g\" (UID: \"8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe\") " pod="openstack/dnsmasq-dns-5959f8865f-7h69g" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.883370 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe-config\") pod \"dnsmasq-dns-5959f8865f-7h69g\" (UID: \"8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe\") " pod="openstack/dnsmasq-dns-5959f8865f-7h69g" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.884205 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-7h69g\" (UID: \"8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe\") " pod="openstack/dnsmasq-dns-5959f8865f-7h69g" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.884277 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-7h69g\" (UID: \"8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe\") " pod="openstack/dnsmasq-dns-5959f8865f-7h69g" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.884285 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-7h69g\" (UID: \"8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe\") " pod="openstack/dnsmasq-dns-5959f8865f-7h69g" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.901100 4606 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"barbican-config-data" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.901203 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5kppl" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.903224 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-bfvh4"] Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.904372 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-bfvh4" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.907884 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.908186 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-7sdg2" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.910054 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.914296 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-lzz5x"] Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.952159 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7dc4f4847f-hcqjj" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.953706 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87f25\" (UniqueName: \"kubernetes.io/projected/8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe-kube-api-access-87f25\") pod \"dnsmasq-dns-5959f8865f-7h69g\" (UID: \"8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe\") " pod="openstack/dnsmasq-dns-5959f8865f-7h69g" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.976072 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-tjf95" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.976429 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-bfvh4"] Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.983289 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2b8w\" (UniqueName: \"kubernetes.io/projected/fb6d384e-69de-4bd3-bb7b-bf9346614fce-kube-api-access-f2b8w\") pod \"neutron-db-sync-bmv5b\" (UID: \"fb6d384e-69de-4bd3-bb7b-bf9346614fce\") " pod="openstack/neutron-db-sync-bmv5b" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.983354 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/139f2d63-a754-454b-8fe5-fb33c30ec2bf-config-data\") pod \"placement-db-sync-bfvh4\" (UID: \"139f2d63-a754-454b-8fe5-fb33c30ec2bf\") " pod="openstack/placement-db-sync-bfvh4" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.983379 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cb98288-9db7-4a8f-9b1d-1d57a547062b-combined-ca-bundle\") pod \"barbican-db-sync-lzz5x\" (UID: \"4cb98288-9db7-4a8f-9b1d-1d57a547062b\") " pod="openstack/barbican-db-sync-lzz5x" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.983425 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf67b\" (UniqueName: \"kubernetes.io/projected/4cb98288-9db7-4a8f-9b1d-1d57a547062b-kube-api-access-sf67b\") pod \"barbican-db-sync-lzz5x\" (UID: \"4cb98288-9db7-4a8f-9b1d-1d57a547062b\") " pod="openstack/barbican-db-sync-lzz5x" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.983450 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb6d384e-69de-4bd3-bb7b-bf9346614fce-config\") pod \"neutron-db-sync-bmv5b\" (UID: \"fb6d384e-69de-4bd3-bb7b-bf9346614fce\") " pod="openstack/neutron-db-sync-bmv5b" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.983474 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/139f2d63-a754-454b-8fe5-fb33c30ec2bf-scripts\") pod \"placement-db-sync-bfvh4\" (UID: \"139f2d63-a754-454b-8fe5-fb33c30ec2bf\") " pod="openstack/placement-db-sync-bfvh4" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.983497 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/139f2d63-a754-454b-8fe5-fb33c30ec2bf-combined-ca-bundle\") pod \"placement-db-sync-bfvh4\" (UID: \"139f2d63-a754-454b-8fe5-fb33c30ec2bf\") " pod="openstack/placement-db-sync-bfvh4" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.983520 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4cb98288-9db7-4a8f-9b1d-1d57a547062b-db-sync-config-data\") pod \"barbican-db-sync-lzz5x\" (UID: \"4cb98288-9db7-4a8f-9b1d-1d57a547062b\") " pod="openstack/barbican-db-sync-lzz5x" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.983570 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fljdw\" (UniqueName: 
\"kubernetes.io/projected/139f2d63-a754-454b-8fe5-fb33c30ec2bf-kube-api-access-fljdw\") pod \"placement-db-sync-bfvh4\" (UID: \"139f2d63-a754-454b-8fe5-fb33c30ec2bf\") " pod="openstack/placement-db-sync-bfvh4" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.983608 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb6d384e-69de-4bd3-bb7b-bf9346614fce-combined-ca-bundle\") pod \"neutron-db-sync-bmv5b\" (UID: \"fb6d384e-69de-4bd3-bb7b-bf9346614fce\") " pod="openstack/neutron-db-sync-bmv5b" Jan 23 17:09:18 crc kubenswrapper[4606]: I0123 17:09:18.983677 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/139f2d63-a754-454b-8fe5-fb33c30ec2bf-logs\") pod \"placement-db-sync-bfvh4\" (UID: \"139f2d63-a754-454b-8fe5-fb33c30ec2bf\") " pod="openstack/placement-db-sync-bfvh4" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.007489 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb6d384e-69de-4bd3-bb7b-bf9346614fce-combined-ca-bundle\") pod \"neutron-db-sync-bmv5b\" (UID: \"fb6d384e-69de-4bd3-bb7b-bf9346614fce\") " pod="openstack/neutron-db-sync-bmv5b" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.017684 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb6d384e-69de-4bd3-bb7b-bf9346614fce-config\") pod \"neutron-db-sync-bmv5b\" (UID: \"fb6d384e-69de-4bd3-bb7b-bf9346614fce\") " pod="openstack/neutron-db-sync-bmv5b" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.056119 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-68db5566bf-2m8sd"] Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.057416 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-68db5566bf-2m8sd" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.073782 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2b8w\" (UniqueName: \"kubernetes.io/projected/fb6d384e-69de-4bd3-bb7b-bf9346614fce-kube-api-access-f2b8w\") pod \"neutron-db-sync-bmv5b\" (UID: \"fb6d384e-69de-4bd3-bb7b-bf9346614fce\") " pod="openstack/neutron-db-sync-bmv5b" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.085265 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c-scripts\") pod \"horizon-68db5566bf-2m8sd\" (UID: \"ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c\") " pod="openstack/horizon-68db5566bf-2m8sd" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.085311 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/139f2d63-a754-454b-8fe5-fb33c30ec2bf-logs\") pod \"placement-db-sync-bfvh4\" (UID: \"139f2d63-a754-454b-8fe5-fb33c30ec2bf\") " pod="openstack/placement-db-sync-bfvh4" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.085329 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcwwz\" (UniqueName: \"kubernetes.io/projected/ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c-kube-api-access-hcwwz\") pod \"horizon-68db5566bf-2m8sd\" (UID: \"ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c\") " pod="openstack/horizon-68db5566bf-2m8sd" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.085369 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/139f2d63-a754-454b-8fe5-fb33c30ec2bf-config-data\") pod \"placement-db-sync-bfvh4\" (UID: \"139f2d63-a754-454b-8fe5-fb33c30ec2bf\") " pod="openstack/placement-db-sync-bfvh4" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.085399 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cb98288-9db7-4a8f-9b1d-1d57a547062b-combined-ca-bundle\") pod \"barbican-db-sync-lzz5x\" (UID: \"4cb98288-9db7-4a8f-9b1d-1d57a547062b\") " pod="openstack/barbican-db-sync-lzz5x" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.085418 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c-config-data\") pod \"horizon-68db5566bf-2m8sd\" (UID: \"ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c\") " pod="openstack/horizon-68db5566bf-2m8sd" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.085439 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf67b\" (UniqueName: \"kubernetes.io/projected/4cb98288-9db7-4a8f-9b1d-1d57a547062b-kube-api-access-sf67b\") pod \"barbican-db-sync-lzz5x\" (UID: \"4cb98288-9db7-4a8f-9b1d-1d57a547062b\") " pod="openstack/barbican-db-sync-lzz5x" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.085454 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c-logs\") pod \"horizon-68db5566bf-2m8sd\" (UID: \"ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c\") " pod="openstack/horizon-68db5566bf-2m8sd" Jan 23 17:09:19 crc 
kubenswrapper[4606]: I0123 17:09:19.085513 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/139f2d63-a754-454b-8fe5-fb33c30ec2bf-scripts\") pod \"placement-db-sync-bfvh4\" (UID: \"139f2d63-a754-454b-8fe5-fb33c30ec2bf\") " pod="openstack/placement-db-sync-bfvh4" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.085540 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/139f2d63-a754-454b-8fe5-fb33c30ec2bf-combined-ca-bundle\") pod \"placement-db-sync-bfvh4\" (UID: \"139f2d63-a754-454b-8fe5-fb33c30ec2bf\") " pod="openstack/placement-db-sync-bfvh4" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.085561 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4cb98288-9db7-4a8f-9b1d-1d57a547062b-db-sync-config-data\") pod \"barbican-db-sync-lzz5x\" (UID: \"4cb98288-9db7-4a8f-9b1d-1d57a547062b\") " pod="openstack/barbican-db-sync-lzz5x" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.085604 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fljdw\" (UniqueName: \"kubernetes.io/projected/139f2d63-a754-454b-8fe5-fb33c30ec2bf-kube-api-access-fljdw\") pod \"placement-db-sync-bfvh4\" (UID: \"139f2d63-a754-454b-8fe5-fb33c30ec2bf\") " pod="openstack/placement-db-sync-bfvh4" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.085689 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c-horizon-secret-key\") pod \"horizon-68db5566bf-2m8sd\" (UID: \"ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c\") " pod="openstack/horizon-68db5566bf-2m8sd" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.090315 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/139f2d63-a754-454b-8fe5-fb33c30ec2bf-scripts\") pod \"placement-db-sync-bfvh4\" (UID: \"139f2d63-a754-454b-8fe5-fb33c30ec2bf\") " pod="openstack/placement-db-sync-bfvh4" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.092301 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/139f2d63-a754-454b-8fe5-fb33c30ec2bf-config-data\") pod \"placement-db-sync-bfvh4\" (UID: \"139f2d63-a754-454b-8fe5-fb33c30ec2bf\") " pod="openstack/placement-db-sync-bfvh4" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.093279 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/139f2d63-a754-454b-8fe5-fb33c30ec2bf-logs\") pod \"placement-db-sync-bfvh4\" (UID: \"139f2d63-a754-454b-8fe5-fb33c30ec2bf\") " pod="openstack/placement-db-sync-bfvh4" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.107581 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-68db5566bf-2m8sd"] Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.120859 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/139f2d63-a754-454b-8fe5-fb33c30ec2bf-combined-ca-bundle\") pod \"placement-db-sync-bfvh4\" (UID: \"139f2d63-a754-454b-8fe5-fb33c30ec2bf\") " pod="openstack/placement-db-sync-bfvh4" Jan 23 17:09:19 crc kubenswrapper[4606]: 
I0123 17:09:19.131156 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4cb98288-9db7-4a8f-9b1d-1d57a547062b-db-sync-config-data\") pod \"barbican-db-sync-lzz5x\" (UID: \"4cb98288-9db7-4a8f-9b1d-1d57a547062b\") " pod="openstack/barbican-db-sync-lzz5x" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.142263 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cb98288-9db7-4a8f-9b1d-1d57a547062b-combined-ca-bundle\") pod \"barbican-db-sync-lzz5x\" (UID: \"4cb98288-9db7-4a8f-9b1d-1d57a547062b\") " pod="openstack/barbican-db-sync-lzz5x" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.168245 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fljdw\" (UniqueName: \"kubernetes.io/projected/139f2d63-a754-454b-8fe5-fb33c30ec2bf-kube-api-access-fljdw\") pod \"placement-db-sync-bfvh4\" (UID: \"139f2d63-a754-454b-8fe5-fb33c30ec2bf\") " pod="openstack/placement-db-sync-bfvh4" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.187151 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c-horizon-secret-key\") pod \"horizon-68db5566bf-2m8sd\" (UID: \"ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c\") " pod="openstack/horizon-68db5566bf-2m8sd" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.187205 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c-scripts\") pod \"horizon-68db5566bf-2m8sd\" (UID: \"ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c\") " pod="openstack/horizon-68db5566bf-2m8sd" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.187234 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcwwz\" (UniqueName: \"kubernetes.io/projected/ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c-kube-api-access-hcwwz\") pod \"horizon-68db5566bf-2m8sd\" (UID: \"ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c\") " pod="openstack/horizon-68db5566bf-2m8sd" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.187273 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c-config-data\") pod \"horizon-68db5566bf-2m8sd\" (UID: \"ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c\") " pod="openstack/horizon-68db5566bf-2m8sd" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.187300 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c-logs\") pod \"horizon-68db5566bf-2m8sd\" (UID: \"ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c\") " pod="openstack/horizon-68db5566bf-2m8sd" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.187756 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c-logs\") pod \"horizon-68db5566bf-2m8sd\" (UID: \"ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c\") " pod="openstack/horizon-68db5566bf-2m8sd" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.189752 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c-scripts\") pod \"horizon-68db5566bf-2m8sd\" (UID: \"ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c\") " pod="openstack/horizon-68db5566bf-2m8sd" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.190639 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c-config-data\") pod \"horizon-68db5566bf-2m8sd\" (UID: \"ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c\") " pod="openstack/horizon-68db5566bf-2m8sd" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.191789 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf67b\" (UniqueName: \"kubernetes.io/projected/4cb98288-9db7-4a8f-9b1d-1d57a547062b-kube-api-access-sf67b\") pod \"barbican-db-sync-lzz5x\" (UID: \"4cb98288-9db7-4a8f-9b1d-1d57a547062b\") " pod="openstack/barbican-db-sync-lzz5x" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.192010 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c-horizon-secret-key\") pod \"horizon-68db5566bf-2m8sd\" (UID: \"ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c\") " pod="openstack/horizon-68db5566bf-2m8sd" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.208573 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-7h69g"] Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.209324 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-7h69g" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.262117 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcwwz\" (UniqueName: \"kubernetes.io/projected/ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c-kube-api-access-hcwwz\") pod \"horizon-68db5566bf-2m8sd\" (UID: \"ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c\") " pod="openstack/horizon-68db5566bf-2m8sd" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.328970 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-bmv5b" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.354647 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-qm9ff"] Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.355063 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-lzz5x" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.355999 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-qm9ff" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.373289 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-bfvh4" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.412138 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-68db5566bf-2m8sd" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.424827 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-qm9ff"] Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.493353 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/837e5e5e-4078-45ed-9c4e-bf691a2840ed-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-qm9ff\" (UID: \"837e5e5e-4078-45ed-9c4e-bf691a2840ed\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qm9ff" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.493735 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/837e5e5e-4078-45ed-9c4e-bf691a2840ed-config\") pod \"dnsmasq-dns-58dd9ff6bc-qm9ff\" (UID: \"837e5e5e-4078-45ed-9c4e-bf691a2840ed\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qm9ff" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.493752 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bxrk\" (UniqueName: \"kubernetes.io/projected/837e5e5e-4078-45ed-9c4e-bf691a2840ed-kube-api-access-8bxrk\") pod \"dnsmasq-dns-58dd9ff6bc-qm9ff\" (UID: \"837e5e5e-4078-45ed-9c4e-bf691a2840ed\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qm9ff" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.493780 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/837e5e5e-4078-45ed-9c4e-bf691a2840ed-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-qm9ff\" (UID: \"837e5e5e-4078-45ed-9c4e-bf691a2840ed\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qm9ff" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.493808 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/837e5e5e-4078-45ed-9c4e-bf691a2840ed-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-qm9ff\" (UID: \"837e5e5e-4078-45ed-9c4e-bf691a2840ed\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qm9ff" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.493830 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/837e5e5e-4078-45ed-9c4e-bf691a2840ed-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-qm9ff\" (UID: \"837e5e5e-4078-45ed-9c4e-bf691a2840ed\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qm9ff" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.598556 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/837e5e5e-4078-45ed-9c4e-bf691a2840ed-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-qm9ff\" (UID: \"837e5e5e-4078-45ed-9c4e-bf691a2840ed\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qm9ff" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.598665 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/837e5e5e-4078-45ed-9c4e-bf691a2840ed-config\") pod \"dnsmasq-dns-58dd9ff6bc-qm9ff\" (UID: \"837e5e5e-4078-45ed-9c4e-bf691a2840ed\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qm9ff" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.598687 4606 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8bxrk\" (UniqueName: \"kubernetes.io/projected/837e5e5e-4078-45ed-9c4e-bf691a2840ed-kube-api-access-8bxrk\") pod \"dnsmasq-dns-58dd9ff6bc-qm9ff\" (UID: \"837e5e5e-4078-45ed-9c4e-bf691a2840ed\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qm9ff" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.598721 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/837e5e5e-4078-45ed-9c4e-bf691a2840ed-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-qm9ff\" (UID: \"837e5e5e-4078-45ed-9c4e-bf691a2840ed\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qm9ff" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.598757 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/837e5e5e-4078-45ed-9c4e-bf691a2840ed-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-qm9ff\" (UID: \"837e5e5e-4078-45ed-9c4e-bf691a2840ed\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qm9ff" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.598790 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/837e5e5e-4078-45ed-9c4e-bf691a2840ed-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-qm9ff\" (UID: \"837e5e5e-4078-45ed-9c4e-bf691a2840ed\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qm9ff" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.605549 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/837e5e5e-4078-45ed-9c4e-bf691a2840ed-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-qm9ff\" (UID: \"837e5e5e-4078-45ed-9c4e-bf691a2840ed\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qm9ff" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.619071 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/837e5e5e-4078-45ed-9c4e-bf691a2840ed-config\") pod \"dnsmasq-dns-58dd9ff6bc-qm9ff\" (UID: \"837e5e5e-4078-45ed-9c4e-bf691a2840ed\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qm9ff" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.619654 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/837e5e5e-4078-45ed-9c4e-bf691a2840ed-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-qm9ff\" (UID: \"837e5e5e-4078-45ed-9c4e-bf691a2840ed\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qm9ff" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.620321 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/837e5e5e-4078-45ed-9c4e-bf691a2840ed-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-qm9ff\" (UID: \"837e5e5e-4078-45ed-9c4e-bf691a2840ed\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qm9ff" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.620826 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/837e5e5e-4078-45ed-9c4e-bf691a2840ed-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-qm9ff\" (UID: \"837e5e5e-4078-45ed-9c4e-bf691a2840ed\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qm9ff" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.655088 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bxrk\" (UniqueName: 
\"kubernetes.io/projected/837e5e5e-4078-45ed-9c4e-bf691a2840ed-kube-api-access-8bxrk\") pod \"dnsmasq-dns-58dd9ff6bc-qm9ff\" (UID: \"837e5e5e-4078-45ed-9c4e-bf691a2840ed\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qm9ff" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.682623 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-qm9ff" Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.747823 4606 generic.go:334] "Generic (PLEG): container finished" podID="fd529aec-5f6a-46dd-a04e-f97e47afc754" containerID="ad9a0843f01fba1c0234e4ce522bb7192d5f99bb85726730c6c84b549ea51b0d" exitCode=0 Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.747861 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p8hwf" event={"ID":"fd529aec-5f6a-46dd-a04e-f97e47afc754","Type":"ContainerDied","Data":"ad9a0843f01fba1c0234e4ce522bb7192d5f99bb85726730c6c84b549ea51b0d"} Jan 23 17:09:19 crc kubenswrapper[4606]: I0123 17:09:19.897633 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-khq64"] Jan 23 17:09:20 crc kubenswrapper[4606]: I0123 17:09:20.083974 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-v5qb8"] Jan 23 17:09:20 crc kubenswrapper[4606]: I0123 17:09:20.618273 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7dc4f4847f-hcqjj"] Jan 23 17:09:20 crc kubenswrapper[4606]: I0123 17:09:20.667162 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:09:20 crc kubenswrapper[4606]: I0123 17:09:20.781440 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-khq64" event={"ID":"2661fe67-d059-4581-a333-8957be8402e8","Type":"ContainerStarted","Data":"9b03e287ddcbec7644e1e39ea0feb7b076a216c494959cf1efb58f0fcd55e5d8"} Jan 23 17:09:20 crc kubenswrapper[4606]: I0123 17:09:20.788746 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dc4f4847f-hcqjj" event={"ID":"66b867e7-9d48-4c81-b32e-4a8fb26b9f3a","Type":"ContainerStarted","Data":"6ce1984899706ad70e3ff27b2e2a5bc29bb7cc8bc656f080266155ee19515eed"} Jan 23 17:09:20 crc kubenswrapper[4606]: I0123 17:09:20.796161 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5feb913-77c6-425c-9018-de42010da64e","Type":"ContainerStarted","Data":"eba33893efd034eb9f3b29b199abb8931727696f2ca306c287781674b7308b49"} Jan 23 17:09:20 crc kubenswrapper[4606]: I0123 17:09:20.801968 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-v5qb8" event={"ID":"3d822de6-0f77-4a88-9538-965bf2079ed9","Type":"ContainerStarted","Data":"427c3274cb1bb173059535a0f77301049297117884949f3e5a43e797dca01ed7"} Jan 23 17:09:21 crc kubenswrapper[4606]: I0123 17:09:21.050162 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-tjf95"] Jan 23 17:09:21 crc kubenswrapper[4606]: I0123 17:09:21.062837 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-bfvh4"] Jan 23 17:09:21 crc kubenswrapper[4606]: I0123 17:09:21.097492 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-lzz5x"] Jan 23 17:09:21 crc kubenswrapper[4606]: I0123 17:09:21.112477 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-68db5566bf-2m8sd"] Jan 23 17:09:21 crc kubenswrapper[4606]: I0123 17:09:21.235541 4606 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-bmv5b"] Jan 23 17:09:21 crc kubenswrapper[4606]: I0123 17:09:21.257625 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-7h69g"] Jan 23 17:09:21 crc kubenswrapper[4606]: I0123 17:09:21.286879 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-qm9ff"] Jan 23 17:09:21 crc kubenswrapper[4606]: I0123 17:09:21.839755 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-68db5566bf-2m8sd"] Jan 23 17:09:21 crc kubenswrapper[4606]: I0123 17:09:21.860917 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-lzz5x" event={"ID":"4cb98288-9db7-4a8f-9b1d-1d57a547062b","Type":"ContainerStarted","Data":"12741f81ab145166d18d791235189ede7e65f58afda33e2021c58822067741ee"} Jan 23 17:09:21 crc kubenswrapper[4606]: I0123 17:09:21.865527 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tjf95" event={"ID":"3f5144cd-47f2-4ce5-81d6-68884dbfc739","Type":"ContainerStarted","Data":"4abde82dd98568e9a42b34947e3a20f8c0b41a3ffee283cc5266839d7b8cf173"} Jan 23 17:09:21 crc kubenswrapper[4606]: I0123 17:09:21.873677 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bfvh4" event={"ID":"139f2d63-a754-454b-8fe5-fb33c30ec2bf","Type":"ContainerStarted","Data":"43cceeb37c9bfc58a9cda2405847779a6a7d7a376eab46b473b1e1fec38b2d7a"} Jan 23 17:09:21 crc kubenswrapper[4606]: I0123 17:09:21.876467 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-7h69g" event={"ID":"8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe","Type":"ContainerStarted","Data":"a3973e01131c98a915c867895e3a902269c2759142a582a5bc22fd905821423d"} Jan 23 17:09:21 crc kubenswrapper[4606]: I0123 17:09:21.887806 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bmv5b" event={"ID":"fb6d384e-69de-4bd3-bb7b-bf9346614fce","Type":"ContainerStarted","Data":"59451d410bba10279b24553faef92a7c8fff3f7d8ded7f45cf3e5f85b7dc13c2"} Jan 23 17:09:21 crc kubenswrapper[4606]: I0123 17:09:21.895678 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7b7bf8f6df-4w647"] Jan 23 17:09:21 crc kubenswrapper[4606]: I0123 17:09:21.897051 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7b7bf8f6df-4w647" Jan 23 17:09:21 crc kubenswrapper[4606]: I0123 17:09:21.897794 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-qm9ff" event={"ID":"837e5e5e-4078-45ed-9c4e-bf691a2840ed","Type":"ContainerStarted","Data":"0a5e6f443ae43745fab52c2437a77a33c82bb0bf2cf8a260399696eec9a244e6"} Jan 23 17:09:21 crc kubenswrapper[4606]: I0123 17:09:21.916533 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68db5566bf-2m8sd" event={"ID":"ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c","Type":"ContainerStarted","Data":"ff5a3b7203ba5d72786e5ac0941df9daa6f8e3903bdd11939eac9774672e5641"} Jan 23 17:09:21 crc kubenswrapper[4606]: I0123 17:09:21.925944 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7b7bf8f6df-4w647"] Jan 23 17:09:21 crc kubenswrapper[4606]: I0123 17:09:21.961253 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:09:22 crc kubenswrapper[4606]: I0123 17:09:22.012508 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e23f5510-b5f6-42a1-b751-caa502fba254-scripts\") pod \"horizon-7b7bf8f6df-4w647\" (UID: \"e23f5510-b5f6-42a1-b751-caa502fba254\") " pod="openstack/horizon-7b7bf8f6df-4w647" Jan 23 17:09:22 crc kubenswrapper[4606]: I0123 17:09:22.012576 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e23f5510-b5f6-42a1-b751-caa502fba254-config-data\") pod \"horizon-7b7bf8f6df-4w647\" (UID: \"e23f5510-b5f6-42a1-b751-caa502fba254\") " pod="openstack/horizon-7b7bf8f6df-4w647" Jan 23 17:09:22 crc kubenswrapper[4606]: I0123 17:09:22.012627 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj762\" (UniqueName: \"kubernetes.io/projected/e23f5510-b5f6-42a1-b751-caa502fba254-kube-api-access-jj762\") pod \"horizon-7b7bf8f6df-4w647\" (UID: \"e23f5510-b5f6-42a1-b751-caa502fba254\") " pod="openstack/horizon-7b7bf8f6df-4w647" Jan 23 17:09:22 crc kubenswrapper[4606]: I0123 17:09:22.012672 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e23f5510-b5f6-42a1-b751-caa502fba254-logs\") pod \"horizon-7b7bf8f6df-4w647\" (UID: \"e23f5510-b5f6-42a1-b751-caa502fba254\") " pod="openstack/horizon-7b7bf8f6df-4w647" Jan 23 17:09:22 crc kubenswrapper[4606]: I0123 17:09:22.012709 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e23f5510-b5f6-42a1-b751-caa502fba254-horizon-secret-key\") pod \"horizon-7b7bf8f6df-4w647\" (UID: \"e23f5510-b5f6-42a1-b751-caa502fba254\") " pod="openstack/horizon-7b7bf8f6df-4w647" Jan 23 17:09:22 crc kubenswrapper[4606]: I0123 17:09:22.114167 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj762\" (UniqueName: \"kubernetes.io/projected/e23f5510-b5f6-42a1-b751-caa502fba254-kube-api-access-jj762\") pod \"horizon-7b7bf8f6df-4w647\" (UID: \"e23f5510-b5f6-42a1-b751-caa502fba254\") " pod="openstack/horizon-7b7bf8f6df-4w647" Jan 23 17:09:22 crc kubenswrapper[4606]: I0123 17:09:22.114268 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e23f5510-b5f6-42a1-b751-caa502fba254-logs\") pod \"horizon-7b7bf8f6df-4w647\" (UID: \"e23f5510-b5f6-42a1-b751-caa502fba254\") " pod="openstack/horizon-7b7bf8f6df-4w647" Jan 23 17:09:22 crc kubenswrapper[4606]: I0123 17:09:22.114326 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e23f5510-b5f6-42a1-b751-caa502fba254-horizon-secret-key\") pod \"horizon-7b7bf8f6df-4w647\" (UID: \"e23f5510-b5f6-42a1-b751-caa502fba254\") " pod="openstack/horizon-7b7bf8f6df-4w647" Jan 23 17:09:22 crc kubenswrapper[4606]: I0123 17:09:22.114412 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e23f5510-b5f6-42a1-b751-caa502fba254-scripts\") pod \"horizon-7b7bf8f6df-4w647\" (UID: \"e23f5510-b5f6-42a1-b751-caa502fba254\") " pod="openstack/horizon-7b7bf8f6df-4w647" Jan 23 17:09:22 crc kubenswrapper[4606]: I0123 17:09:22.114465 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e23f5510-b5f6-42a1-b751-caa502fba254-config-data\") pod \"horizon-7b7bf8f6df-4w647\" (UID: \"e23f5510-b5f6-42a1-b751-caa502fba254\") " pod="openstack/horizon-7b7bf8f6df-4w647" Jan 23 17:09:22 crc kubenswrapper[4606]: I0123 17:09:22.114918 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e23f5510-b5f6-42a1-b751-caa502fba254-logs\") pod \"horizon-7b7bf8f6df-4w647\" (UID: \"e23f5510-b5f6-42a1-b751-caa502fba254\") " pod="openstack/horizon-7b7bf8f6df-4w647" Jan 23 17:09:22 crc kubenswrapper[4606]: I0123 17:09:22.115435 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e23f5510-b5f6-42a1-b751-caa502fba254-scripts\") pod \"horizon-7b7bf8f6df-4w647\" (UID: \"e23f5510-b5f6-42a1-b751-caa502fba254\") " pod="openstack/horizon-7b7bf8f6df-4w647" Jan 23 17:09:22 crc kubenswrapper[4606]: I0123 17:09:22.115904 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e23f5510-b5f6-42a1-b751-caa502fba254-config-data\") pod \"horizon-7b7bf8f6df-4w647\" (UID: \"e23f5510-b5f6-42a1-b751-caa502fba254\") " pod="openstack/horizon-7b7bf8f6df-4w647" Jan 23 17:09:22 crc kubenswrapper[4606]: I0123 17:09:22.118652 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e23f5510-b5f6-42a1-b751-caa502fba254-horizon-secret-key\") pod \"horizon-7b7bf8f6df-4w647\" (UID: \"e23f5510-b5f6-42a1-b751-caa502fba254\") " pod="openstack/horizon-7b7bf8f6df-4w647" Jan 23 17:09:22 crc kubenswrapper[4606]: I0123 17:09:22.131833 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj762\" (UniqueName: \"kubernetes.io/projected/e23f5510-b5f6-42a1-b751-caa502fba254-kube-api-access-jj762\") pod \"horizon-7b7bf8f6df-4w647\" (UID: \"e23f5510-b5f6-42a1-b751-caa502fba254\") " pod="openstack/horizon-7b7bf8f6df-4w647" Jan 23 17:09:22 crc kubenswrapper[4606]: I0123 17:09:22.235122 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7b7bf8f6df-4w647" Jan 23 17:09:22 crc kubenswrapper[4606]: I0123 17:09:22.951053 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-v5qb8" event={"ID":"3d822de6-0f77-4a88-9538-965bf2079ed9","Type":"ContainerStarted","Data":"cce42ffa99cfa6746f3fe53d9751dd176d42f3b1f3eb90e8bb8736ceda609654"} Jan 23 17:09:22 crc kubenswrapper[4606]: I0123 17:09:22.956470 4606 generic.go:334] "Generic (PLEG): container finished" podID="837e5e5e-4078-45ed-9c4e-bf691a2840ed" containerID="67e20a7e66c7596087c2c7c42e083ece7cd51d1110914a2eaa5d62fd519f1566" exitCode=0 Jan 23 17:09:22 crc kubenswrapper[4606]: I0123 17:09:22.956519 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-qm9ff" event={"ID":"837e5e5e-4078-45ed-9c4e-bf691a2840ed","Type":"ContainerDied","Data":"67e20a7e66c7596087c2c7c42e083ece7cd51d1110914a2eaa5d62fd519f1566"} Jan 23 17:09:22 crc kubenswrapper[4606]: I0123 17:09:22.986067 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-v5qb8" podStartSLOduration=5.986051632 podStartE2EDuration="5.986051632s" podCreationTimestamp="2026-01-23 17:09:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:09:22.984992216 +0000 UTC m=+996.985184622" watchObservedRunningTime="2026-01-23 17:09:22.986051632 +0000 UTC m=+996.986244038" Jan 23 17:09:23 crc kubenswrapper[4606]: I0123 17:09:23.008253 4606 generic.go:334] "Generic (PLEG): container finished" podID="2661fe67-d059-4581-a333-8957be8402e8" containerID="9dc448efce9df234cced9ddab98d263f439dbade971d8a314f9ca2e0bd8a81ad" exitCode=0 Jan 23 17:09:23 crc kubenswrapper[4606]: I0123 17:09:23.008314 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-khq64" event={"ID":"2661fe67-d059-4581-a333-8957be8402e8","Type":"ContainerDied","Data":"9dc448efce9df234cced9ddab98d263f439dbade971d8a314f9ca2e0bd8a81ad"} Jan 23 17:09:23 crc kubenswrapper[4606]: I0123 17:09:23.115468 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5vzbb" event={"ID":"5b1b5e49-bdba-4463-b025-fc30bedd63f0","Type":"ContainerStarted","Data":"7facb45390c548164fd91699de258f3b275995c36e5391edf94c935531598be3"} Jan 23 17:09:23 crc kubenswrapper[4606]: I0123 17:09:23.149217 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bmv5b" event={"ID":"fb6d384e-69de-4bd3-bb7b-bf9346614fce","Type":"ContainerStarted","Data":"f5a35f4a31f2db0c926042d0e165f78ba5ced4fc693ed5b73c6b4e0440d6c6bb"} Jan 23 17:09:23 crc kubenswrapper[4606]: I0123 17:09:23.151855 4606 generic.go:334] "Generic (PLEG): container finished" podID="8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe" containerID="c65599ef7630b2e9678dbba87b8343b8907490a354a2a2c2c23af8a484bf5b56" exitCode=0 Jan 23 17:09:23 crc kubenswrapper[4606]: I0123 17:09:23.151896 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-7h69g" event={"ID":"8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe","Type":"ContainerDied","Data":"c65599ef7630b2e9678dbba87b8343b8907490a354a2a2c2c23af8a484bf5b56"} Jan 23 17:09:23 crc kubenswrapper[4606]: I0123 17:09:23.182406 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-bmv5b" podStartSLOduration=5.182384847 podStartE2EDuration="5.182384847s" podCreationTimestamp="2026-01-23 17:09:18 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:09:23.178815439 +0000 UTC m=+997.179007865" watchObservedRunningTime="2026-01-23 17:09:23.182384847 +0000 UTC m=+997.182577253" Jan 23 17:09:23 crc kubenswrapper[4606]: I0123 17:09:23.184407 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-5vzbb" podStartSLOduration=7.034766163 podStartE2EDuration="42.184399498s" podCreationTimestamp="2026-01-23 17:08:41 +0000 UTC" firstStartedPulling="2026-01-23 17:08:42.668004515 +0000 UTC m=+956.668196921" lastFinishedPulling="2026-01-23 17:09:17.81763785 +0000 UTC m=+991.817830256" observedRunningTime="2026-01-23 17:09:23.138742949 +0000 UTC m=+997.138935355" watchObservedRunningTime="2026-01-23 17:09:23.184399498 +0000 UTC m=+997.184591904" Jan 23 17:09:23 crc kubenswrapper[4606]: I0123 17:09:23.390054 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7b7bf8f6df-4w647"] Jan 23 17:09:23 crc kubenswrapper[4606]: I0123 17:09:23.722104 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-khq64" Jan 23 17:09:23 crc kubenswrapper[4606]: I0123 17:09:23.854565 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2661fe67-d059-4581-a333-8957be8402e8-config\") pod \"2661fe67-d059-4581-a333-8957be8402e8\" (UID: \"2661fe67-d059-4581-a333-8957be8402e8\") " Jan 23 17:09:23 crc kubenswrapper[4606]: I0123 17:09:23.854646 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2661fe67-d059-4581-a333-8957be8402e8-ovsdbserver-nb\") pod \"2661fe67-d059-4581-a333-8957be8402e8\" (UID: \"2661fe67-d059-4581-a333-8957be8402e8\") " Jan 23 17:09:23 crc kubenswrapper[4606]: I0123 17:09:23.854680 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2661fe67-d059-4581-a333-8957be8402e8-ovsdbserver-sb\") pod \"2661fe67-d059-4581-a333-8957be8402e8\" (UID: \"2661fe67-d059-4581-a333-8957be8402e8\") " Jan 23 17:09:23 crc kubenswrapper[4606]: I0123 17:09:23.854800 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2661fe67-d059-4581-a333-8957be8402e8-dns-svc\") pod \"2661fe67-d059-4581-a333-8957be8402e8\" (UID: \"2661fe67-d059-4581-a333-8957be8402e8\") " Jan 23 17:09:23 crc kubenswrapper[4606]: I0123 17:09:23.854853 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm69j\" (UniqueName: \"kubernetes.io/projected/2661fe67-d059-4581-a333-8957be8402e8-kube-api-access-vm69j\") pod \"2661fe67-d059-4581-a333-8957be8402e8\" (UID: \"2661fe67-d059-4581-a333-8957be8402e8\") " Jan 23 17:09:23 crc kubenswrapper[4606]: I0123 17:09:23.877994 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2661fe67-d059-4581-a333-8957be8402e8-kube-api-access-vm69j" (OuterVolumeSpecName: "kube-api-access-vm69j") pod "2661fe67-d059-4581-a333-8957be8402e8" (UID: "2661fe67-d059-4581-a333-8957be8402e8"). InnerVolumeSpecName "kube-api-access-vm69j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:09:23 crc kubenswrapper[4606]: I0123 17:09:23.878859 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vm69j\" (UniqueName: \"kubernetes.io/projected/2661fe67-d059-4581-a333-8957be8402e8-kube-api-access-vm69j\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:23 crc kubenswrapper[4606]: I0123 17:09:23.904433 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2661fe67-d059-4581-a333-8957be8402e8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2661fe67-d059-4581-a333-8957be8402e8" (UID: "2661fe67-d059-4581-a333-8957be8402e8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:09:23 crc kubenswrapper[4606]: I0123 17:09:23.921484 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2661fe67-d059-4581-a333-8957be8402e8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2661fe67-d059-4581-a333-8957be8402e8" (UID: "2661fe67-d059-4581-a333-8957be8402e8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:09:23 crc kubenswrapper[4606]: I0123 17:09:23.977658 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2661fe67-d059-4581-a333-8957be8402e8-config" (OuterVolumeSpecName: "config") pod "2661fe67-d059-4581-a333-8957be8402e8" (UID: "2661fe67-d059-4581-a333-8957be8402e8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:09:23 crc kubenswrapper[4606]: I0123 17:09:23.978392 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2661fe67-d059-4581-a333-8957be8402e8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2661fe67-d059-4581-a333-8957be8402e8" (UID: "2661fe67-d059-4581-a333-8957be8402e8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:09:23 crc kubenswrapper[4606]: I0123 17:09:23.985617 4606 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2661fe67-d059-4581-a333-8957be8402e8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:23 crc kubenswrapper[4606]: I0123 17:09:23.985655 4606 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2661fe67-d059-4581-a333-8957be8402e8-config\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:23 crc kubenswrapper[4606]: I0123 17:09:23.985668 4606 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2661fe67-d059-4581-a333-8957be8402e8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:23 crc kubenswrapper[4606]: I0123 17:09:23.985713 4606 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2661fe67-d059-4581-a333-8957be8402e8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:24 crc kubenswrapper[4606]: I0123 17:09:24.098977 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-7h69g" Jan 23 17:09:24 crc kubenswrapper[4606]: I0123 17:09:24.178038 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-7h69g" event={"ID":"8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe","Type":"ContainerDied","Data":"a3973e01131c98a915c867895e3a902269c2759142a582a5bc22fd905821423d"} Jan 23 17:09:24 crc kubenswrapper[4606]: I0123 17:09:24.178088 4606 scope.go:117] "RemoveContainer" containerID="c65599ef7630b2e9678dbba87b8343b8907490a354a2a2c2c23af8a484bf5b56" Jan 23 17:09:24 crc kubenswrapper[4606]: I0123 17:09:24.178190 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-7h69g" Jan 23 17:09:24 crc kubenswrapper[4606]: I0123 17:09:24.185640 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b7bf8f6df-4w647" event={"ID":"e23f5510-b5f6-42a1-b751-caa502fba254","Type":"ContainerStarted","Data":"7b16d14a873d06bb4dae1a85b68767f50b5af2ec6f2fa642afd557d005bebcfc"} Jan 23 17:09:24 crc kubenswrapper[4606]: I0123 17:09:24.189302 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe-dns-swift-storage-0\") pod \"8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe\" (UID: \"8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe\") " Jan 23 17:09:24 crc kubenswrapper[4606]: I0123 17:09:24.189401 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe-config\") pod \"8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe\" (UID: \"8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe\") " Jan 23 17:09:24 crc kubenswrapper[4606]: I0123 17:09:24.189529 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe-ovsdbserver-nb\") pod \"8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe\" (UID: \"8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe\") " Jan 23 17:09:24 crc kubenswrapper[4606]: I0123 17:09:24.189547 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87f25\" (UniqueName: \"kubernetes.io/projected/8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe-kube-api-access-87f25\") pod \"8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe\" (UID: \"8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe\") " Jan 23 17:09:24 crc kubenswrapper[4606]: I0123 17:09:24.189712 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe-dns-svc\") pod \"8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe\" (UID: \"8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe\") " Jan 23 17:09:24 crc kubenswrapper[4606]: I0123 17:09:24.189734 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe-ovsdbserver-sb\") pod \"8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe\" (UID: \"8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe\") " Jan 23 17:09:24 crc kubenswrapper[4606]: I0123 17:09:24.193104 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-qm9ff" event={"ID":"837e5e5e-4078-45ed-9c4e-bf691a2840ed","Type":"ContainerStarted","Data":"b839f40f6ca194f9235c96180eaf6b787694c40fbbe0cc772dc4e2040840bb9e"} Jan 23 17:09:24 crc kubenswrapper[4606]: I0123 
17:09:24.194502 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-qm9ff" Jan 23 17:09:24 crc kubenswrapper[4606]: I0123 17:09:24.199285 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe-kube-api-access-87f25" (OuterVolumeSpecName: "kube-api-access-87f25") pod "8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe" (UID: "8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe"). InnerVolumeSpecName "kube-api-access-87f25". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:09:24 crc kubenswrapper[4606]: I0123 17:09:24.204767 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-khq64" event={"ID":"2661fe67-d059-4581-a333-8957be8402e8","Type":"ContainerDied","Data":"9b03e287ddcbec7644e1e39ea0feb7b076a216c494959cf1efb58f0fcd55e5d8"} Jan 23 17:09:24 crc kubenswrapper[4606]: I0123 17:09:24.204852 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-khq64" Jan 23 17:09:24 crc kubenswrapper[4606]: I0123 17:09:24.229479 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe" (UID: "8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:09:24 crc kubenswrapper[4606]: I0123 17:09:24.230231 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe" (UID: "8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:09:24 crc kubenswrapper[4606]: I0123 17:09:24.232661 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-qm9ff" podStartSLOduration=6.232638113 podStartE2EDuration="6.232638113s" podCreationTimestamp="2026-01-23 17:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:09:24.211070705 +0000 UTC m=+998.211263111" watchObservedRunningTime="2026-01-23 17:09:24.232638113 +0000 UTC m=+998.232830519" Jan 23 17:09:24 crc kubenswrapper[4606]: I0123 17:09:24.241490 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe" (UID: "8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:09:24 crc kubenswrapper[4606]: I0123 17:09:24.243295 4606 generic.go:334] "Generic (PLEG): container finished" podID="fd529aec-5f6a-46dd-a04e-f97e47afc754" containerID="cd5dd3c45ec22e46b95d137ac579ca7048641f3df0d73fa6138b1ea57ff15880" exitCode=0 Jan 23 17:09:24 crc kubenswrapper[4606]: I0123 17:09:24.243551 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p8hwf" event={"ID":"fd529aec-5f6a-46dd-a04e-f97e47afc754","Type":"ContainerDied","Data":"cd5dd3c45ec22e46b95d137ac579ca7048641f3df0d73fa6138b1ea57ff15880"} Jan 23 17:09:24 crc kubenswrapper[4606]: I0123 17:09:24.269918 4606 scope.go:117] "RemoveContainer" containerID="9dc448efce9df234cced9ddab98d263f439dbade971d8a314f9ca2e0bd8a81ad" Jan 23 17:09:24 crc kubenswrapper[4606]: I0123 17:09:24.293856 4606 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:24 crc kubenswrapper[4606]: I0123 17:09:24.293905 4606 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:24 crc kubenswrapper[4606]: I0123 17:09:24.293914 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87f25\" (UniqueName: \"kubernetes.io/projected/8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe-kube-api-access-87f25\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:24 crc kubenswrapper[4606]: I0123 17:09:24.293925 4606 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:24 crc kubenswrapper[4606]: I0123 17:09:24.309083 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe-config" (OuterVolumeSpecName: "config") pod "8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe" (UID: "8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:09:24 crc kubenswrapper[4606]: I0123 17:09:24.309660 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe" (UID: "8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:09:24 crc kubenswrapper[4606]: I0123 17:09:24.357659 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-khq64"] Jan 23 17:09:24 crc kubenswrapper[4606]: I0123 17:09:24.396008 4606 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe-config\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:24 crc kubenswrapper[4606]: I0123 17:09:24.396035 4606 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:24 crc kubenswrapper[4606]: I0123 17:09:24.412094 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-khq64"] Jan 23 17:09:24 crc kubenswrapper[4606]: I0123 17:09:24.544422 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-7h69g"] Jan 23 17:09:24 crc kubenswrapper[4606]: I0123 17:09:24.564040 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-7h69g"] Jan 23 17:09:26 crc kubenswrapper[4606]: I0123 17:09:26.302529 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p8hwf" event={"ID":"fd529aec-5f6a-46dd-a04e-f97e47afc754","Type":"ContainerStarted","Data":"b4964a3a6bce500a8ba385cec3b81651c48504e67e10709beb116786c1bd11d3"} Jan 23 17:09:26 crc kubenswrapper[4606]: I0123 17:09:26.337457 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p8hwf" podStartSLOduration=3.685276568 podStartE2EDuration="9.337436342s" podCreationTimestamp="2026-01-23 17:09:17 +0000 UTC" firstStartedPulling="2026-01-23 17:09:19.784082008 +0000 UTC m=+993.784274414" lastFinishedPulling="2026-01-23 17:09:25.436241782 +0000 UTC m=+999.436434188" observedRunningTime="2026-01-23 17:09:26.33457078 +0000 UTC m=+1000.334763196" watchObservedRunningTime="2026-01-23 17:09:26.337436342 +0000 UTC m=+1000.337628748" Jan 23 17:09:26 crc kubenswrapper[4606]: I0123 17:09:26.386449 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2661fe67-d059-4581-a333-8957be8402e8" path="/var/lib/kubelet/pods/2661fe67-d059-4581-a333-8957be8402e8/volumes" Jan 23 17:09:26 crc kubenswrapper[4606]: I0123 17:09:26.387018 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe" path="/var/lib/kubelet/pods/8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe/volumes" Jan 23 17:09:27 crc kubenswrapper[4606]: I0123 17:09:27.466833 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p8hwf" Jan 23 17:09:27 crc kubenswrapper[4606]: I0123 17:09:27.467057 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p8hwf" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.032349 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7dc4f4847f-hcqjj"] Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.094193 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5b9775877b-mcc82"] Jan 23 17:09:28 crc kubenswrapper[4606]: E0123 17:09:28.094626 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2661fe67-d059-4581-a333-8957be8402e8" 
containerName="init" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.094640 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="2661fe67-d059-4581-a333-8957be8402e8" containerName="init" Jan 23 17:09:28 crc kubenswrapper[4606]: E0123 17:09:28.094659 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe" containerName="init" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.094666 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe" containerName="init" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.096057 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bb9e55f-3383-47a9-9341-dcf8e5eb2dfe" containerName="init" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.096089 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="2661fe67-d059-4581-a333-8957be8402e8" containerName="init" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.097184 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b9775877b-mcc82" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.102526 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.111105 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b9775877b-mcc82"] Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.195836 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1dd13584-a1b9-4891-85a7-01afd519f746-config-data\") pod \"horizon-5b9775877b-mcc82\" (UID: \"1dd13584-a1b9-4891-85a7-01afd519f746\") " pod="openstack/horizon-5b9775877b-mcc82" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.195889 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdnw2\" (UniqueName: \"kubernetes.io/projected/1dd13584-a1b9-4891-85a7-01afd519f746-kube-api-access-vdnw2\") pod \"horizon-5b9775877b-mcc82\" (UID: \"1dd13584-a1b9-4891-85a7-01afd519f746\") " pod="openstack/horizon-5b9775877b-mcc82" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.195931 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dd13584-a1b9-4891-85a7-01afd519f746-combined-ca-bundle\") pod \"horizon-5b9775877b-mcc82\" (UID: \"1dd13584-a1b9-4891-85a7-01afd519f746\") " pod="openstack/horizon-5b9775877b-mcc82" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.195945 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1dd13584-a1b9-4891-85a7-01afd519f746-logs\") pod \"horizon-5b9775877b-mcc82\" (UID: \"1dd13584-a1b9-4891-85a7-01afd519f746\") " pod="openstack/horizon-5b9775877b-mcc82" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.195969 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1dd13584-a1b9-4891-85a7-01afd519f746-horizon-secret-key\") pod \"horizon-5b9775877b-mcc82\" (UID: \"1dd13584-a1b9-4891-85a7-01afd519f746\") " pod="openstack/horizon-5b9775877b-mcc82" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.196005 4606 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1dd13584-a1b9-4891-85a7-01afd519f746-horizon-tls-certs\") pod \"horizon-5b9775877b-mcc82\" (UID: \"1dd13584-a1b9-4891-85a7-01afd519f746\") " pod="openstack/horizon-5b9775877b-mcc82" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.196027 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1dd13584-a1b9-4891-85a7-01afd519f746-scripts\") pod \"horizon-5b9775877b-mcc82\" (UID: \"1dd13584-a1b9-4891-85a7-01afd519f746\") " pod="openstack/horizon-5b9775877b-mcc82" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.209040 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7b7bf8f6df-4w647"] Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.240525 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-57977896f6-qrk2s"] Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.244654 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-57977896f6-qrk2s" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.285136 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-57977896f6-qrk2s"] Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.312320 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdnw2\" (UniqueName: \"kubernetes.io/projected/1dd13584-a1b9-4891-85a7-01afd519f746-kube-api-access-vdnw2\") pod \"horizon-5b9775877b-mcc82\" (UID: \"1dd13584-a1b9-4891-85a7-01afd519f746\") " pod="openstack/horizon-5b9775877b-mcc82" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.312376 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dd13584-a1b9-4891-85a7-01afd519f746-combined-ca-bundle\") pod \"horizon-5b9775877b-mcc82\" (UID: \"1dd13584-a1b9-4891-85a7-01afd519f746\") " pod="openstack/horizon-5b9775877b-mcc82" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.312403 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1dd13584-a1b9-4891-85a7-01afd519f746-logs\") pod \"horizon-5b9775877b-mcc82\" (UID: \"1dd13584-a1b9-4891-85a7-01afd519f746\") " pod="openstack/horizon-5b9775877b-mcc82" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.312458 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1dd13584-a1b9-4891-85a7-01afd519f746-horizon-secret-key\") pod \"horizon-5b9775877b-mcc82\" (UID: \"1dd13584-a1b9-4891-85a7-01afd519f746\") " pod="openstack/horizon-5b9775877b-mcc82" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.312523 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1dd13584-a1b9-4891-85a7-01afd519f746-horizon-tls-certs\") pod \"horizon-5b9775877b-mcc82\" (UID: \"1dd13584-a1b9-4891-85a7-01afd519f746\") " pod="openstack/horizon-5b9775877b-mcc82" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.312563 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1dd13584-a1b9-4891-85a7-01afd519f746-scripts\") pod \"horizon-5b9775877b-mcc82\" (UID: 
\"1dd13584-a1b9-4891-85a7-01afd519f746\") " pod="openstack/horizon-5b9775877b-mcc82" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.312834 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1dd13584-a1b9-4891-85a7-01afd519f746-config-data\") pod \"horizon-5b9775877b-mcc82\" (UID: \"1dd13584-a1b9-4891-85a7-01afd519f746\") " pod="openstack/horizon-5b9775877b-mcc82" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.314029 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1dd13584-a1b9-4891-85a7-01afd519f746-config-data\") pod \"horizon-5b9775877b-mcc82\" (UID: \"1dd13584-a1b9-4891-85a7-01afd519f746\") " pod="openstack/horizon-5b9775877b-mcc82" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.320863 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1dd13584-a1b9-4891-85a7-01afd519f746-logs\") pod \"horizon-5b9775877b-mcc82\" (UID: \"1dd13584-a1b9-4891-85a7-01afd519f746\") " pod="openstack/horizon-5b9775877b-mcc82" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.325834 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1dd13584-a1b9-4891-85a7-01afd519f746-scripts\") pod \"horizon-5b9775877b-mcc82\" (UID: \"1dd13584-a1b9-4891-85a7-01afd519f746\") " pod="openstack/horizon-5b9775877b-mcc82" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.333355 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1dd13584-a1b9-4891-85a7-01afd519f746-horizon-tls-certs\") pod \"horizon-5b9775877b-mcc82\" (UID: \"1dd13584-a1b9-4891-85a7-01afd519f746\") " pod="openstack/horizon-5b9775877b-mcc82" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.333628 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdnw2\" (UniqueName: \"kubernetes.io/projected/1dd13584-a1b9-4891-85a7-01afd519f746-kube-api-access-vdnw2\") pod \"horizon-5b9775877b-mcc82\" (UID: \"1dd13584-a1b9-4891-85a7-01afd519f746\") " pod="openstack/horizon-5b9775877b-mcc82" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.354656 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dd13584-a1b9-4891-85a7-01afd519f746-combined-ca-bundle\") pod \"horizon-5b9775877b-mcc82\" (UID: \"1dd13584-a1b9-4891-85a7-01afd519f746\") " pod="openstack/horizon-5b9775877b-mcc82" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.394177 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1dd13584-a1b9-4891-85a7-01afd519f746-horizon-secret-key\") pod \"horizon-5b9775877b-mcc82\" (UID: \"1dd13584-a1b9-4891-85a7-01afd519f746\") " pod="openstack/horizon-5b9775877b-mcc82" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.418593 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59fc19de-8c69-43e4-bf0d-1cb8ea9a9120-logs\") pod \"horizon-57977896f6-qrk2s\" (UID: \"59fc19de-8c69-43e4-bf0d-1cb8ea9a9120\") " pod="openstack/horizon-57977896f6-qrk2s" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.418634 4606 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvhzj\" (UniqueName: \"kubernetes.io/projected/59fc19de-8c69-43e4-bf0d-1cb8ea9a9120-kube-api-access-hvhzj\") pod \"horizon-57977896f6-qrk2s\" (UID: \"59fc19de-8c69-43e4-bf0d-1cb8ea9a9120\") " pod="openstack/horizon-57977896f6-qrk2s" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.418682 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/59fc19de-8c69-43e4-bf0d-1cb8ea9a9120-scripts\") pod \"horizon-57977896f6-qrk2s\" (UID: \"59fc19de-8c69-43e4-bf0d-1cb8ea9a9120\") " pod="openstack/horizon-57977896f6-qrk2s" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.418725 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59fc19de-8c69-43e4-bf0d-1cb8ea9a9120-combined-ca-bundle\") pod \"horizon-57977896f6-qrk2s\" (UID: \"59fc19de-8c69-43e4-bf0d-1cb8ea9a9120\") " pod="openstack/horizon-57977896f6-qrk2s" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.418750 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/59fc19de-8c69-43e4-bf0d-1cb8ea9a9120-horizon-secret-key\") pod \"horizon-57977896f6-qrk2s\" (UID: \"59fc19de-8c69-43e4-bf0d-1cb8ea9a9120\") " pod="openstack/horizon-57977896f6-qrk2s" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.418780 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59fc19de-8c69-43e4-bf0d-1cb8ea9a9120-config-data\") pod \"horizon-57977896f6-qrk2s\" (UID: \"59fc19de-8c69-43e4-bf0d-1cb8ea9a9120\") " pod="openstack/horizon-57977896f6-qrk2s" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.418800 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/59fc19de-8c69-43e4-bf0d-1cb8ea9a9120-horizon-tls-certs\") pod \"horizon-57977896f6-qrk2s\" (UID: \"59fc19de-8c69-43e4-bf0d-1cb8ea9a9120\") " pod="openstack/horizon-57977896f6-qrk2s" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.434791 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5b9775877b-mcc82" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.520865 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59fc19de-8c69-43e4-bf0d-1cb8ea9a9120-logs\") pod \"horizon-57977896f6-qrk2s\" (UID: \"59fc19de-8c69-43e4-bf0d-1cb8ea9a9120\") " pod="openstack/horizon-57977896f6-qrk2s" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.520934 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvhzj\" (UniqueName: \"kubernetes.io/projected/59fc19de-8c69-43e4-bf0d-1cb8ea9a9120-kube-api-access-hvhzj\") pod \"horizon-57977896f6-qrk2s\" (UID: \"59fc19de-8c69-43e4-bf0d-1cb8ea9a9120\") " pod="openstack/horizon-57977896f6-qrk2s" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.521009 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/59fc19de-8c69-43e4-bf0d-1cb8ea9a9120-scripts\") pod \"horizon-57977896f6-qrk2s\" (UID: \"59fc19de-8c69-43e4-bf0d-1cb8ea9a9120\") " pod="openstack/horizon-57977896f6-qrk2s" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.521417 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59fc19de-8c69-43e4-bf0d-1cb8ea9a9120-logs\") pod \"horizon-57977896f6-qrk2s\" (UID: \"59fc19de-8c69-43e4-bf0d-1cb8ea9a9120\") " pod="openstack/horizon-57977896f6-qrk2s" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.521880 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/59fc19de-8c69-43e4-bf0d-1cb8ea9a9120-scripts\") pod \"horizon-57977896f6-qrk2s\" (UID: \"59fc19de-8c69-43e4-bf0d-1cb8ea9a9120\") " pod="openstack/horizon-57977896f6-qrk2s" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.522029 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59fc19de-8c69-43e4-bf0d-1cb8ea9a9120-combined-ca-bundle\") pod \"horizon-57977896f6-qrk2s\" (UID: \"59fc19de-8c69-43e4-bf0d-1cb8ea9a9120\") " pod="openstack/horizon-57977896f6-qrk2s" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.522070 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/59fc19de-8c69-43e4-bf0d-1cb8ea9a9120-horizon-secret-key\") pod \"horizon-57977896f6-qrk2s\" (UID: \"59fc19de-8c69-43e4-bf0d-1cb8ea9a9120\") " pod="openstack/horizon-57977896f6-qrk2s" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.522127 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59fc19de-8c69-43e4-bf0d-1cb8ea9a9120-config-data\") pod \"horizon-57977896f6-qrk2s\" (UID: \"59fc19de-8c69-43e4-bf0d-1cb8ea9a9120\") " pod="openstack/horizon-57977896f6-qrk2s" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.522141 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/59fc19de-8c69-43e4-bf0d-1cb8ea9a9120-horizon-tls-certs\") pod \"horizon-57977896f6-qrk2s\" (UID: \"59fc19de-8c69-43e4-bf0d-1cb8ea9a9120\") " pod="openstack/horizon-57977896f6-qrk2s" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.524037 4606 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59fc19de-8c69-43e4-bf0d-1cb8ea9a9120-config-data\") pod \"horizon-57977896f6-qrk2s\" (UID: \"59fc19de-8c69-43e4-bf0d-1cb8ea9a9120\") " pod="openstack/horizon-57977896f6-qrk2s" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.528303 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59fc19de-8c69-43e4-bf0d-1cb8ea9a9120-combined-ca-bundle\") pod \"horizon-57977896f6-qrk2s\" (UID: \"59fc19de-8c69-43e4-bf0d-1cb8ea9a9120\") " pod="openstack/horizon-57977896f6-qrk2s" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.528823 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/59fc19de-8c69-43e4-bf0d-1cb8ea9a9120-horizon-secret-key\") pod \"horizon-57977896f6-qrk2s\" (UID: \"59fc19de-8c69-43e4-bf0d-1cb8ea9a9120\") " pod="openstack/horizon-57977896f6-qrk2s" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.548162 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvhzj\" (UniqueName: \"kubernetes.io/projected/59fc19de-8c69-43e4-bf0d-1cb8ea9a9120-kube-api-access-hvhzj\") pod \"horizon-57977896f6-qrk2s\" (UID: \"59fc19de-8c69-43e4-bf0d-1cb8ea9a9120\") " pod="openstack/horizon-57977896f6-qrk2s" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.554421 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/59fc19de-8c69-43e4-bf0d-1cb8ea9a9120-horizon-tls-certs\") pod \"horizon-57977896f6-qrk2s\" (UID: \"59fc19de-8c69-43e4-bf0d-1cb8ea9a9120\") " pod="openstack/horizon-57977896f6-qrk2s" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.608524 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-57977896f6-qrk2s" Jan 23 17:09:28 crc kubenswrapper[4606]: I0123 17:09:28.638987 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-p8hwf" podUID="fd529aec-5f6a-46dd-a04e-f97e47afc754" containerName="registry-server" probeResult="failure" output=< Jan 23 17:09:28 crc kubenswrapper[4606]: timeout: failed to connect service ":50051" within 1s Jan 23 17:09:28 crc kubenswrapper[4606]: > Jan 23 17:09:29 crc kubenswrapper[4606]: I0123 17:09:29.684708 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58dd9ff6bc-qm9ff" Jan 23 17:09:29 crc kubenswrapper[4606]: I0123 17:09:29.741870 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-2brrb"] Jan 23 17:09:29 crc kubenswrapper[4606]: I0123 17:09:29.746856 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-2brrb" podUID="d6a71de6-abb9-41ea-84ac-37aba42f4743" containerName="dnsmasq-dns" containerID="cri-o://ff7ce9c8744f26e5614493c694422e9a094ab9e5f86648842927cca884f0226e" gracePeriod=10 Jan 23 17:09:29 crc kubenswrapper[4606]: I0123 17:09:29.934455 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:09:29 crc kubenswrapper[4606]: I0123 17:09:29.934505 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:09:30 crc kubenswrapper[4606]: I0123 17:09:30.408707 4606 generic.go:334] "Generic (PLEG): container finished" podID="3d822de6-0f77-4a88-9538-965bf2079ed9" containerID="cce42ffa99cfa6746f3fe53d9751dd176d42f3b1f3eb90e8bb8736ceda609654" exitCode=0 Jan 23 17:09:30 crc kubenswrapper[4606]: I0123 17:09:30.408946 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-v5qb8" event={"ID":"3d822de6-0f77-4a88-9538-965bf2079ed9","Type":"ContainerDied","Data":"cce42ffa99cfa6746f3fe53d9751dd176d42f3b1f3eb90e8bb8736ceda609654"} Jan 23 17:09:30 crc kubenswrapper[4606]: I0123 17:09:30.417798 4606 generic.go:334] "Generic (PLEG): container finished" podID="d6a71de6-abb9-41ea-84ac-37aba42f4743" containerID="ff7ce9c8744f26e5614493c694422e9a094ab9e5f86648842927cca884f0226e" exitCode=0 Jan 23 17:09:30 crc kubenswrapper[4606]: I0123 17:09:30.417861 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-2brrb" event={"ID":"d6a71de6-abb9-41ea-84ac-37aba42f4743","Type":"ContainerDied","Data":"ff7ce9c8744f26e5614493c694422e9a094ab9e5f86648842927cca884f0226e"} Jan 23 17:09:33 crc kubenswrapper[4606]: I0123 17:09:33.336142 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-2brrb" podUID="d6a71de6-abb9-41ea-84ac-37aba42f4743" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.118:5353: connect: connection refused" Jan 23 17:09:37 crc kubenswrapper[4606]: I0123 17:09:37.572834 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-p8hwf" Jan 23 17:09:37 crc kubenswrapper[4606]: I0123 17:09:37.641429 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p8hwf" Jan 23 17:09:37 crc kubenswrapper[4606]: I0123 17:09:37.811826 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p8hwf"] Jan 23 17:09:38 crc kubenswrapper[4606]: I0123 17:09:38.336016 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-2brrb" podUID="d6a71de6-abb9-41ea-84ac-37aba42f4743" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.118:5353: connect: connection refused" Jan 23 17:09:39 crc kubenswrapper[4606]: I0123 17:09:39.500508 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p8hwf" podUID="fd529aec-5f6a-46dd-a04e-f97e47afc754" containerName="registry-server" containerID="cri-o://b4964a3a6bce500a8ba385cec3b81651c48504e67e10709beb116786c1bd11d3" gracePeriod=2 Jan 23 17:09:40 crc kubenswrapper[4606]: I0123 17:09:40.515881 4606 generic.go:334] "Generic (PLEG): container finished" podID="fd529aec-5f6a-46dd-a04e-f97e47afc754" containerID="b4964a3a6bce500a8ba385cec3b81651c48504e67e10709beb116786c1bd11d3" exitCode=0 Jan 23 17:09:40 crc kubenswrapper[4606]: I0123 17:09:40.516238 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p8hwf" event={"ID":"fd529aec-5f6a-46dd-a04e-f97e47afc754","Type":"ContainerDied","Data":"b4964a3a6bce500a8ba385cec3b81651c48504e67e10709beb116786c1bd11d3"} Jan 23 17:09:43 crc kubenswrapper[4606]: I0123 17:09:43.336081 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-2brrb" podUID="d6a71de6-abb9-41ea-84ac-37aba42f4743" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.118:5353: connect: connection refused" Jan 23 17:09:43 crc kubenswrapper[4606]: I0123 17:09:43.336456 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-2brrb" Jan 23 17:09:43 crc kubenswrapper[4606]: I0123 17:09:43.553105 4606 generic.go:334] "Generic (PLEG): container finished" podID="5b1b5e49-bdba-4463-b025-fc30bedd63f0" containerID="7facb45390c548164fd91699de258f3b275995c36e5391edf94c935531598be3" exitCode=0 Jan 23 17:09:43 crc kubenswrapper[4606]: I0123 17:09:43.553257 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5vzbb" event={"ID":"5b1b5e49-bdba-4463-b025-fc30bedd63f0","Type":"ContainerDied","Data":"7facb45390c548164fd91699de258f3b275995c36e5391edf94c935531598be3"} Jan 23 17:09:47 crc kubenswrapper[4606]: E0123 17:09:47.467120 4606 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b4964a3a6bce500a8ba385cec3b81651c48504e67e10709beb116786c1bd11d3 is running failed: container process not found" containerID="b4964a3a6bce500a8ba385cec3b81651c48504e67e10709beb116786c1bd11d3" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 17:09:47 crc kubenswrapper[4606]: E0123 17:09:47.467895 4606 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b4964a3a6bce500a8ba385cec3b81651c48504e67e10709beb116786c1bd11d3 is running failed: container process not found" 
containerID="b4964a3a6bce500a8ba385cec3b81651c48504e67e10709beb116786c1bd11d3" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 17:09:47 crc kubenswrapper[4606]: E0123 17:09:47.468121 4606 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b4964a3a6bce500a8ba385cec3b81651c48504e67e10709beb116786c1bd11d3 is running failed: container process not found" containerID="b4964a3a6bce500a8ba385cec3b81651c48504e67e10709beb116786c1bd11d3" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 17:09:47 crc kubenswrapper[4606]: E0123 17:09:47.468147 4606 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b4964a3a6bce500a8ba385cec3b81651c48504e67e10709beb116786c1bd11d3 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-p8hwf" podUID="fd529aec-5f6a-46dd-a04e-f97e47afc754" containerName="registry-server" Jan 23 17:09:48 crc kubenswrapper[4606]: I0123 17:09:48.335992 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-2brrb" podUID="d6a71de6-abb9-41ea-84ac-37aba42f4743" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.118:5353: connect: connection refused" Jan 23 17:09:48 crc kubenswrapper[4606]: E0123 17:09:48.767528 4606 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 23 17:09:48 crc kubenswrapper[4606]: E0123 17:09:48.767749 4606 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n99h676h56dh59fh669h8h68ch67fh649h76h575h554hc8h595h99h5f5h57dh687hc6h55fh556h565h668h65bhd9hffh5cch544h54fh647h574h558q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vbkwm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-7dc4f4847f-hcqjj_openstack(66b867e7-9d48-4c81-b32e-4a8fb26b9f3a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 17:09:48 crc kubenswrapper[4606]: E0123 17:09:48.772843 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-7dc4f4847f-hcqjj" podUID="66b867e7-9d48-4c81-b32e-4a8fb26b9f3a" Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.336099 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-2brrb" podUID="d6a71de6-abb9-41ea-84ac-37aba42f4743" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.118:5353: connect: connection refused" Jan 23 17:09:53 crc kubenswrapper[4606]: E0123 17:09:53.468892 4606 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 23 17:09:53 crc kubenswrapper[4606]: E0123 17:09:53.469082 4606 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n655h57h587h99h674h55ch5b5h5cch586h567h54bhd4h77h678h646hc8h66ch5ffhc5hddh68bh647h557h684h55h5b4h8bh6dh5cfhfbh589h54q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hcwwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-68db5566bf-2m8sd_openstack(ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 17:09:53 crc kubenswrapper[4606]: E0123 17:09:53.473637 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-68db5566bf-2m8sd" podUID="ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c" Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.620319 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-v5qb8" Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.628403 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-5vzbb" Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.643906 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3d822de6-0f77-4a88-9538-965bf2079ed9-credential-keys\") pod \"3d822de6-0f77-4a88-9538-965bf2079ed9\" (UID: \"3d822de6-0f77-4a88-9538-965bf2079ed9\") " Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.643995 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d822de6-0f77-4a88-9538-965bf2079ed9-scripts\") pod \"3d822de6-0f77-4a88-9538-965bf2079ed9\" (UID: \"3d822de6-0f77-4a88-9538-965bf2079ed9\") " Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.644046 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d822de6-0f77-4a88-9538-965bf2079ed9-combined-ca-bundle\") pod \"3d822de6-0f77-4a88-9538-965bf2079ed9\" (UID: \"3d822de6-0f77-4a88-9538-965bf2079ed9\") " Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.644079 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6wpp\" (UniqueName: \"kubernetes.io/projected/3d822de6-0f77-4a88-9538-965bf2079ed9-kube-api-access-t6wpp\") pod \"3d822de6-0f77-4a88-9538-965bf2079ed9\" (UID: \"3d822de6-0f77-4a88-9538-965bf2079ed9\") " Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.644184 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3d822de6-0f77-4a88-9538-965bf2079ed9-fernet-keys\") pod \"3d822de6-0f77-4a88-9538-965bf2079ed9\" (UID: \"3d822de6-0f77-4a88-9538-965bf2079ed9\") " Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.644210 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d822de6-0f77-4a88-9538-965bf2079ed9-config-data\") pod \"3d822de6-0f77-4a88-9538-965bf2079ed9\" (UID: \"3d822de6-0f77-4a88-9538-965bf2079ed9\") " Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.687036 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d822de6-0f77-4a88-9538-965bf2079ed9-kube-api-access-t6wpp" (OuterVolumeSpecName: "kube-api-access-t6wpp") pod "3d822de6-0f77-4a88-9538-965bf2079ed9" (UID: "3d822de6-0f77-4a88-9538-965bf2079ed9"). InnerVolumeSpecName "kube-api-access-t6wpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.712160 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d822de6-0f77-4a88-9538-965bf2079ed9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3d822de6-0f77-4a88-9538-965bf2079ed9" (UID: "3d822de6-0f77-4a88-9538-965bf2079ed9"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.712250 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d822de6-0f77-4a88-9538-965bf2079ed9-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "3d822de6-0f77-4a88-9538-965bf2079ed9" (UID: "3d822de6-0f77-4a88-9538-965bf2079ed9"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.729310 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d822de6-0f77-4a88-9538-965bf2079ed9-scripts" (OuterVolumeSpecName: "scripts") pod "3d822de6-0f77-4a88-9538-965bf2079ed9" (UID: "3d822de6-0f77-4a88-9538-965bf2079ed9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.737948 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5vzbb" event={"ID":"5b1b5e49-bdba-4463-b025-fc30bedd63f0","Type":"ContainerDied","Data":"da69dd14d0f169169c725c0520effce698bd3506f934a6025cc850139325310f"} Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.737981 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da69dd14d0f169169c725c0520effce698bd3506f934a6025cc850139325310f" Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.738039 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-5vzbb" Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.739047 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d822de6-0f77-4a88-9538-965bf2079ed9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d822de6-0f77-4a88-9538-965bf2079ed9" (UID: "3d822de6-0f77-4a88-9538-965bf2079ed9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.747451 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d822de6-0f77-4a88-9538-965bf2079ed9-config-data" (OuterVolumeSpecName: "config-data") pod "3d822de6-0f77-4a88-9538-965bf2079ed9" (UID: "3d822de6-0f77-4a88-9538-965bf2079ed9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.748009 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5b1b5e49-bdba-4463-b025-fc30bedd63f0-db-sync-config-data\") pod \"5b1b5e49-bdba-4463-b025-fc30bedd63f0\" (UID: \"5b1b5e49-bdba-4463-b025-fc30bedd63f0\") " Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.748134 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b1b5e49-bdba-4463-b025-fc30bedd63f0-combined-ca-bundle\") pod \"5b1b5e49-bdba-4463-b025-fc30bedd63f0\" (UID: \"5b1b5e49-bdba-4463-b025-fc30bedd63f0\") " Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.748233 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wg228\" (UniqueName: \"kubernetes.io/projected/5b1b5e49-bdba-4463-b025-fc30bedd63f0-kube-api-access-wg228\") pod \"5b1b5e49-bdba-4463-b025-fc30bedd63f0\" (UID: \"5b1b5e49-bdba-4463-b025-fc30bedd63f0\") " Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.748273 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b1b5e49-bdba-4463-b025-fc30bedd63f0-config-data\") pod \"5b1b5e49-bdba-4463-b025-fc30bedd63f0\" (UID: \"5b1b5e49-bdba-4463-b025-fc30bedd63f0\") " Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.752411 4606 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3d822de6-0f77-4a88-9538-965bf2079ed9-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.752433 4606 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d822de6-0f77-4a88-9538-965bf2079ed9-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.752444 4606 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d822de6-0f77-4a88-9538-965bf2079ed9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.752453 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6wpp\" (UniqueName: \"kubernetes.io/projected/3d822de6-0f77-4a88-9538-965bf2079ed9-kube-api-access-t6wpp\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.752463 4606 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3d822de6-0f77-4a88-9538-965bf2079ed9-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.752482 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d822de6-0f77-4a88-9538-965bf2079ed9-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.756047 4606 generic.go:334] "Generic (PLEG): container finished" podID="fb6d384e-69de-4bd3-bb7b-bf9346614fce" containerID="f5a35f4a31f2db0c926042d0e165f78ba5ced4fc693ed5b73c6b4e0440d6c6bb" exitCode=0 Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.756120 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bmv5b" 
event={"ID":"fb6d384e-69de-4bd3-bb7b-bf9346614fce","Type":"ContainerDied","Data":"f5a35f4a31f2db0c926042d0e165f78ba5ced4fc693ed5b73c6b4e0440d6c6bb"} Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.762938 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-v5qb8" Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.764210 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-v5qb8" event={"ID":"3d822de6-0f77-4a88-9538-965bf2079ed9","Type":"ContainerDied","Data":"427c3274cb1bb173059535a0f77301049297117884949f3e5a43e797dca01ed7"} Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.764248 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="427c3274cb1bb173059535a0f77301049297117884949f3e5a43e797dca01ed7" Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.772417 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b1b5e49-bdba-4463-b025-fc30bedd63f0-kube-api-access-wg228" (OuterVolumeSpecName: "kube-api-access-wg228") pod "5b1b5e49-bdba-4463-b025-fc30bedd63f0" (UID: "5b1b5e49-bdba-4463-b025-fc30bedd63f0"). InnerVolumeSpecName "kube-api-access-wg228". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.795479 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b1b5e49-bdba-4463-b025-fc30bedd63f0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5b1b5e49-bdba-4463-b025-fc30bedd63f0" (UID: "5b1b5e49-bdba-4463-b025-fc30bedd63f0"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.811013 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b1b5e49-bdba-4463-b025-fc30bedd63f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b1b5e49-bdba-4463-b025-fc30bedd63f0" (UID: "5b1b5e49-bdba-4463-b025-fc30bedd63f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.853745 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wg228\" (UniqueName: \"kubernetes.io/projected/5b1b5e49-bdba-4463-b025-fc30bedd63f0-kube-api-access-wg228\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.862715 4606 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5b1b5e49-bdba-4463-b025-fc30bedd63f0-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.864740 4606 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b1b5e49-bdba-4463-b025-fc30bedd63f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.874873 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b1b5e49-bdba-4463-b025-fc30bedd63f0-config-data" (OuterVolumeSpecName: "config-data") pod "5b1b5e49-bdba-4463-b025-fc30bedd63f0" (UID: "5b1b5e49-bdba-4463-b025-fc30bedd63f0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:09:53 crc kubenswrapper[4606]: I0123 17:09:53.965952 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b1b5e49-bdba-4463-b025-fc30bedd63f0-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:54 crc kubenswrapper[4606]: I0123 17:09:54.805648 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-v5qb8"] Jan 23 17:09:54 crc kubenswrapper[4606]: I0123 17:09:54.822389 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-v5qb8"] Jan 23 17:09:54 crc kubenswrapper[4606]: I0123 17:09:54.909112 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-knccn"] Jan 23 17:09:54 crc kubenswrapper[4606]: E0123 17:09:54.909490 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b1b5e49-bdba-4463-b025-fc30bedd63f0" containerName="glance-db-sync" Jan 23 17:09:54 crc kubenswrapper[4606]: I0123 17:09:54.909513 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b1b5e49-bdba-4463-b025-fc30bedd63f0" containerName="glance-db-sync" Jan 23 17:09:54 crc kubenswrapper[4606]: E0123 17:09:54.909546 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d822de6-0f77-4a88-9538-965bf2079ed9" containerName="keystone-bootstrap" Jan 23 17:09:54 crc kubenswrapper[4606]: I0123 17:09:54.909556 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d822de6-0f77-4a88-9538-965bf2079ed9" containerName="keystone-bootstrap" Jan 23 17:09:54 crc kubenswrapper[4606]: I0123 17:09:54.909769 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d822de6-0f77-4a88-9538-965bf2079ed9" containerName="keystone-bootstrap" Jan 23 17:09:54 crc kubenswrapper[4606]: I0123 17:09:54.909805 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b1b5e49-bdba-4463-b025-fc30bedd63f0" containerName="glance-db-sync" Jan 23 17:09:54 crc kubenswrapper[4606]: I0123 17:09:54.910521 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-knccn" Jan 23 17:09:54 crc kubenswrapper[4606]: I0123 17:09:54.920655 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 17:09:54 crc kubenswrapper[4606]: I0123 17:09:54.920864 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 17:09:54 crc kubenswrapper[4606]: I0123 17:09:54.920982 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-cxp6x" Jan 23 17:09:54 crc kubenswrapper[4606]: I0123 17:09:54.921178 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 23 17:09:54 crc kubenswrapper[4606]: I0123 17:09:54.921674 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 17:09:54 crc kubenswrapper[4606]: I0123 17:09:54.970670 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-knccn"] Jan 23 17:09:54 crc kubenswrapper[4606]: I0123 17:09:54.989175 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f-scripts\") pod \"keystone-bootstrap-knccn\" (UID: \"9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f\") " pod="openstack/keystone-bootstrap-knccn" Jan 23 17:09:54 crc kubenswrapper[4606]: I0123 17:09:54.989223 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f-credential-keys\") pod \"keystone-bootstrap-knccn\" (UID: \"9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f\") " pod="openstack/keystone-bootstrap-knccn" Jan 23 17:09:54 crc kubenswrapper[4606]: I0123 17:09:54.989253 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f-combined-ca-bundle\") pod \"keystone-bootstrap-knccn\" (UID: \"9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f\") " pod="openstack/keystone-bootstrap-knccn" Jan 23 17:09:54 crc kubenswrapper[4606]: I0123 17:09:54.989289 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f-config-data\") pod \"keystone-bootstrap-knccn\" (UID: \"9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f\") " pod="openstack/keystone-bootstrap-knccn" Jan 23 17:09:54 crc kubenswrapper[4606]: I0123 17:09:54.989354 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-strcg\" (UniqueName: \"kubernetes.io/projected/9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f-kube-api-access-strcg\") pod \"keystone-bootstrap-knccn\" (UID: \"9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f\") " pod="openstack/keystone-bootstrap-knccn" Jan 23 17:09:54 crc kubenswrapper[4606]: I0123 17:09:54.989559 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f-fernet-keys\") pod \"keystone-bootstrap-knccn\" (UID: \"9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f\") " pod="openstack/keystone-bootstrap-knccn" Jan 23 17:09:55 crc kubenswrapper[4606]: I0123 17:09:55.091112 4606 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-strcg\" (UniqueName: \"kubernetes.io/projected/9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f-kube-api-access-strcg\") pod \"keystone-bootstrap-knccn\" (UID: \"9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f\") " pod="openstack/keystone-bootstrap-knccn" Jan 23 17:09:55 crc kubenswrapper[4606]: I0123 17:09:55.091213 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f-fernet-keys\") pod \"keystone-bootstrap-knccn\" (UID: \"9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f\") " pod="openstack/keystone-bootstrap-knccn" Jan 23 17:09:55 crc kubenswrapper[4606]: I0123 17:09:55.091276 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f-scripts\") pod \"keystone-bootstrap-knccn\" (UID: \"9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f\") " pod="openstack/keystone-bootstrap-knccn" Jan 23 17:09:55 crc kubenswrapper[4606]: I0123 17:09:55.091313 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f-credential-keys\") pod \"keystone-bootstrap-knccn\" (UID: \"9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f\") " pod="openstack/keystone-bootstrap-knccn" Jan 23 17:09:55 crc kubenswrapper[4606]: I0123 17:09:55.091362 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f-combined-ca-bundle\") pod \"keystone-bootstrap-knccn\" (UID: \"9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f\") " pod="openstack/keystone-bootstrap-knccn" Jan 23 17:09:55 crc kubenswrapper[4606]: I0123 17:09:55.091427 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f-config-data\") pod \"keystone-bootstrap-knccn\" (UID: \"9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f\") " pod="openstack/keystone-bootstrap-knccn" Jan 23 17:09:55 crc kubenswrapper[4606]: I0123 17:09:55.102020 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f-config-data\") pod \"keystone-bootstrap-knccn\" (UID: \"9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f\") " pod="openstack/keystone-bootstrap-knccn" Jan 23 17:09:55 crc kubenswrapper[4606]: I0123 17:09:55.114630 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f-combined-ca-bundle\") pod \"keystone-bootstrap-knccn\" (UID: \"9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f\") " pod="openstack/keystone-bootstrap-knccn" Jan 23 17:09:55 crc kubenswrapper[4606]: I0123 17:09:55.115205 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f-scripts\") pod \"keystone-bootstrap-knccn\" (UID: \"9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f\") " pod="openstack/keystone-bootstrap-knccn" Jan 23 17:09:55 crc kubenswrapper[4606]: I0123 17:09:55.122306 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f-fernet-keys\") pod \"keystone-bootstrap-knccn\" (UID: \"9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f\") 
" pod="openstack/keystone-bootstrap-knccn" Jan 23 17:09:55 crc kubenswrapper[4606]: I0123 17:09:55.127046 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-strcg\" (UniqueName: \"kubernetes.io/projected/9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f-kube-api-access-strcg\") pod \"keystone-bootstrap-knccn\" (UID: \"9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f\") " pod="openstack/keystone-bootstrap-knccn" Jan 23 17:09:55 crc kubenswrapper[4606]: I0123 17:09:55.129745 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f-credential-keys\") pod \"keystone-bootstrap-knccn\" (UID: \"9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f\") " pod="openstack/keystone-bootstrap-knccn" Jan 23 17:09:55 crc kubenswrapper[4606]: I0123 17:09:55.235962 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-knccn" Jan 23 17:09:55 crc kubenswrapper[4606]: I0123 17:09:55.314622 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-kdgqp"] Jan 23 17:09:55 crc kubenswrapper[4606]: I0123 17:09:55.316096 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-kdgqp" Jan 23 17:09:55 crc kubenswrapper[4606]: I0123 17:09:55.325087 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-kdgqp"] Jan 23 17:09:55 crc kubenswrapper[4606]: I0123 17:09:55.410174 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bb42cf7b-ed1b-43bb-855b-348b670377b1-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-kdgqp\" (UID: \"bb42cf7b-ed1b-43bb-855b-348b670377b1\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kdgqp" Jan 23 17:09:55 crc kubenswrapper[4606]: I0123 17:09:55.410282 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb42cf7b-ed1b-43bb-855b-348b670377b1-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-kdgqp\" (UID: \"bb42cf7b-ed1b-43bb-855b-348b670377b1\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kdgqp" Jan 23 17:09:55 crc kubenswrapper[4606]: I0123 17:09:55.410330 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb42cf7b-ed1b-43bb-855b-348b670377b1-config\") pod \"dnsmasq-dns-785d8bcb8c-kdgqp\" (UID: \"bb42cf7b-ed1b-43bb-855b-348b670377b1\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kdgqp" Jan 23 17:09:55 crc kubenswrapper[4606]: I0123 17:09:55.410352 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bb42cf7b-ed1b-43bb-855b-348b670377b1-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-kdgqp\" (UID: \"bb42cf7b-ed1b-43bb-855b-348b670377b1\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kdgqp" Jan 23 17:09:55 crc kubenswrapper[4606]: I0123 17:09:55.410416 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkxzp\" (UniqueName: \"kubernetes.io/projected/bb42cf7b-ed1b-43bb-855b-348b670377b1-kube-api-access-gkxzp\") pod \"dnsmasq-dns-785d8bcb8c-kdgqp\" (UID: \"bb42cf7b-ed1b-43bb-855b-348b670377b1\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kdgqp" Jan 23 17:09:55 crc 
kubenswrapper[4606]: I0123 17:09:55.410442 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb42cf7b-ed1b-43bb-855b-348b670377b1-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-kdgqp\" (UID: \"bb42cf7b-ed1b-43bb-855b-348b670377b1\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kdgqp" Jan 23 17:09:55 crc kubenswrapper[4606]: I0123 17:09:55.512348 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bb42cf7b-ed1b-43bb-855b-348b670377b1-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-kdgqp\" (UID: \"bb42cf7b-ed1b-43bb-855b-348b670377b1\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kdgqp" Jan 23 17:09:55 crc kubenswrapper[4606]: I0123 17:09:55.512451 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb42cf7b-ed1b-43bb-855b-348b670377b1-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-kdgqp\" (UID: \"bb42cf7b-ed1b-43bb-855b-348b670377b1\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kdgqp" Jan 23 17:09:55 crc kubenswrapper[4606]: I0123 17:09:55.512492 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb42cf7b-ed1b-43bb-855b-348b670377b1-config\") pod \"dnsmasq-dns-785d8bcb8c-kdgqp\" (UID: \"bb42cf7b-ed1b-43bb-855b-348b670377b1\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kdgqp" Jan 23 17:09:55 crc kubenswrapper[4606]: I0123 17:09:55.512521 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bb42cf7b-ed1b-43bb-855b-348b670377b1-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-kdgqp\" (UID: \"bb42cf7b-ed1b-43bb-855b-348b670377b1\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kdgqp" Jan 23 17:09:55 crc kubenswrapper[4606]: I0123 17:09:55.512598 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkxzp\" (UniqueName: \"kubernetes.io/projected/bb42cf7b-ed1b-43bb-855b-348b670377b1-kube-api-access-gkxzp\") pod \"dnsmasq-dns-785d8bcb8c-kdgqp\" (UID: \"bb42cf7b-ed1b-43bb-855b-348b670377b1\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kdgqp" Jan 23 17:09:55 crc kubenswrapper[4606]: I0123 17:09:55.512635 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb42cf7b-ed1b-43bb-855b-348b670377b1-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-kdgqp\" (UID: \"bb42cf7b-ed1b-43bb-855b-348b670377b1\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kdgqp" Jan 23 17:09:55 crc kubenswrapper[4606]: I0123 17:09:55.513280 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bb42cf7b-ed1b-43bb-855b-348b670377b1-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-kdgqp\" (UID: \"bb42cf7b-ed1b-43bb-855b-348b670377b1\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kdgqp" Jan 23 17:09:55 crc kubenswrapper[4606]: I0123 17:09:55.513920 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb42cf7b-ed1b-43bb-855b-348b670377b1-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-kdgqp\" (UID: \"bb42cf7b-ed1b-43bb-855b-348b670377b1\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kdgqp" Jan 23 17:09:55 crc kubenswrapper[4606]: I0123 17:09:55.514381 4606 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb42cf7b-ed1b-43bb-855b-348b670377b1-config\") pod \"dnsmasq-dns-785d8bcb8c-kdgqp\" (UID: \"bb42cf7b-ed1b-43bb-855b-348b670377b1\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kdgqp" Jan 23 17:09:55 crc kubenswrapper[4606]: I0123 17:09:55.514395 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb42cf7b-ed1b-43bb-855b-348b670377b1-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-kdgqp\" (UID: \"bb42cf7b-ed1b-43bb-855b-348b670377b1\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kdgqp" Jan 23 17:09:55 crc kubenswrapper[4606]: I0123 17:09:55.514760 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bb42cf7b-ed1b-43bb-855b-348b670377b1-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-kdgqp\" (UID: \"bb42cf7b-ed1b-43bb-855b-348b670377b1\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kdgqp" Jan 23 17:09:55 crc kubenswrapper[4606]: I0123 17:09:55.540089 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkxzp\" (UniqueName: \"kubernetes.io/projected/bb42cf7b-ed1b-43bb-855b-348b670377b1-kube-api-access-gkxzp\") pod \"dnsmasq-dns-785d8bcb8c-kdgqp\" (UID: \"bb42cf7b-ed1b-43bb-855b-348b670377b1\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kdgqp" Jan 23 17:09:55 crc kubenswrapper[4606]: I0123 17:09:55.654307 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-kdgqp" Jan 23 17:09:56 crc kubenswrapper[4606]: E0123 17:09:56.168914 4606 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 23 17:09:56 crc kubenswrapper[4606]: E0123 17:09:56.169062 4606 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cfdgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-tjf95_openstack(3f5144cd-47f2-4ce5-81d6-68884dbfc739): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 17:09:56 crc kubenswrapper[4606]: E0123 17:09:56.170380 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-tjf95" podUID="3f5144cd-47f2-4ce5-81d6-68884dbfc739" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.183928 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.187175 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.194300 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wjzzh" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.195021 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.195261 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.207640 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.242077 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7dc4f4847f-hcqjj" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.330644 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbkwm\" (UniqueName: \"kubernetes.io/projected/66b867e7-9d48-4c81-b32e-4a8fb26b9f3a-kube-api-access-vbkwm\") pod \"66b867e7-9d48-4c81-b32e-4a8fb26b9f3a\" (UID: \"66b867e7-9d48-4c81-b32e-4a8fb26b9f3a\") " Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.331026 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66b867e7-9d48-4c81-b32e-4a8fb26b9f3a-scripts\") pod \"66b867e7-9d48-4c81-b32e-4a8fb26b9f3a\" (UID: \"66b867e7-9d48-4c81-b32e-4a8fb26b9f3a\") " Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.331235 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66b867e7-9d48-4c81-b32e-4a8fb26b9f3a-logs\") pod \"66b867e7-9d48-4c81-b32e-4a8fb26b9f3a\" (UID: \"66b867e7-9d48-4c81-b32e-4a8fb26b9f3a\") " Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.331530 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/66b867e7-9d48-4c81-b32e-4a8fb26b9f3a-horizon-secret-key\") pod \"66b867e7-9d48-4c81-b32e-4a8fb26b9f3a\" (UID: \"66b867e7-9d48-4c81-b32e-4a8fb26b9f3a\") " Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.331671 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/66b867e7-9d48-4c81-b32e-4a8fb26b9f3a-config-data\") pod \"66b867e7-9d48-4c81-b32e-4a8fb26b9f3a\" (UID: \"66b867e7-9d48-4c81-b32e-4a8fb26b9f3a\") " Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.332052 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a60d6331-f97c-4fa2-9b03-1bb41fee8588-scripts\") pod \"glance-default-external-api-0\" (UID: \"a60d6331-f97c-4fa2-9b03-1bb41fee8588\") " pod="openstack/glance-default-external-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.331685 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66b867e7-9d48-4c81-b32e-4a8fb26b9f3a-logs" (OuterVolumeSpecName: "logs") pod "66b867e7-9d48-4c81-b32e-4a8fb26b9f3a" (UID: "66b867e7-9d48-4c81-b32e-4a8fb26b9f3a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.331730 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66b867e7-9d48-4c81-b32e-4a8fb26b9f3a-scripts" (OuterVolumeSpecName: "scripts") pod "66b867e7-9d48-4c81-b32e-4a8fb26b9f3a" (UID: "66b867e7-9d48-4c81-b32e-4a8fb26b9f3a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.332242 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66b867e7-9d48-4c81-b32e-4a8fb26b9f3a-config-data" (OuterVolumeSpecName: "config-data") pod "66b867e7-9d48-4c81-b32e-4a8fb26b9f3a" (UID: "66b867e7-9d48-4c81-b32e-4a8fb26b9f3a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.332459 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zwg2\" (UniqueName: \"kubernetes.io/projected/a60d6331-f97c-4fa2-9b03-1bb41fee8588-kube-api-access-5zwg2\") pod \"glance-default-external-api-0\" (UID: \"a60d6331-f97c-4fa2-9b03-1bb41fee8588\") " pod="openstack/glance-default-external-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.332639 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a60d6331-f97c-4fa2-9b03-1bb41fee8588-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a60d6331-f97c-4fa2-9b03-1bb41fee8588\") " pod="openstack/glance-default-external-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.332788 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a60d6331-f97c-4fa2-9b03-1bb41fee8588-config-data\") pod \"glance-default-external-api-0\" (UID: \"a60d6331-f97c-4fa2-9b03-1bb41fee8588\") " pod="openstack/glance-default-external-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.333214 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"a60d6331-f97c-4fa2-9b03-1bb41fee8588\") " pod="openstack/glance-default-external-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.333325 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a60d6331-f97c-4fa2-9b03-1bb41fee8588-logs\") pod \"glance-default-external-api-0\" (UID: \"a60d6331-f97c-4fa2-9b03-1bb41fee8588\") " pod="openstack/glance-default-external-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.333480 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a60d6331-f97c-4fa2-9b03-1bb41fee8588-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a60d6331-f97c-4fa2-9b03-1bb41fee8588\") " pod="openstack/glance-default-external-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.335860 4606 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66b867e7-9d48-4c81-b32e-4a8fb26b9f3a-logs\") on node 
\"crc\" DevicePath \"\"" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.335962 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/66b867e7-9d48-4c81-b32e-4a8fb26b9f3a-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.336047 4606 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66b867e7-9d48-4c81-b32e-4a8fb26b9f3a-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.346174 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66b867e7-9d48-4c81-b32e-4a8fb26b9f3a-kube-api-access-vbkwm" (OuterVolumeSpecName: "kube-api-access-vbkwm") pod "66b867e7-9d48-4c81-b32e-4a8fb26b9f3a" (UID: "66b867e7-9d48-4c81-b32e-4a8fb26b9f3a"). InnerVolumeSpecName "kube-api-access-vbkwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.353699 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66b867e7-9d48-4c81-b32e-4a8fb26b9f3a-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "66b867e7-9d48-4c81-b32e-4a8fb26b9f3a" (UID: "66b867e7-9d48-4c81-b32e-4a8fb26b9f3a"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.394128 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d822de6-0f77-4a88-9538-965bf2079ed9" path="/var/lib/kubelet/pods/3d822de6-0f77-4a88-9538-965bf2079ed9/volumes" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.442466 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a60d6331-f97c-4fa2-9b03-1bb41fee8588-logs\") pod \"glance-default-external-api-0\" (UID: \"a60d6331-f97c-4fa2-9b03-1bb41fee8588\") " pod="openstack/glance-default-external-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.442675 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a60d6331-f97c-4fa2-9b03-1bb41fee8588-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a60d6331-f97c-4fa2-9b03-1bb41fee8588\") " pod="openstack/glance-default-external-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.442846 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a60d6331-f97c-4fa2-9b03-1bb41fee8588-scripts\") pod \"glance-default-external-api-0\" (UID: \"a60d6331-f97c-4fa2-9b03-1bb41fee8588\") " pod="openstack/glance-default-external-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.443006 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zwg2\" (UniqueName: \"kubernetes.io/projected/a60d6331-f97c-4fa2-9b03-1bb41fee8588-kube-api-access-5zwg2\") pod \"glance-default-external-api-0\" (UID: \"a60d6331-f97c-4fa2-9b03-1bb41fee8588\") " pod="openstack/glance-default-external-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.443099 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a60d6331-f97c-4fa2-9b03-1bb41fee8588-httpd-run\") pod \"glance-default-external-api-0\" (UID: 
\"a60d6331-f97c-4fa2-9b03-1bb41fee8588\") " pod="openstack/glance-default-external-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.443190 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a60d6331-f97c-4fa2-9b03-1bb41fee8588-config-data\") pod \"glance-default-external-api-0\" (UID: \"a60d6331-f97c-4fa2-9b03-1bb41fee8588\") " pod="openstack/glance-default-external-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.443241 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"a60d6331-f97c-4fa2-9b03-1bb41fee8588\") " pod="openstack/glance-default-external-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.443307 4606 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/66b867e7-9d48-4c81-b32e-4a8fb26b9f3a-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.443329 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbkwm\" (UniqueName: \"kubernetes.io/projected/66b867e7-9d48-4c81-b32e-4a8fb26b9f3a-kube-api-access-vbkwm\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.443687 4606 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"a60d6331-f97c-4fa2-9b03-1bb41fee8588\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.447915 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a60d6331-f97c-4fa2-9b03-1bb41fee8588-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a60d6331-f97c-4fa2-9b03-1bb41fee8588\") " pod="openstack/glance-default-external-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.448324 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a60d6331-f97c-4fa2-9b03-1bb41fee8588-logs\") pod \"glance-default-external-api-0\" (UID: \"a60d6331-f97c-4fa2-9b03-1bb41fee8588\") " pod="openstack/glance-default-external-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.458073 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a60d6331-f97c-4fa2-9b03-1bb41fee8588-config-data\") pod \"glance-default-external-api-0\" (UID: \"a60d6331-f97c-4fa2-9b03-1bb41fee8588\") " pod="openstack/glance-default-external-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.462880 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a60d6331-f97c-4fa2-9b03-1bb41fee8588-scripts\") pod \"glance-default-external-api-0\" (UID: \"a60d6331-f97c-4fa2-9b03-1bb41fee8588\") " pod="openstack/glance-default-external-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.465012 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a60d6331-f97c-4fa2-9b03-1bb41fee8588-combined-ca-bundle\") pod 
\"glance-default-external-api-0\" (UID: \"a60d6331-f97c-4fa2-9b03-1bb41fee8588\") " pod="openstack/glance-default-external-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.473287 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zwg2\" (UniqueName: \"kubernetes.io/projected/a60d6331-f97c-4fa2-9b03-1bb41fee8588-kube-api-access-5zwg2\") pod \"glance-default-external-api-0\" (UID: \"a60d6331-f97c-4fa2-9b03-1bb41fee8588\") " pod="openstack/glance-default-external-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.509295 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"a60d6331-f97c-4fa2-9b03-1bb41fee8588\") " pod="openstack/glance-default-external-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.512309 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.513911 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.523191 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.566980 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.599814 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.647058 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aac377c7-412a-4cea-850d-817045a67cc5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"aac377c7-412a-4cea-850d-817045a67cc5\") " pod="openstack/glance-default-internal-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.647154 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aac377c7-412a-4cea-850d-817045a67cc5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"aac377c7-412a-4cea-850d-817045a67cc5\") " pod="openstack/glance-default-internal-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.647215 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjwvv\" (UniqueName: \"kubernetes.io/projected/aac377c7-412a-4cea-850d-817045a67cc5-kube-api-access-jjwvv\") pod \"glance-default-internal-api-0\" (UID: \"aac377c7-412a-4cea-850d-817045a67cc5\") " pod="openstack/glance-default-internal-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.647287 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aac377c7-412a-4cea-850d-817045a67cc5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"aac377c7-412a-4cea-850d-817045a67cc5\") " pod="openstack/glance-default-internal-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.647305 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"aac377c7-412a-4cea-850d-817045a67cc5\") " pod="openstack/glance-default-internal-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.647366 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aac377c7-412a-4cea-850d-817045a67cc5-logs\") pod \"glance-default-internal-api-0\" (UID: \"aac377c7-412a-4cea-850d-817045a67cc5\") " pod="openstack/glance-default-internal-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.647391 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aac377c7-412a-4cea-850d-817045a67cc5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"aac377c7-412a-4cea-850d-817045a67cc5\") " pod="openstack/glance-default-internal-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.749222 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"aac377c7-412a-4cea-850d-817045a67cc5\") " pod="openstack/glance-default-internal-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.749282 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aac377c7-412a-4cea-850d-817045a67cc5-logs\") pod \"glance-default-internal-api-0\" (UID: \"aac377c7-412a-4cea-850d-817045a67cc5\") " pod="openstack/glance-default-internal-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.749310 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aac377c7-412a-4cea-850d-817045a67cc5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"aac377c7-412a-4cea-850d-817045a67cc5\") " pod="openstack/glance-default-internal-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.749392 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aac377c7-412a-4cea-850d-817045a67cc5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"aac377c7-412a-4cea-850d-817045a67cc5\") " pod="openstack/glance-default-internal-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.749429 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aac377c7-412a-4cea-850d-817045a67cc5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"aac377c7-412a-4cea-850d-817045a67cc5\") " pod="openstack/glance-default-internal-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.749450 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjwvv\" (UniqueName: \"kubernetes.io/projected/aac377c7-412a-4cea-850d-817045a67cc5-kube-api-access-jjwvv\") pod \"glance-default-internal-api-0\" (UID: \"aac377c7-412a-4cea-850d-817045a67cc5\") " pod="openstack/glance-default-internal-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.749479 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/aac377c7-412a-4cea-850d-817045a67cc5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"aac377c7-412a-4cea-850d-817045a67cc5\") " pod="openstack/glance-default-internal-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.749513 4606 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"aac377c7-412a-4cea-850d-817045a67cc5\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.750137 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aac377c7-412a-4cea-850d-817045a67cc5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"aac377c7-412a-4cea-850d-817045a67cc5\") " pod="openstack/glance-default-internal-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.750451 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aac377c7-412a-4cea-850d-817045a67cc5-logs\") pod \"glance-default-internal-api-0\" (UID: \"aac377c7-412a-4cea-850d-817045a67cc5\") " pod="openstack/glance-default-internal-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.764638 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aac377c7-412a-4cea-850d-817045a67cc5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"aac377c7-412a-4cea-850d-817045a67cc5\") " pod="openstack/glance-default-internal-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.785605 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aac377c7-412a-4cea-850d-817045a67cc5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"aac377c7-412a-4cea-850d-817045a67cc5\") " pod="openstack/glance-default-internal-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.786076 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aac377c7-412a-4cea-850d-817045a67cc5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"aac377c7-412a-4cea-850d-817045a67cc5\") " pod="openstack/glance-default-internal-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.789057 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjwvv\" (UniqueName: \"kubernetes.io/projected/aac377c7-412a-4cea-850d-817045a67cc5-kube-api-access-jjwvv\") pod \"glance-default-internal-api-0\" (UID: \"aac377c7-412a-4cea-850d-817045a67cc5\") " pod="openstack/glance-default-internal-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.844342 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"aac377c7-412a-4cea-850d-817045a67cc5\") " pod="openstack/glance-default-internal-api-0" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.876995 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7dc4f4847f-hcqjj" Jan 23 17:09:56 crc kubenswrapper[4606]: I0123 17:09:56.889458 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dc4f4847f-hcqjj" event={"ID":"66b867e7-9d48-4c81-b32e-4a8fb26b9f3a","Type":"ContainerDied","Data":"6ce1984899706ad70e3ff27b2e2a5bc29bb7cc8bc656f080266155ee19515eed"} Jan 23 17:09:56 crc kubenswrapper[4606]: E0123 17:09:56.891011 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-tjf95" podUID="3f5144cd-47f2-4ce5-81d6-68884dbfc739" Jan 23 17:09:57 crc kubenswrapper[4606]: I0123 17:09:57.016657 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7dc4f4847f-hcqjj"] Jan 23 17:09:57 crc kubenswrapper[4606]: I0123 17:09:57.038175 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7dc4f4847f-hcqjj"] Jan 23 17:09:57 crc kubenswrapper[4606]: I0123 17:09:57.139055 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 17:09:57 crc kubenswrapper[4606]: E0123 17:09:57.468684 4606 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b4964a3a6bce500a8ba385cec3b81651c48504e67e10709beb116786c1bd11d3 is running failed: container process not found" containerID="b4964a3a6bce500a8ba385cec3b81651c48504e67e10709beb116786c1bd11d3" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 17:09:57 crc kubenswrapper[4606]: E0123 17:09:57.469352 4606 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b4964a3a6bce500a8ba385cec3b81651c48504e67e10709beb116786c1bd11d3 is running failed: container process not found" containerID="b4964a3a6bce500a8ba385cec3b81651c48504e67e10709beb116786c1bd11d3" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 17:09:57 crc kubenswrapper[4606]: E0123 17:09:57.469734 4606 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b4964a3a6bce500a8ba385cec3b81651c48504e67e10709beb116786c1bd11d3 is running failed: container process not found" containerID="b4964a3a6bce500a8ba385cec3b81651c48504e67e10709beb116786c1bd11d3" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 17:09:57 crc kubenswrapper[4606]: E0123 17:09:57.469782 4606 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b4964a3a6bce500a8ba385cec3b81651c48504e67e10709beb116786c1bd11d3 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-p8hwf" podUID="fd529aec-5f6a-46dd-a04e-f97e47afc754" containerName="registry-server" Jan 23 17:09:57 crc kubenswrapper[4606]: E0123 17:09:57.545833 4606 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 23 17:09:57 crc kubenswrapper[4606]: E0123 17:09:57.545989 4606 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sf67b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-lzz5x_openstack(4cb98288-9db7-4a8f-9b1d-1d57a547062b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 17:09:57 crc kubenswrapper[4606]: E0123 17:09:57.547163 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-lzz5x" podUID="4cb98288-9db7-4a8f-9b1d-1d57a547062b" Jan 23 17:09:57 crc kubenswrapper[4606]: E0123 17:09:57.886295 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-lzz5x" podUID="4cb98288-9db7-4a8f-9b1d-1d57a547062b" Jan 23 17:09:58 crc kubenswrapper[4606]: E0123 17:09:58.002025 4606 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 23 17:09:58 crc kubenswrapper[4606]: E0123 17:09:58.002212 4606 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nb8h55dh699h5bch5b7h646hb5h56bh67fh64fh548h8h64ch5c9h5fbh86h687h9bh697h5b8h599h96h87h575h565h654h666h8h9bh5b6h4h567q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m2xhr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(a5feb913-77c6-425c-9018-de42010da64e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.152227 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-2brrb" Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.181798 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p8hwf" Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.183159 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zspbt\" (UniqueName: \"kubernetes.io/projected/d6a71de6-abb9-41ea-84ac-37aba42f4743-kube-api-access-zspbt\") pod \"d6a71de6-abb9-41ea-84ac-37aba42f4743\" (UID: \"d6a71de6-abb9-41ea-84ac-37aba42f4743\") " Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.183215 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6a71de6-abb9-41ea-84ac-37aba42f4743-dns-svc\") pod \"d6a71de6-abb9-41ea-84ac-37aba42f4743\" (UID: \"d6a71de6-abb9-41ea-84ac-37aba42f4743\") " Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.183285 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6a71de6-abb9-41ea-84ac-37aba42f4743-ovsdbserver-nb\") pod \"d6a71de6-abb9-41ea-84ac-37aba42f4743\" (UID: \"d6a71de6-abb9-41ea-84ac-37aba42f4743\") " Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.183315 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6a71de6-abb9-41ea-84ac-37aba42f4743-ovsdbserver-sb\") pod \"d6a71de6-abb9-41ea-84ac-37aba42f4743\" (UID: \"d6a71de6-abb9-41ea-84ac-37aba42f4743\") " Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.183357 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6a71de6-abb9-41ea-84ac-37aba42f4743-config\") pod \"d6a71de6-abb9-41ea-84ac-37aba42f4743\" (UID: \"d6a71de6-abb9-41ea-84ac-37aba42f4743\") " Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.210293 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-68db5566bf-2m8sd" Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.220306 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6a71de6-abb9-41ea-84ac-37aba42f4743-kube-api-access-zspbt" (OuterVolumeSpecName: "kube-api-access-zspbt") pod "d6a71de6-abb9-41ea-84ac-37aba42f4743" (UID: "d6a71de6-abb9-41ea-84ac-37aba42f4743"). InnerVolumeSpecName "kube-api-access-zspbt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.287452 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd529aec-5f6a-46dd-a04e-f97e47afc754-utilities\") pod \"fd529aec-5f6a-46dd-a04e-f97e47afc754\" (UID: \"fd529aec-5f6a-46dd-a04e-f97e47afc754\") " Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.287511 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd529aec-5f6a-46dd-a04e-f97e47afc754-catalog-content\") pod \"fd529aec-5f6a-46dd-a04e-f97e47afc754\" (UID: \"fd529aec-5f6a-46dd-a04e-f97e47afc754\") " Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.287565 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c-scripts\") pod \"ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c\" (UID: \"ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c\") " Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.287681 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vd2b\" (UniqueName: \"kubernetes.io/projected/fd529aec-5f6a-46dd-a04e-f97e47afc754-kube-api-access-6vd2b\") pod \"fd529aec-5f6a-46dd-a04e-f97e47afc754\" (UID: \"fd529aec-5f6a-46dd-a04e-f97e47afc754\") " Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.287761 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c-horizon-secret-key\") pod \"ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c\" (UID: \"ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c\") " Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.287783 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c-config-data\") pod \"ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c\" (UID: \"ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c\") " Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.287853 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcwwz\" (UniqueName: \"kubernetes.io/projected/ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c-kube-api-access-hcwwz\") pod \"ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c\" (UID: \"ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c\") " Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.287879 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c-logs\") pod \"ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c\" (UID: \"ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c\") " Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.288353 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zspbt\" (UniqueName: \"kubernetes.io/projected/d6a71de6-abb9-41ea-84ac-37aba42f4743-kube-api-access-zspbt\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.290415 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd529aec-5f6a-46dd-a04e-f97e47afc754-utilities" (OuterVolumeSpecName: "utilities") pod "fd529aec-5f6a-46dd-a04e-f97e47afc754" (UID: "fd529aec-5f6a-46dd-a04e-f97e47afc754"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.293080 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c-scripts" (OuterVolumeSpecName: "scripts") pod "ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c" (UID: "ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.293696 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c-config-data" (OuterVolumeSpecName: "config-data") pod "ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c" (UID: "ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.299106 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c-logs" (OuterVolumeSpecName: "logs") pod "ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c" (UID: "ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.302413 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-bmv5b" Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.321879 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd529aec-5f6a-46dd-a04e-f97e47afc754-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd529aec-5f6a-46dd-a04e-f97e47afc754" (UID: "fd529aec-5f6a-46dd-a04e-f97e47afc754"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.335332 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd529aec-5f6a-46dd-a04e-f97e47afc754-kube-api-access-6vd2b" (OuterVolumeSpecName: "kube-api-access-6vd2b") pod "fd529aec-5f6a-46dd-a04e-f97e47afc754" (UID: "fd529aec-5f6a-46dd-a04e-f97e47afc754"). InnerVolumeSpecName "kube-api-access-6vd2b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.382780 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66b867e7-9d48-4c81-b32e-4a8fb26b9f3a" path="/var/lib/kubelet/pods/66b867e7-9d48-4c81-b32e-4a8fb26b9f3a/volumes" Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.390487 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2b8w\" (UniqueName: \"kubernetes.io/projected/fb6d384e-69de-4bd3-bb7b-bf9346614fce-kube-api-access-f2b8w\") pod \"fb6d384e-69de-4bd3-bb7b-bf9346614fce\" (UID: \"fb6d384e-69de-4bd3-bb7b-bf9346614fce\") " Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.390776 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb6d384e-69de-4bd3-bb7b-bf9346614fce-config\") pod \"fb6d384e-69de-4bd3-bb7b-bf9346614fce\" (UID: \"fb6d384e-69de-4bd3-bb7b-bf9346614fce\") " Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.390881 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb6d384e-69de-4bd3-bb7b-bf9346614fce-combined-ca-bundle\") pod \"fb6d384e-69de-4bd3-bb7b-bf9346614fce\" (UID: \"fb6d384e-69de-4bd3-bb7b-bf9346614fce\") " Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.391396 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c-kube-api-access-hcwwz" (OuterVolumeSpecName: "kube-api-access-hcwwz") pod "ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c" (UID: "ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c"). InnerVolumeSpecName "kube-api-access-hcwwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.391576 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd529aec-5f6a-46dd-a04e-f97e47afc754-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.391603 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd529aec-5f6a-46dd-a04e-f97e47afc754-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.391615 4606 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.391631 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vd2b\" (UniqueName: \"kubernetes.io/projected/fd529aec-5f6a-46dd-a04e-f97e47afc754-kube-api-access-6vd2b\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.391640 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.391648 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcwwz\" (UniqueName: \"kubernetes.io/projected/ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c-kube-api-access-hcwwz\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.391656 4606 reconciler_common.go:293] "Volume detached for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c-logs\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.397776 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c" (UID: "ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.406499 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb6d384e-69de-4bd3-bb7b-bf9346614fce-kube-api-access-f2b8w" (OuterVolumeSpecName: "kube-api-access-f2b8w") pod "fb6d384e-69de-4bd3-bb7b-bf9346614fce" (UID: "fb6d384e-69de-4bd3-bb7b-bf9346614fce"). InnerVolumeSpecName "kube-api-access-f2b8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.450831 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.467158 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.475785 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6a71de6-abb9-41ea-84ac-37aba42f4743-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d6a71de6-abb9-41ea-84ac-37aba42f4743" (UID: "d6a71de6-abb9-41ea-84ac-37aba42f4743"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.482697 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6a71de6-abb9-41ea-84ac-37aba42f4743-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d6a71de6-abb9-41ea-84ac-37aba42f4743" (UID: "d6a71de6-abb9-41ea-84ac-37aba42f4743"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.494516 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2b8w\" (UniqueName: \"kubernetes.io/projected/fb6d384e-69de-4bd3-bb7b-bf9346614fce-kube-api-access-f2b8w\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.508702 4606 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6a71de6-abb9-41ea-84ac-37aba42f4743-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.527766 4606 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.527863 4606 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6a71de6-abb9-41ea-84ac-37aba42f4743-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.495363 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb6d384e-69de-4bd3-bb7b-bf9346614fce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb6d384e-69de-4bd3-bb7b-bf9346614fce" (UID: "fb6d384e-69de-4bd3-bb7b-bf9346614fce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.564137 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb6d384e-69de-4bd3-bb7b-bf9346614fce-config" (OuterVolumeSpecName: "config") pod "fb6d384e-69de-4bd3-bb7b-bf9346614fce" (UID: "fb6d384e-69de-4bd3-bb7b-bf9346614fce"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.571011 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6a71de6-abb9-41ea-84ac-37aba42f4743-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d6a71de6-abb9-41ea-84ac-37aba42f4743" (UID: "d6a71de6-abb9-41ea-84ac-37aba42f4743"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.609698 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6a71de6-abb9-41ea-84ac-37aba42f4743-config" (OuterVolumeSpecName: "config") pod "d6a71de6-abb9-41ea-84ac-37aba42f4743" (UID: "d6a71de6-abb9-41ea-84ac-37aba42f4743"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.629605 4606 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb6d384e-69de-4bd3-bb7b-bf9346614fce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.629642 4606 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6a71de6-abb9-41ea-84ac-37aba42f4743-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.629666 4606 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb6d384e-69de-4bd3-bb7b-bf9346614fce-config\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.629679 4606 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6a71de6-abb9-41ea-84ac-37aba42f4743-config\") on node \"crc\" DevicePath \"\"" Jan 23 17:09:58 crc kubenswrapper[4606]: I0123 17:09:58.759123 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b9775877b-mcc82"] Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:58.898903 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68db5566bf-2m8sd" event={"ID":"ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c","Type":"ContainerDied","Data":"ff5a3b7203ba5d72786e5ac0941df9daa6f8e3903bdd11939eac9774672e5641"} Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:58.899300 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-68db5566bf-2m8sd" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:58.918507 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p8hwf" event={"ID":"fd529aec-5f6a-46dd-a04e-f97e47afc754","Type":"ContainerDied","Data":"eddab6abc7567c0bb72346f8b002371a92a724021461f4c02f01e087e0c133f0"} Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:58.918556 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p8hwf" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:58.918569 4606 scope.go:117] "RemoveContainer" containerID="b4964a3a6bce500a8ba385cec3b81651c48504e67e10709beb116786c1bd11d3" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:58.922207 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b9775877b-mcc82" event={"ID":"1dd13584-a1b9-4891-85a7-01afd519f746","Type":"ContainerStarted","Data":"b71fc2149df3ad2cbe56ff9d1d4f28f2fd41ef9e70a02a90abf73df172177bba"} Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:58.929919 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bmv5b" event={"ID":"fb6d384e-69de-4bd3-bb7b-bf9346614fce","Type":"ContainerDied","Data":"59451d410bba10279b24553faef92a7c8fff3f7d8ded7f45cf3e5f85b7dc13c2"} Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:58.929957 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59451d410bba10279b24553faef92a7c8fff3f7d8ded7f45cf3e5f85b7dc13c2" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:58.930032 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-bmv5b" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:58.933321 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-2brrb" event={"ID":"d6a71de6-abb9-41ea-84ac-37aba42f4743","Type":"ContainerDied","Data":"59a812883bbb036319fbf5df0cde23a5d01cccf44033335b2cf4f0592b300cd4"} Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:58.933392 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-2brrb" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.031667 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-68db5566bf-2m8sd"] Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.043308 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-68db5566bf-2m8sd"] Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.070534 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-2brrb"] Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.087627 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-2brrb"] Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.093984 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p8hwf"] Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.110525 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p8hwf"] Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.110851 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-57977896f6-qrk2s"] Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.111770 4606 scope.go:117] "RemoveContainer" containerID="cd5dd3c45ec22e46b95d137ac579ca7048641f3df0d73fa6138b1ea57ff15880" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.222135 4606 scope.go:117] "RemoveContainer" containerID="ad9a0843f01fba1c0234e4ce522bb7192d5f99bb85726730c6c84b549ea51b0d" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.275674 4606 scope.go:117] "RemoveContainer" containerID="ff7ce9c8744f26e5614493c694422e9a094ab9e5f86648842927cca884f0226e" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.347735 4606 scope.go:117] "RemoveContainer" containerID="06e9736e4e65e47f907f4fb56e3fa37662afa945fed57db4cf8058cd30f7f56d" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.499908 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-kdgqp"] Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.531815 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-5gbjl"] Jan 23 17:10:02 crc kubenswrapper[4606]: E0123 17:09:59.532152 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb6d384e-69de-4bd3-bb7b-bf9346614fce" containerName="neutron-db-sync" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.532168 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb6d384e-69de-4bd3-bb7b-bf9346614fce" containerName="neutron-db-sync" Jan 23 17:10:02 crc kubenswrapper[4606]: E0123 17:09:59.532192 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd529aec-5f6a-46dd-a04e-f97e47afc754" containerName="extract-utilities" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.532198 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd529aec-5f6a-46dd-a04e-f97e47afc754" 
containerName="extract-utilities" Jan 23 17:10:02 crc kubenswrapper[4606]: E0123 17:09:59.532211 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a71de6-abb9-41ea-84ac-37aba42f4743" containerName="dnsmasq-dns" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.532217 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a71de6-abb9-41ea-84ac-37aba42f4743" containerName="dnsmasq-dns" Jan 23 17:10:02 crc kubenswrapper[4606]: E0123 17:09:59.532230 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd529aec-5f6a-46dd-a04e-f97e47afc754" containerName="extract-content" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.532236 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd529aec-5f6a-46dd-a04e-f97e47afc754" containerName="extract-content" Jan 23 17:10:02 crc kubenswrapper[4606]: E0123 17:09:59.532249 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a71de6-abb9-41ea-84ac-37aba42f4743" containerName="init" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.532255 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a71de6-abb9-41ea-84ac-37aba42f4743" containerName="init" Jan 23 17:10:02 crc kubenswrapper[4606]: E0123 17:09:59.532263 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd529aec-5f6a-46dd-a04e-f97e47afc754" containerName="registry-server" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.532269 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd529aec-5f6a-46dd-a04e-f97e47afc754" containerName="registry-server" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.532520 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb6d384e-69de-4bd3-bb7b-bf9346614fce" containerName="neutron-db-sync" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.532544 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd529aec-5f6a-46dd-a04e-f97e47afc754" containerName="registry-server" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.532554 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a71de6-abb9-41ea-84ac-37aba42f4743" containerName="dnsmasq-dns" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.533430 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-5gbjl" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.556637 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-5gbjl"] Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.654618 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c747544b-d43a-46bf-8b8a-dfab974a8788-dns-svc\") pod \"dnsmasq-dns-55f844cf75-5gbjl\" (UID: \"c747544b-d43a-46bf-8b8a-dfab974a8788\") " pod="openstack/dnsmasq-dns-55f844cf75-5gbjl" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.654737 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snld4\" (UniqueName: \"kubernetes.io/projected/c747544b-d43a-46bf-8b8a-dfab974a8788-kube-api-access-snld4\") pod \"dnsmasq-dns-55f844cf75-5gbjl\" (UID: \"c747544b-d43a-46bf-8b8a-dfab974a8788\") " pod="openstack/dnsmasq-dns-55f844cf75-5gbjl" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.654791 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c747544b-d43a-46bf-8b8a-dfab974a8788-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-5gbjl\" (UID: \"c747544b-d43a-46bf-8b8a-dfab974a8788\") " pod="openstack/dnsmasq-dns-55f844cf75-5gbjl" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.654857 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c747544b-d43a-46bf-8b8a-dfab974a8788-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-5gbjl\" (UID: \"c747544b-d43a-46bf-8b8a-dfab974a8788\") " pod="openstack/dnsmasq-dns-55f844cf75-5gbjl" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.654910 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c747544b-d43a-46bf-8b8a-dfab974a8788-config\") pod \"dnsmasq-dns-55f844cf75-5gbjl\" (UID: \"c747544b-d43a-46bf-8b8a-dfab974a8788\") " pod="openstack/dnsmasq-dns-55f844cf75-5gbjl" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.654937 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c747544b-d43a-46bf-8b8a-dfab974a8788-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-5gbjl\" (UID: \"c747544b-d43a-46bf-8b8a-dfab974a8788\") " pod="openstack/dnsmasq-dns-55f844cf75-5gbjl" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.755748 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c747544b-d43a-46bf-8b8a-dfab974a8788-dns-svc\") pod \"dnsmasq-dns-55f844cf75-5gbjl\" (UID: \"c747544b-d43a-46bf-8b8a-dfab974a8788\") " pod="openstack/dnsmasq-dns-55f844cf75-5gbjl" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.755858 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snld4\" (UniqueName: \"kubernetes.io/projected/c747544b-d43a-46bf-8b8a-dfab974a8788-kube-api-access-snld4\") pod \"dnsmasq-dns-55f844cf75-5gbjl\" (UID: \"c747544b-d43a-46bf-8b8a-dfab974a8788\") " pod="openstack/dnsmasq-dns-55f844cf75-5gbjl" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.755902 4606 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c747544b-d43a-46bf-8b8a-dfab974a8788-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-5gbjl\" (UID: \"c747544b-d43a-46bf-8b8a-dfab974a8788\") " pod="openstack/dnsmasq-dns-55f844cf75-5gbjl" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.755942 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c747544b-d43a-46bf-8b8a-dfab974a8788-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-5gbjl\" (UID: \"c747544b-d43a-46bf-8b8a-dfab974a8788\") " pod="openstack/dnsmasq-dns-55f844cf75-5gbjl" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.755993 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c747544b-d43a-46bf-8b8a-dfab974a8788-config\") pod \"dnsmasq-dns-55f844cf75-5gbjl\" (UID: \"c747544b-d43a-46bf-8b8a-dfab974a8788\") " pod="openstack/dnsmasq-dns-55f844cf75-5gbjl" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.756015 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c747544b-d43a-46bf-8b8a-dfab974a8788-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-5gbjl\" (UID: \"c747544b-d43a-46bf-8b8a-dfab974a8788\") " pod="openstack/dnsmasq-dns-55f844cf75-5gbjl" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.756751 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c747544b-d43a-46bf-8b8a-dfab974a8788-config\") pod \"dnsmasq-dns-55f844cf75-5gbjl\" (UID: \"c747544b-d43a-46bf-8b8a-dfab974a8788\") " pod="openstack/dnsmasq-dns-55f844cf75-5gbjl" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.757088 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c747544b-d43a-46bf-8b8a-dfab974a8788-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-5gbjl\" (UID: \"c747544b-d43a-46bf-8b8a-dfab974a8788\") " pod="openstack/dnsmasq-dns-55f844cf75-5gbjl" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.757106 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c747544b-d43a-46bf-8b8a-dfab974a8788-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-5gbjl\" (UID: \"c747544b-d43a-46bf-8b8a-dfab974a8788\") " pod="openstack/dnsmasq-dns-55f844cf75-5gbjl" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.757259 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c747544b-d43a-46bf-8b8a-dfab974a8788-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-5gbjl\" (UID: \"c747544b-d43a-46bf-8b8a-dfab974a8788\") " pod="openstack/dnsmasq-dns-55f844cf75-5gbjl" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.757289 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c747544b-d43a-46bf-8b8a-dfab974a8788-dns-svc\") pod \"dnsmasq-dns-55f844cf75-5gbjl\" (UID: \"c747544b-d43a-46bf-8b8a-dfab974a8788\") " pod="openstack/dnsmasq-dns-55f844cf75-5gbjl" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.777175 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snld4\" (UniqueName: 
\"kubernetes.io/projected/c747544b-d43a-46bf-8b8a-dfab974a8788-kube-api-access-snld4\") pod \"dnsmasq-dns-55f844cf75-5gbjl\" (UID: \"c747544b-d43a-46bf-8b8a-dfab974a8788\") " pod="openstack/dnsmasq-dns-55f844cf75-5gbjl" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.833491 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6fc6b8db9b-nkqzh"] Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.834853 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6fc6b8db9b-nkqzh" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.839942 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.840523 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-ffzkl" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.841817 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.847015 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.851841 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6fc6b8db9b-nkqzh"] Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.857829 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpw6v\" (UniqueName: \"kubernetes.io/projected/7e867f98-5330-473c-aa2f-4c5863e70688-kube-api-access-gpw6v\") pod \"neutron-6fc6b8db9b-nkqzh\" (UID: \"7e867f98-5330-473c-aa2f-4c5863e70688\") " pod="openstack/neutron-6fc6b8db9b-nkqzh" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.857863 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7e867f98-5330-473c-aa2f-4c5863e70688-httpd-config\") pod \"neutron-6fc6b8db9b-nkqzh\" (UID: \"7e867f98-5330-473c-aa2f-4c5863e70688\") " pod="openstack/neutron-6fc6b8db9b-nkqzh" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.857939 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e867f98-5330-473c-aa2f-4c5863e70688-ovndb-tls-certs\") pod \"neutron-6fc6b8db9b-nkqzh\" (UID: \"7e867f98-5330-473c-aa2f-4c5863e70688\") " pod="openstack/neutron-6fc6b8db9b-nkqzh" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.857961 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7e867f98-5330-473c-aa2f-4c5863e70688-config\") pod \"neutron-6fc6b8db9b-nkqzh\" (UID: \"7e867f98-5330-473c-aa2f-4c5863e70688\") " pod="openstack/neutron-6fc6b8db9b-nkqzh" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.857992 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e867f98-5330-473c-aa2f-4c5863e70688-combined-ca-bundle\") pod \"neutron-6fc6b8db9b-nkqzh\" (UID: \"7e867f98-5330-473c-aa2f-4c5863e70688\") " pod="openstack/neutron-6fc6b8db9b-nkqzh" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.885010 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-5gbjl" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.933869 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.934347 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.951900 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57977896f6-qrk2s" event={"ID":"59fc19de-8c69-43e4-bf0d-1cb8ea9a9120","Type":"ContainerStarted","Data":"49ebcc4fef0cd6988ff3aeb9fda2fd0e9f3e5f914e7676c0640b21a95b43eb88"} Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.960984 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpw6v\" (UniqueName: \"kubernetes.io/projected/7e867f98-5330-473c-aa2f-4c5863e70688-kube-api-access-gpw6v\") pod \"neutron-6fc6b8db9b-nkqzh\" (UID: \"7e867f98-5330-473c-aa2f-4c5863e70688\") " pod="openstack/neutron-6fc6b8db9b-nkqzh" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.961050 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7e867f98-5330-473c-aa2f-4c5863e70688-httpd-config\") pod \"neutron-6fc6b8db9b-nkqzh\" (UID: \"7e867f98-5330-473c-aa2f-4c5863e70688\") " pod="openstack/neutron-6fc6b8db9b-nkqzh" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.961112 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e867f98-5330-473c-aa2f-4c5863e70688-ovndb-tls-certs\") pod \"neutron-6fc6b8db9b-nkqzh\" (UID: \"7e867f98-5330-473c-aa2f-4c5863e70688\") " pod="openstack/neutron-6fc6b8db9b-nkqzh" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.961145 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7e867f98-5330-473c-aa2f-4c5863e70688-config\") pod \"neutron-6fc6b8db9b-nkqzh\" (UID: \"7e867f98-5330-473c-aa2f-4c5863e70688\") " pod="openstack/neutron-6fc6b8db9b-nkqzh" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.961199 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e867f98-5330-473c-aa2f-4c5863e70688-combined-ca-bundle\") pod \"neutron-6fc6b8db9b-nkqzh\" (UID: \"7e867f98-5330-473c-aa2f-4c5863e70688\") " pod="openstack/neutron-6fc6b8db9b-nkqzh" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.965106 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e867f98-5330-473c-aa2f-4c5863e70688-ovndb-tls-certs\") pod \"neutron-6fc6b8db9b-nkqzh\" (UID: \"7e867f98-5330-473c-aa2f-4c5863e70688\") " pod="openstack/neutron-6fc6b8db9b-nkqzh" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.965201 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e867f98-5330-473c-aa2f-4c5863e70688-combined-ca-bundle\") pod \"neutron-6fc6b8db9b-nkqzh\" (UID: \"7e867f98-5330-473c-aa2f-4c5863e70688\") " pod="openstack/neutron-6fc6b8db9b-nkqzh" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.968019 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7e867f98-5330-473c-aa2f-4c5863e70688-config\") pod \"neutron-6fc6b8db9b-nkqzh\" (UID: \"7e867f98-5330-473c-aa2f-4c5863e70688\") " pod="openstack/neutron-6fc6b8db9b-nkqzh" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.973681 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7e867f98-5330-473c-aa2f-4c5863e70688-httpd-config\") pod \"neutron-6fc6b8db9b-nkqzh\" (UID: \"7e867f98-5330-473c-aa2f-4c5863e70688\") " pod="openstack/neutron-6fc6b8db9b-nkqzh" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:09:59.979960 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpw6v\" (UniqueName: \"kubernetes.io/projected/7e867f98-5330-473c-aa2f-4c5863e70688-kube-api-access-gpw6v\") pod \"neutron-6fc6b8db9b-nkqzh\" (UID: \"7e867f98-5330-473c-aa2f-4c5863e70688\") " pod="openstack/neutron-6fc6b8db9b-nkqzh" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:10:00.160622 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6fc6b8db9b-nkqzh" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:10:00.387872 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c" path="/var/lib/kubelet/pods/ac1a9d1a-6b6e-41cd-9fbe-382358fb4e4c/volumes" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:10:00.388519 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6a71de6-abb9-41ea-84ac-37aba42f4743" path="/var/lib/kubelet/pods/d6a71de6-abb9-41ea-84ac-37aba42f4743/volumes" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:10:00.389300 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd529aec-5f6a-46dd-a04e-f97e47afc754" path="/var/lib/kubelet/pods/fd529aec-5f6a-46dd-a04e-f97e47afc754/volumes" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:10:00.961921 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57977896f6-qrk2s" event={"ID":"59fc19de-8c69-43e4-bf0d-1cb8ea9a9120","Type":"ContainerStarted","Data":"e5440563724c08de04622d9fc01dafcb3181b67dbc36508b5bb413c8d0e43d40"} Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:10:02.000864 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bfvh4" event={"ID":"139f2d63-a754-454b-8fe5-fb33c30ec2bf","Type":"ContainerStarted","Data":"bfd49bb10524308a057dddd25e9a9a4339af84dcd8a64c4cffe93ed05d806a38"} Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:10:02.013343 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b9775877b-mcc82" event={"ID":"1dd13584-a1b9-4891-85a7-01afd519f746","Type":"ContainerStarted","Data":"a7070e36597fcdf8aa3834affbe73a3753b346f31476a5736d178e186a804eb1"} Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:10:02.016574 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-54675cc74f-8dftn"] Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:10:02.021902 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-54675cc74f-8dftn" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:10:02.033215 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b7bf8f6df-4w647" event={"ID":"e23f5510-b5f6-42a1-b751-caa502fba254","Type":"ContainerStarted","Data":"73dd298dd581b5c8c71be4bcd7a660571fdcf22b16ce794580fc0b943c63328a"} Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:10:02.034832 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:10:02.035101 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:10:02.048613 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-54675cc74f-8dftn"] Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:10:02.053139 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-bfvh4" podStartSLOduration=7.636905995 podStartE2EDuration="44.053075541s" podCreationTimestamp="2026-01-23 17:09:18 +0000 UTC" firstStartedPulling="2026-01-23 17:09:21.109384312 +0000 UTC m=+995.109576718" lastFinishedPulling="2026-01-23 17:09:57.525553868 +0000 UTC m=+1031.525746264" observedRunningTime="2026-01-23 17:10:02.03377217 +0000 UTC m=+1036.033964576" watchObservedRunningTime="2026-01-23 17:10:02.053075541 +0000 UTC m=+1036.053267947" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:10:02.155454 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-config\") pod \"neutron-54675cc74f-8dftn\" (UID: \"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea\") " pod="openstack/neutron-54675cc74f-8dftn" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:10:02.155506 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-internal-tls-certs\") pod \"neutron-54675cc74f-8dftn\" (UID: \"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea\") " pod="openstack/neutron-54675cc74f-8dftn" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:10:02.155550 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-public-tls-certs\") pod \"neutron-54675cc74f-8dftn\" (UID: \"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea\") " pod="openstack/neutron-54675cc74f-8dftn" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:10:02.155647 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-httpd-config\") pod \"neutron-54675cc74f-8dftn\" (UID: \"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea\") " pod="openstack/neutron-54675cc74f-8dftn" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:10:02.157786 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-combined-ca-bundle\") pod \"neutron-54675cc74f-8dftn\" (UID: \"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea\") " pod="openstack/neutron-54675cc74f-8dftn" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:10:02.157932 4606 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm7tr\" (UniqueName: \"kubernetes.io/projected/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-kube-api-access-dm7tr\") pod \"neutron-54675cc74f-8dftn\" (UID: \"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea\") " pod="openstack/neutron-54675cc74f-8dftn" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:10:02.157989 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-ovndb-tls-certs\") pod \"neutron-54675cc74f-8dftn\" (UID: \"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea\") " pod="openstack/neutron-54675cc74f-8dftn" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:10:02.259259 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-combined-ca-bundle\") pod \"neutron-54675cc74f-8dftn\" (UID: \"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea\") " pod="openstack/neutron-54675cc74f-8dftn" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:10:02.259354 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm7tr\" (UniqueName: \"kubernetes.io/projected/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-kube-api-access-dm7tr\") pod \"neutron-54675cc74f-8dftn\" (UID: \"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea\") " pod="openstack/neutron-54675cc74f-8dftn" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:10:02.259396 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-ovndb-tls-certs\") pod \"neutron-54675cc74f-8dftn\" (UID: \"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea\") " pod="openstack/neutron-54675cc74f-8dftn" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:10:02.259437 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-config\") pod \"neutron-54675cc74f-8dftn\" (UID: \"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea\") " pod="openstack/neutron-54675cc74f-8dftn" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:10:02.259461 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-internal-tls-certs\") pod \"neutron-54675cc74f-8dftn\" (UID: \"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea\") " pod="openstack/neutron-54675cc74f-8dftn" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:10:02.259496 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-public-tls-certs\") pod \"neutron-54675cc74f-8dftn\" (UID: \"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea\") " pod="openstack/neutron-54675cc74f-8dftn" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:10:02.259535 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-httpd-config\") pod \"neutron-54675cc74f-8dftn\" (UID: \"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea\") " pod="openstack/neutron-54675cc74f-8dftn" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:10:02.266616 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-ovndb-tls-certs\") pod \"neutron-54675cc74f-8dftn\" (UID: \"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea\") " pod="openstack/neutron-54675cc74f-8dftn" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:10:02.271001 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-internal-tls-certs\") pod \"neutron-54675cc74f-8dftn\" (UID: \"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea\") " pod="openstack/neutron-54675cc74f-8dftn" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:10:02.272056 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-combined-ca-bundle\") pod \"neutron-54675cc74f-8dftn\" (UID: \"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea\") " pod="openstack/neutron-54675cc74f-8dftn" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:10:02.272799 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-httpd-config\") pod \"neutron-54675cc74f-8dftn\" (UID: \"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea\") " pod="openstack/neutron-54675cc74f-8dftn" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:10:02.274189 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-public-tls-certs\") pod \"neutron-54675cc74f-8dftn\" (UID: \"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea\") " pod="openstack/neutron-54675cc74f-8dftn" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:10:02.275147 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-config\") pod \"neutron-54675cc74f-8dftn\" (UID: \"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea\") " pod="openstack/neutron-54675cc74f-8dftn" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:10:02.285963 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm7tr\" (UniqueName: \"kubernetes.io/projected/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-kube-api-access-dm7tr\") pod \"neutron-54675cc74f-8dftn\" (UID: \"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea\") " pod="openstack/neutron-54675cc74f-8dftn" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:10:02.359300 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-54675cc74f-8dftn" Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:10:02.674192 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-kdgqp"] Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:10:02.725041 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 17:10:02 crc kubenswrapper[4606]: I0123 17:10:02.845274 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 17:10:03 crc kubenswrapper[4606]: I0123 17:10:03.017839 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-knccn"] Jan 23 17:10:03 crc kubenswrapper[4606]: I0123 17:10:03.044401 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-5gbjl"] Jan 23 17:10:03 crc kubenswrapper[4606]: I0123 17:10:03.049792 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b7bf8f6df-4w647" event={"ID":"e23f5510-b5f6-42a1-b751-caa502fba254","Type":"ContainerStarted","Data":"a4544f2d7c715ef9f85ecb2a81fd369d6138639b3297403e2e4df3d4e4b1471a"} Jan 23 17:10:03 crc kubenswrapper[4606]: I0123 17:10:03.050017 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7b7bf8f6df-4w647" podUID="e23f5510-b5f6-42a1-b751-caa502fba254" containerName="horizon-log" containerID="cri-o://73dd298dd581b5c8c71be4bcd7a660571fdcf22b16ce794580fc0b943c63328a" gracePeriod=30 Jan 23 17:10:03 crc kubenswrapper[4606]: I0123 17:10:03.050091 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7b7bf8f6df-4w647" podUID="e23f5510-b5f6-42a1-b751-caa502fba254" containerName="horizon" containerID="cri-o://a4544f2d7c715ef9f85ecb2a81fd369d6138639b3297403e2e4df3d4e4b1471a" gracePeriod=30 Jan 23 17:10:03 crc kubenswrapper[4606]: I0123 17:10:03.053914 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57977896f6-qrk2s" event={"ID":"59fc19de-8c69-43e4-bf0d-1cb8ea9a9120","Type":"ContainerStarted","Data":"ee2e80c065ffe14a9551fa689c4481e4792330ef9d6af9801521b59f7173bb20"} Jan 23 17:10:03 crc kubenswrapper[4606]: I0123 17:10:03.055542 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-kdgqp" event={"ID":"bb42cf7b-ed1b-43bb-855b-348b670377b1","Type":"ContainerStarted","Data":"473e95d2a017922aec5ac39eba582945a03bcbfbb595dd535b10a4ad8463b9d1"} Jan 23 17:10:03 crc kubenswrapper[4606]: I0123 17:10:03.057846 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aac377c7-412a-4cea-850d-817045a67cc5","Type":"ContainerStarted","Data":"bbac6b467c157056568f38ca135c9a8f7ccd74fa3ee4a5354a2e2c1397733650"} Jan 23 17:10:03 crc kubenswrapper[4606]: I0123 17:10:03.071618 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7b7bf8f6df-4w647" podStartSLOduration=7.503455977 podStartE2EDuration="42.071561905s" podCreationTimestamp="2026-01-23 17:09:21 +0000 UTC" firstStartedPulling="2026-01-23 17:09:23.455456346 +0000 UTC m=+997.455648752" lastFinishedPulling="2026-01-23 17:09:58.023562274 +0000 UTC m=+1032.023754680" observedRunningTime="2026-01-23 17:10:03.066783365 +0000 UTC m=+1037.066975791" watchObservedRunningTime="2026-01-23 17:10:03.071561905 +0000 UTC m=+1037.071754331" Jan 23 17:10:03 crc kubenswrapper[4606]: I0123 17:10:03.083063 4606 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b9775877b-mcc82" event={"ID":"1dd13584-a1b9-4891-85a7-01afd519f746","Type":"ContainerStarted","Data":"3acaf29972f41a7b4f105c72ea9d73e660deeda3e9c45ee06e8af4bfd1e970f7"} Jan 23 17:10:03 crc kubenswrapper[4606]: I0123 17:10:03.113674 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6fc6b8db9b-nkqzh"] Jan 23 17:10:03 crc kubenswrapper[4606]: I0123 17:10:03.121738 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-57977896f6-qrk2s" podStartSLOduration=35.121715276 podStartE2EDuration="35.121715276s" podCreationTimestamp="2026-01-23 17:09:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:10:03.09781467 +0000 UTC m=+1037.098007086" watchObservedRunningTime="2026-01-23 17:10:03.121715276 +0000 UTC m=+1037.121907682" Jan 23 17:10:03 crc kubenswrapper[4606]: I0123 17:10:03.139321 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5b9775877b-mcc82" podStartSLOduration=35.139293943 podStartE2EDuration="35.139293943s" podCreationTimestamp="2026-01-23 17:09:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:10:03.123609362 +0000 UTC m=+1037.123801778" watchObservedRunningTime="2026-01-23 17:10:03.139293943 +0000 UTC m=+1037.139486359" Jan 23 17:10:03 crc kubenswrapper[4606]: I0123 17:10:03.233307 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-54675cc74f-8dftn"] Jan 23 17:10:03 crc kubenswrapper[4606]: W0123 17:10:03.519509 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc747544b_d43a_46bf_8b8a_dfab974a8788.slice/crio-0be1338f38a201c3a0304fcb6d25b7777711a7fa6fb1ac84d095218e5474f2fc WatchSource:0}: Error finding container 0be1338f38a201c3a0304fcb6d25b7777711a7fa6fb1ac84d095218e5474f2fc: Status 404 returned error can't find the container with id 0be1338f38a201c3a0304fcb6d25b7777711a7fa6fb1ac84d095218e5474f2fc Jan 23 17:10:03 crc kubenswrapper[4606]: W0123 17:10:03.521562 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda60d6331_f97c_4fa2_9b03_1bb41fee8588.slice/crio-c1a970eae489920f5187a255d4f7383fb0fa6024e5b5cb0c14f36788f02bf282 WatchSource:0}: Error finding container c1a970eae489920f5187a255d4f7383fb0fa6024e5b5cb0c14f36788f02bf282: Status 404 returned error can't find the container with id c1a970eae489920f5187a255d4f7383fb0fa6024e5b5cb0c14f36788f02bf282 Jan 23 17:10:03 crc kubenswrapper[4606]: W0123 17:10:03.523755 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e867f98_5330_473c_aa2f_4c5863e70688.slice/crio-e6341f7ae4ca8d4b7e12e5f51f5a9fd5be7c35d7b6fff1d53ccb934231f65567 WatchSource:0}: Error finding container e6341f7ae4ca8d4b7e12e5f51f5a9fd5be7c35d7b6fff1d53ccb934231f65567: Status 404 returned error can't find the container with id e6341f7ae4ca8d4b7e12e5f51f5a9fd5be7c35d7b6fff1d53ccb934231f65567 Jan 23 17:10:03 crc kubenswrapper[4606]: W0123 17:10:03.533416 4606 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9fcf1ef4_b045_4fe1_9fd0_3d81d3bd8d1f.slice/crio-a0744c4b87a50179597b9afff4053d137aacc92e1121ee80f22de3154f30f7f4 WatchSource:0}: Error finding container a0744c4b87a50179597b9afff4053d137aacc92e1121ee80f22de3154f30f7f4: Status 404 returned error can't find the container with id a0744c4b87a50179597b9afff4053d137aacc92e1121ee80f22de3154f30f7f4 Jan 23 17:10:04 crc kubenswrapper[4606]: I0123 17:10:04.091251 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-5gbjl" event={"ID":"c747544b-d43a-46bf-8b8a-dfab974a8788","Type":"ContainerStarted","Data":"0be1338f38a201c3a0304fcb6d25b7777711a7fa6fb1ac84d095218e5474f2fc"} Jan 23 17:10:04 crc kubenswrapper[4606]: I0123 17:10:04.093679 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fc6b8db9b-nkqzh" event={"ID":"7e867f98-5330-473c-aa2f-4c5863e70688","Type":"ContainerStarted","Data":"e6341f7ae4ca8d4b7e12e5f51f5a9fd5be7c35d7b6fff1d53ccb934231f65567"} Jan 23 17:10:04 crc kubenswrapper[4606]: I0123 17:10:04.095068 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a60d6331-f97c-4fa2-9b03-1bb41fee8588","Type":"ContainerStarted","Data":"c1a970eae489920f5187a255d4f7383fb0fa6024e5b5cb0c14f36788f02bf282"} Jan 23 17:10:04 crc kubenswrapper[4606]: I0123 17:10:04.096297 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-knccn" event={"ID":"9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f","Type":"ContainerStarted","Data":"a0744c4b87a50179597b9afff4053d137aacc92e1121ee80f22de3154f30f7f4"} Jan 23 17:10:04 crc kubenswrapper[4606]: I0123 17:10:04.097442 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54675cc74f-8dftn" event={"ID":"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea","Type":"ContainerStarted","Data":"890d5da34653f4e61a71e9dd04a56cdc71bf870f9fa6c703915e380c1fefe4b4"} Jan 23 17:10:06 crc kubenswrapper[4606]: I0123 17:10:06.124611 4606 generic.go:334] "Generic (PLEG): container finished" podID="c747544b-d43a-46bf-8b8a-dfab974a8788" containerID="b7fbbc090f1c121453cf9a1eed2f5beb80e10ed6d5775dcc94167040f94b078d" exitCode=0 Jan 23 17:10:06 crc kubenswrapper[4606]: I0123 17:10:06.125836 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-5gbjl" event={"ID":"c747544b-d43a-46bf-8b8a-dfab974a8788","Type":"ContainerDied","Data":"b7fbbc090f1c121453cf9a1eed2f5beb80e10ed6d5775dcc94167040f94b078d"} Jan 23 17:10:06 crc kubenswrapper[4606]: I0123 17:10:06.135328 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fc6b8db9b-nkqzh" event={"ID":"7e867f98-5330-473c-aa2f-4c5863e70688","Type":"ContainerStarted","Data":"bba25af25e90eff5fd8683e770b611e570b0647ea5ba8d01659f310ddfb83cf3"} Jan 23 17:10:06 crc kubenswrapper[4606]: I0123 17:10:06.135539 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fc6b8db9b-nkqzh" event={"ID":"7e867f98-5330-473c-aa2f-4c5863e70688","Type":"ContainerStarted","Data":"193e5a950c77c71b0baa7b145b272491c50b6361f2795946740d3b995aa28fbc"} Jan 23 17:10:06 crc kubenswrapper[4606]: I0123 17:10:06.136252 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6fc6b8db9b-nkqzh" Jan 23 17:10:06 crc kubenswrapper[4606]: I0123 17:10:06.140737 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"a60d6331-f97c-4fa2-9b03-1bb41fee8588","Type":"ContainerStarted","Data":"b5bce957890340b97bd22b5506f0f32c91e28c179446cb8a28a9f92a58827f63"} Jan 23 17:10:06 crc kubenswrapper[4606]: I0123 17:10:06.145064 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aac377c7-412a-4cea-850d-817045a67cc5","Type":"ContainerStarted","Data":"2603bb7fc171bfe30c04a291c6818ee86e55f5297706fa56410efca24d84d152"} Jan 23 17:10:06 crc kubenswrapper[4606]: I0123 17:10:06.162184 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-knccn" event={"ID":"9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f","Type":"ContainerStarted","Data":"8c97855e228239eb4102ba560d1b1d4c16dfd13eabe648f22a0475e5746e2e17"} Jan 23 17:10:06 crc kubenswrapper[4606]: I0123 17:10:06.181239 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6fc6b8db9b-nkqzh" podStartSLOduration=7.181222072 podStartE2EDuration="7.181222072s" podCreationTimestamp="2026-01-23 17:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:10:06.171084556 +0000 UTC m=+1040.171276962" watchObservedRunningTime="2026-01-23 17:10:06.181222072 +0000 UTC m=+1040.181414478" Jan 23 17:10:06 crc kubenswrapper[4606]: I0123 17:10:06.186895 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54675cc74f-8dftn" event={"ID":"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea","Type":"ContainerStarted","Data":"1096d81101b53aee4e2489b7c87c329beaacc44735756410914bf61b364938a4"} Jan 23 17:10:06 crc kubenswrapper[4606]: I0123 17:10:06.186949 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54675cc74f-8dftn" event={"ID":"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea","Type":"ContainerStarted","Data":"9ef667d4c1e72c38d3bcdf6cd6e1616b6b782e1b711e8b6ee1035a1f64e4da0c"} Jan 23 17:10:06 crc kubenswrapper[4606]: I0123 17:10:06.188145 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-54675cc74f-8dftn" Jan 23 17:10:06 crc kubenswrapper[4606]: I0123 17:10:06.200238 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-knccn" podStartSLOduration=12.20022194 podStartE2EDuration="12.20022194s" podCreationTimestamp="2026-01-23 17:09:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:10:06.187806557 +0000 UTC m=+1040.187998953" watchObservedRunningTime="2026-01-23 17:10:06.20022194 +0000 UTC m=+1040.200414346" Jan 23 17:10:06 crc kubenswrapper[4606]: I0123 17:10:06.206877 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5feb913-77c6-425c-9018-de42010da64e","Type":"ContainerStarted","Data":"854209c6497f5a4fc7f4f7af90cc7a635a38c412d8d463aa2920e40c72f8e0d7"} Jan 23 17:10:06 crc kubenswrapper[4606]: I0123 17:10:06.218027 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-54675cc74f-8dftn" podStartSLOduration=5.218009318 podStartE2EDuration="5.218009318s" podCreationTimestamp="2026-01-23 17:10:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:10:06.21014022 +0000 UTC m=+1040.210332636" watchObservedRunningTime="2026-01-23 17:10:06.218009318 +0000 UTC 
m=+1040.218201724" Jan 23 17:10:06 crc kubenswrapper[4606]: I0123 17:10:06.223842 4606 generic.go:334] "Generic (PLEG): container finished" podID="bb42cf7b-ed1b-43bb-855b-348b670377b1" containerID="3c3094d4059b194bfe65e03a47d2202da8bf4921b3b613ba6e6e5e390f716d27" exitCode=0 Jan 23 17:10:06 crc kubenswrapper[4606]: I0123 17:10:06.223897 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-kdgqp" event={"ID":"bb42cf7b-ed1b-43bb-855b-348b670377b1","Type":"ContainerDied","Data":"3c3094d4059b194bfe65e03a47d2202da8bf4921b3b613ba6e6e5e390f716d27"} Jan 23 17:10:06 crc kubenswrapper[4606]: I0123 17:10:06.790386 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-kdgqp" Jan 23 17:10:06 crc kubenswrapper[4606]: I0123 17:10:06.885643 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb42cf7b-ed1b-43bb-855b-348b670377b1-dns-svc\") pod \"bb42cf7b-ed1b-43bb-855b-348b670377b1\" (UID: \"bb42cf7b-ed1b-43bb-855b-348b670377b1\") " Jan 23 17:10:06 crc kubenswrapper[4606]: I0123 17:10:06.885815 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bb42cf7b-ed1b-43bb-855b-348b670377b1-dns-swift-storage-0\") pod \"bb42cf7b-ed1b-43bb-855b-348b670377b1\" (UID: \"bb42cf7b-ed1b-43bb-855b-348b670377b1\") " Jan 23 17:10:06 crc kubenswrapper[4606]: I0123 17:10:06.885935 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb42cf7b-ed1b-43bb-855b-348b670377b1-config\") pod \"bb42cf7b-ed1b-43bb-855b-348b670377b1\" (UID: \"bb42cf7b-ed1b-43bb-855b-348b670377b1\") " Jan 23 17:10:06 crc kubenswrapper[4606]: I0123 17:10:06.885980 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkxzp\" (UniqueName: \"kubernetes.io/projected/bb42cf7b-ed1b-43bb-855b-348b670377b1-kube-api-access-gkxzp\") pod \"bb42cf7b-ed1b-43bb-855b-348b670377b1\" (UID: \"bb42cf7b-ed1b-43bb-855b-348b670377b1\") " Jan 23 17:10:06 crc kubenswrapper[4606]: I0123 17:10:06.886007 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb42cf7b-ed1b-43bb-855b-348b670377b1-ovsdbserver-sb\") pod \"bb42cf7b-ed1b-43bb-855b-348b670377b1\" (UID: \"bb42cf7b-ed1b-43bb-855b-348b670377b1\") " Jan 23 17:10:06 crc kubenswrapper[4606]: I0123 17:10:06.886087 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bb42cf7b-ed1b-43bb-855b-348b670377b1-ovsdbserver-nb\") pod \"bb42cf7b-ed1b-43bb-855b-348b670377b1\" (UID: \"bb42cf7b-ed1b-43bb-855b-348b670377b1\") " Jan 23 17:10:06 crc kubenswrapper[4606]: I0123 17:10:06.916720 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb42cf7b-ed1b-43bb-855b-348b670377b1-kube-api-access-gkxzp" (OuterVolumeSpecName: "kube-api-access-gkxzp") pod "bb42cf7b-ed1b-43bb-855b-348b670377b1" (UID: "bb42cf7b-ed1b-43bb-855b-348b670377b1"). InnerVolumeSpecName "kube-api-access-gkxzp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:10:06 crc kubenswrapper[4606]: I0123 17:10:06.918566 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb42cf7b-ed1b-43bb-855b-348b670377b1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bb42cf7b-ed1b-43bb-855b-348b670377b1" (UID: "bb42cf7b-ed1b-43bb-855b-348b670377b1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:10:06 crc kubenswrapper[4606]: I0123 17:10:06.961802 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb42cf7b-ed1b-43bb-855b-348b670377b1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bb42cf7b-ed1b-43bb-855b-348b670377b1" (UID: "bb42cf7b-ed1b-43bb-855b-348b670377b1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:10:06 crc kubenswrapper[4606]: I0123 17:10:06.995168 4606 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bb42cf7b-ed1b-43bb-855b-348b670377b1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:06 crc kubenswrapper[4606]: I0123 17:10:06.995200 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkxzp\" (UniqueName: \"kubernetes.io/projected/bb42cf7b-ed1b-43bb-855b-348b670377b1-kube-api-access-gkxzp\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:06 crc kubenswrapper[4606]: I0123 17:10:06.995210 4606 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb42cf7b-ed1b-43bb-855b-348b670377b1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:07 crc kubenswrapper[4606]: I0123 17:10:07.014377 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb42cf7b-ed1b-43bb-855b-348b670377b1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bb42cf7b-ed1b-43bb-855b-348b670377b1" (UID: "bb42cf7b-ed1b-43bb-855b-348b670377b1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:10:07 crc kubenswrapper[4606]: I0123 17:10:07.015735 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb42cf7b-ed1b-43bb-855b-348b670377b1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bb42cf7b-ed1b-43bb-855b-348b670377b1" (UID: "bb42cf7b-ed1b-43bb-855b-348b670377b1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:10:07 crc kubenswrapper[4606]: I0123 17:10:07.026987 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb42cf7b-ed1b-43bb-855b-348b670377b1-config" (OuterVolumeSpecName: "config") pod "bb42cf7b-ed1b-43bb-855b-348b670377b1" (UID: "bb42cf7b-ed1b-43bb-855b-348b670377b1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:10:07 crc kubenswrapper[4606]: I0123 17:10:07.097605 4606 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb42cf7b-ed1b-43bb-855b-348b670377b1-config\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:07 crc kubenswrapper[4606]: I0123 17:10:07.097634 4606 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb42cf7b-ed1b-43bb-855b-348b670377b1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:07 crc kubenswrapper[4606]: I0123 17:10:07.097644 4606 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bb42cf7b-ed1b-43bb-855b-348b670377b1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:07 crc kubenswrapper[4606]: I0123 17:10:07.241023 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a60d6331-f97c-4fa2-9b03-1bb41fee8588","Type":"ContainerStarted","Data":"fbda2224424a1b8088cfefecb67bb9fb9bdd27f37f62b8e0cde7aac24edd38cb"} Jan 23 17:10:07 crc kubenswrapper[4606]: I0123 17:10:07.241138 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a60d6331-f97c-4fa2-9b03-1bb41fee8588" containerName="glance-log" containerID="cri-o://b5bce957890340b97bd22b5506f0f32c91e28c179446cb8a28a9f92a58827f63" gracePeriod=30 Jan 23 17:10:07 crc kubenswrapper[4606]: I0123 17:10:07.241240 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a60d6331-f97c-4fa2-9b03-1bb41fee8588" containerName="glance-httpd" containerID="cri-o://fbda2224424a1b8088cfefecb67bb9fb9bdd27f37f62b8e0cde7aac24edd38cb" gracePeriod=30 Jan 23 17:10:07 crc kubenswrapper[4606]: I0123 17:10:07.251000 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aac377c7-412a-4cea-850d-817045a67cc5","Type":"ContainerStarted","Data":"be82fb7ddfc9013f2d2f3b709cee56a315256f721e8fe41e48c579d4b9bbbbf5"} Jan 23 17:10:07 crc kubenswrapper[4606]: I0123 17:10:07.251370 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="aac377c7-412a-4cea-850d-817045a67cc5" containerName="glance-log" containerID="cri-o://2603bb7fc171bfe30c04a291c6818ee86e55f5297706fa56410efca24d84d152" gracePeriod=30 Jan 23 17:10:07 crc kubenswrapper[4606]: I0123 17:10:07.251575 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="aac377c7-412a-4cea-850d-817045a67cc5" containerName="glance-httpd" containerID="cri-o://be82fb7ddfc9013f2d2f3b709cee56a315256f721e8fe41e48c579d4b9bbbbf5" gracePeriod=30 Jan 23 17:10:07 crc kubenswrapper[4606]: I0123 17:10:07.265313 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-kdgqp" Jan 23 17:10:07 crc kubenswrapper[4606]: I0123 17:10:07.265713 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-kdgqp" event={"ID":"bb42cf7b-ed1b-43bb-855b-348b670377b1","Type":"ContainerDied","Data":"473e95d2a017922aec5ac39eba582945a03bcbfbb595dd535b10a4ad8463b9d1"} Jan 23 17:10:07 crc kubenswrapper[4606]: I0123 17:10:07.265775 4606 scope.go:117] "RemoveContainer" containerID="3c3094d4059b194bfe65e03a47d2202da8bf4921b3b613ba6e6e5e390f716d27" Jan 23 17:10:07 crc kubenswrapper[4606]: I0123 17:10:07.273220 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=12.273198304 podStartE2EDuration="12.273198304s" podCreationTimestamp="2026-01-23 17:09:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:10:07.264959526 +0000 UTC m=+1041.265151932" watchObservedRunningTime="2026-01-23 17:10:07.273198304 +0000 UTC m=+1041.273390710" Jan 23 17:10:07 crc kubenswrapper[4606]: I0123 17:10:07.285975 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-5gbjl" event={"ID":"c747544b-d43a-46bf-8b8a-dfab974a8788","Type":"ContainerStarted","Data":"05af7e4c5a62803403e4c83e43f496365e5314ccd0824e748cfb6311601bc723"} Jan 23 17:10:07 crc kubenswrapper[4606]: I0123 17:10:07.286788 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-5gbjl" Jan 23 17:10:07 crc kubenswrapper[4606]: I0123 17:10:07.316434 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=12.316415181 podStartE2EDuration="12.316415181s" podCreationTimestamp="2026-01-23 17:09:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:10:07.301107017 +0000 UTC m=+1041.301299423" watchObservedRunningTime="2026-01-23 17:10:07.316415181 +0000 UTC m=+1041.316607577" Jan 23 17:10:07 crc kubenswrapper[4606]: I0123 17:10:07.347521 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-5gbjl" podStartSLOduration=8.347479624 podStartE2EDuration="8.347479624s" podCreationTimestamp="2026-01-23 17:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:10:07.33263568 +0000 UTC m=+1041.332828086" watchObservedRunningTime="2026-01-23 17:10:07.347479624 +0000 UTC m=+1041.347672030" Jan 23 17:10:07 crc kubenswrapper[4606]: I0123 17:10:07.392224 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-kdgqp"] Jan 23 17:10:07 crc kubenswrapper[4606]: I0123 17:10:07.400267 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-kdgqp"] Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.097875 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.230604 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a60d6331-f97c-4fa2-9b03-1bb41fee8588-logs\") pod \"a60d6331-f97c-4fa2-9b03-1bb41fee8588\" (UID: \"a60d6331-f97c-4fa2-9b03-1bb41fee8588\") " Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.231367 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zwg2\" (UniqueName: \"kubernetes.io/projected/a60d6331-f97c-4fa2-9b03-1bb41fee8588-kube-api-access-5zwg2\") pod \"a60d6331-f97c-4fa2-9b03-1bb41fee8588\" (UID: \"a60d6331-f97c-4fa2-9b03-1bb41fee8588\") " Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.231211 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a60d6331-f97c-4fa2-9b03-1bb41fee8588-logs" (OuterVolumeSpecName: "logs") pod "a60d6331-f97c-4fa2-9b03-1bb41fee8588" (UID: "a60d6331-f97c-4fa2-9b03-1bb41fee8588"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.232017 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a60d6331-f97c-4fa2-9b03-1bb41fee8588-combined-ca-bundle\") pod \"a60d6331-f97c-4fa2-9b03-1bb41fee8588\" (UID: \"a60d6331-f97c-4fa2-9b03-1bb41fee8588\") " Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.232069 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"a60d6331-f97c-4fa2-9b03-1bb41fee8588\" (UID: \"a60d6331-f97c-4fa2-9b03-1bb41fee8588\") " Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.232115 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a60d6331-f97c-4fa2-9b03-1bb41fee8588-scripts\") pod \"a60d6331-f97c-4fa2-9b03-1bb41fee8588\" (UID: \"a60d6331-f97c-4fa2-9b03-1bb41fee8588\") " Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.232169 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a60d6331-f97c-4fa2-9b03-1bb41fee8588-config-data\") pod \"a60d6331-f97c-4fa2-9b03-1bb41fee8588\" (UID: \"a60d6331-f97c-4fa2-9b03-1bb41fee8588\") " Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.232194 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a60d6331-f97c-4fa2-9b03-1bb41fee8588-httpd-run\") pod \"a60d6331-f97c-4fa2-9b03-1bb41fee8588\" (UID: \"a60d6331-f97c-4fa2-9b03-1bb41fee8588\") " Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.232737 4606 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a60d6331-f97c-4fa2-9b03-1bb41fee8588-logs\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.233040 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a60d6331-f97c-4fa2-9b03-1bb41fee8588-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a60d6331-f97c-4fa2-9b03-1bb41fee8588" (UID: "a60d6331-f97c-4fa2-9b03-1bb41fee8588"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.240760 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.257310 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "a60d6331-f97c-4fa2-9b03-1bb41fee8588" (UID: "a60d6331-f97c-4fa2-9b03-1bb41fee8588"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.261161 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a60d6331-f97c-4fa2-9b03-1bb41fee8588-kube-api-access-5zwg2" (OuterVolumeSpecName: "kube-api-access-5zwg2") pod "a60d6331-f97c-4fa2-9b03-1bb41fee8588" (UID: "a60d6331-f97c-4fa2-9b03-1bb41fee8588"). InnerVolumeSpecName "kube-api-access-5zwg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.261282 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a60d6331-f97c-4fa2-9b03-1bb41fee8588-scripts" (OuterVolumeSpecName: "scripts") pod "a60d6331-f97c-4fa2-9b03-1bb41fee8588" (UID: "a60d6331-f97c-4fa2-9b03-1bb41fee8588"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.277948 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a60d6331-f97c-4fa2-9b03-1bb41fee8588-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a60d6331-f97c-4fa2-9b03-1bb41fee8588" (UID: "a60d6331-f97c-4fa2-9b03-1bb41fee8588"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.325154 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a60d6331-f97c-4fa2-9b03-1bb41fee8588-config-data" (OuterVolumeSpecName: "config-data") pod "a60d6331-f97c-4fa2-9b03-1bb41fee8588" (UID: "a60d6331-f97c-4fa2-9b03-1bb41fee8588"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.330077 4606 generic.go:334] "Generic (PLEG): container finished" podID="a60d6331-f97c-4fa2-9b03-1bb41fee8588" containerID="fbda2224424a1b8088cfefecb67bb9fb9bdd27f37f62b8e0cde7aac24edd38cb" exitCode=0 Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.330113 4606 generic.go:334] "Generic (PLEG): container finished" podID="a60d6331-f97c-4fa2-9b03-1bb41fee8588" containerID="b5bce957890340b97bd22b5506f0f32c91e28c179446cb8a28a9f92a58827f63" exitCode=143 Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.330168 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a60d6331-f97c-4fa2-9b03-1bb41fee8588","Type":"ContainerDied","Data":"fbda2224424a1b8088cfefecb67bb9fb9bdd27f37f62b8e0cde7aac24edd38cb"} Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.330220 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a60d6331-f97c-4fa2-9b03-1bb41fee8588","Type":"ContainerDied","Data":"b5bce957890340b97bd22b5506f0f32c91e28c179446cb8a28a9f92a58827f63"} Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.330239 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a60d6331-f97c-4fa2-9b03-1bb41fee8588","Type":"ContainerDied","Data":"c1a970eae489920f5187a255d4f7383fb0fa6024e5b5cb0c14f36788f02bf282"} Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.330258 4606 scope.go:117] "RemoveContainer" containerID="fbda2224424a1b8088cfefecb67bb9fb9bdd27f37f62b8e0cde7aac24edd38cb" Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.330382 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.334702 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aac377c7-412a-4cea-850d-817045a67cc5-scripts\") pod \"aac377c7-412a-4cea-850d-817045a67cc5\" (UID: \"aac377c7-412a-4cea-850d-817045a67cc5\") " Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.334776 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aac377c7-412a-4cea-850d-817045a67cc5-httpd-run\") pod \"aac377c7-412a-4cea-850d-817045a67cc5\" (UID: \"aac377c7-412a-4cea-850d-817045a67cc5\") " Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.334824 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aac377c7-412a-4cea-850d-817045a67cc5-config-data\") pod \"aac377c7-412a-4cea-850d-817045a67cc5\" (UID: \"aac377c7-412a-4cea-850d-817045a67cc5\") " Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.334842 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aac377c7-412a-4cea-850d-817045a67cc5-combined-ca-bundle\") pod \"aac377c7-412a-4cea-850d-817045a67cc5\" (UID: \"aac377c7-412a-4cea-850d-817045a67cc5\") " Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.334867 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjwvv\" (UniqueName: \"kubernetes.io/projected/aac377c7-412a-4cea-850d-817045a67cc5-kube-api-access-jjwvv\") pod \"aac377c7-412a-4cea-850d-817045a67cc5\" (UID: \"aac377c7-412a-4cea-850d-817045a67cc5\") " Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.334975 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aac377c7-412a-4cea-850d-817045a67cc5-logs\") pod \"aac377c7-412a-4cea-850d-817045a67cc5\" (UID: \"aac377c7-412a-4cea-850d-817045a67cc5\") " Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.335019 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"aac377c7-412a-4cea-850d-817045a67cc5\" (UID: \"aac377c7-412a-4cea-850d-817045a67cc5\") " Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.335349 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zwg2\" (UniqueName: \"kubernetes.io/projected/a60d6331-f97c-4fa2-9b03-1bb41fee8588-kube-api-access-5zwg2\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.335360 4606 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a60d6331-f97c-4fa2-9b03-1bb41fee8588-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.335380 4606 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.335389 4606 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a60d6331-f97c-4fa2-9b03-1bb41fee8588-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:08 crc 
kubenswrapper[4606]: I0123 17:10:08.335399 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a60d6331-f97c-4fa2-9b03-1bb41fee8588-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.335406 4606 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a60d6331-f97c-4fa2-9b03-1bb41fee8588-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.338357 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aac377c7-412a-4cea-850d-817045a67cc5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "aac377c7-412a-4cea-850d-817045a67cc5" (UID: "aac377c7-412a-4cea-850d-817045a67cc5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.341525 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aac377c7-412a-4cea-850d-817045a67cc5-logs" (OuterVolumeSpecName: "logs") pod "aac377c7-412a-4cea-850d-817045a67cc5" (UID: "aac377c7-412a-4cea-850d-817045a67cc5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.343864 4606 generic.go:334] "Generic (PLEG): container finished" podID="139f2d63-a754-454b-8fe5-fb33c30ec2bf" containerID="bfd49bb10524308a057dddd25e9a9a4339af84dcd8a64c4cffe93ed05d806a38" exitCode=0
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.343865 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aac377c7-412a-4cea-850d-817045a67cc5-scripts" (OuterVolumeSpecName: "scripts") pod "aac377c7-412a-4cea-850d-817045a67cc5" (UID: "aac377c7-412a-4cea-850d-817045a67cc5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.343932 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bfvh4" event={"ID":"139f2d63-a754-454b-8fe5-fb33c30ec2bf","Type":"ContainerDied","Data":"bfd49bb10524308a057dddd25e9a9a4339af84dcd8a64c4cffe93ed05d806a38"}
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.353418 4606 generic.go:334] "Generic (PLEG): container finished" podID="aac377c7-412a-4cea-850d-817045a67cc5" containerID="be82fb7ddfc9013f2d2f3b709cee56a315256f721e8fe41e48c579d4b9bbbbf5" exitCode=0
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.353442 4606 generic.go:334] "Generic (PLEG): container finished" podID="aac377c7-412a-4cea-850d-817045a67cc5" containerID="2603bb7fc171bfe30c04a291c6818ee86e55f5297706fa56410efca24d84d152" exitCode=143
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.356322 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.356478 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aac377c7-412a-4cea-850d-817045a67cc5","Type":"ContainerDied","Data":"be82fb7ddfc9013f2d2f3b709cee56a315256f721e8fe41e48c579d4b9bbbbf5"}
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.356504 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aac377c7-412a-4cea-850d-817045a67cc5","Type":"ContainerDied","Data":"2603bb7fc171bfe30c04a291c6818ee86e55f5297706fa56410efca24d84d152"}
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.356516 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aac377c7-412a-4cea-850d-817045a67cc5","Type":"ContainerDied","Data":"bbac6b467c157056568f38ca135c9a8f7ccd74fa3ee4a5354a2e2c1397733650"}
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.365309 4606 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.365795 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "aac377c7-412a-4cea-850d-817045a67cc5" (UID: "aac377c7-412a-4cea-850d-817045a67cc5"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.380775 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aac377c7-412a-4cea-850d-817045a67cc5-kube-api-access-jjwvv" (OuterVolumeSpecName: "kube-api-access-jjwvv") pod "aac377c7-412a-4cea-850d-817045a67cc5" (UID: "aac377c7-412a-4cea-850d-817045a67cc5"). InnerVolumeSpecName "kube-api-access-jjwvv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.422398 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb42cf7b-ed1b-43bb-855b-348b670377b1" path="/var/lib/kubelet/pods/bb42cf7b-ed1b-43bb-855b-348b670377b1/volumes"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.431926 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.432160 4606 scope.go:117] "RemoveContainer" containerID="b5bce957890340b97bd22b5506f0f32c91e28c179446cb8a28a9f92a58827f63"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.438762 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aac377c7-412a-4cea-850d-817045a67cc5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aac377c7-412a-4cea-850d-817045a67cc5" (UID: "aac377c7-412a-4cea-850d-817045a67cc5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.438887 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5b9775877b-mcc82"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.444999 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5b9775877b-mcc82"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.448578 4606 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aac377c7-412a-4cea-850d-817045a67cc5-logs\") on node \"crc\" DevicePath \"\""
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.448629 4606 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" "
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.448640 4606 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aac377c7-412a-4cea-850d-817045a67cc5-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.448648 4606 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aac377c7-412a-4cea-850d-817045a67cc5-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.448657 4606 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aac377c7-412a-4cea-850d-817045a67cc5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.448666 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjwvv\" (UniqueName: \"kubernetes.io/projected/aac377c7-412a-4cea-850d-817045a67cc5-kube-api-access-jjwvv\") on node \"crc\" DevicePath \"\""
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.448675 4606 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\""
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.453868 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.493805 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 23 17:10:08 crc kubenswrapper[4606]: E0123 17:10:08.494632 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aac377c7-412a-4cea-850d-817045a67cc5" containerName="glance-log"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.494654 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="aac377c7-412a-4cea-850d-817045a67cc5" containerName="glance-log"
Jan 23 17:10:08 crc kubenswrapper[4606]: E0123 17:10:08.494669 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aac377c7-412a-4cea-850d-817045a67cc5" containerName="glance-httpd"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.494676 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="aac377c7-412a-4cea-850d-817045a67cc5" containerName="glance-httpd"
Jan 23 17:10:08 crc kubenswrapper[4606]: E0123 17:10:08.494696 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a60d6331-f97c-4fa2-9b03-1bb41fee8588" containerName="glance-log"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.494702 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="a60d6331-f97c-4fa2-9b03-1bb41fee8588" containerName="glance-log"
Jan 23 17:10:08 crc kubenswrapper[4606]: E0123 17:10:08.494723 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb42cf7b-ed1b-43bb-855b-348b670377b1" containerName="init"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.494728 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb42cf7b-ed1b-43bb-855b-348b670377b1" containerName="init"
Jan 23 17:10:08 crc kubenswrapper[4606]: E0123 17:10:08.494753 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a60d6331-f97c-4fa2-9b03-1bb41fee8588" containerName="glance-httpd"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.494758 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="a60d6331-f97c-4fa2-9b03-1bb41fee8588" containerName="glance-httpd"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.495111 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="aac377c7-412a-4cea-850d-817045a67cc5" containerName="glance-log"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.495154 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb42cf7b-ed1b-43bb-855b-348b670377b1" containerName="init"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.495179 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="aac377c7-412a-4cea-850d-817045a67cc5" containerName="glance-httpd"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.495199 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="a60d6331-f97c-4fa2-9b03-1bb41fee8588" containerName="glance-log"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.495241 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="a60d6331-f97c-4fa2-9b03-1bb41fee8588" containerName="glance-httpd"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.495636 4606 scope.go:117] "RemoveContainer" containerID="fbda2224424a1b8088cfefecb67bb9fb9bdd27f37f62b8e0cde7aac24edd38cb"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.496761 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 23 17:10:08 crc kubenswrapper[4606]: E0123 17:10:08.502721 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbda2224424a1b8088cfefecb67bb9fb9bdd27f37f62b8e0cde7aac24edd38cb\": container with ID starting with fbda2224424a1b8088cfefecb67bb9fb9bdd27f37f62b8e0cde7aac24edd38cb not found: ID does not exist" containerID="fbda2224424a1b8088cfefecb67bb9fb9bdd27f37f62b8e0cde7aac24edd38cb"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.502764 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbda2224424a1b8088cfefecb67bb9fb9bdd27f37f62b8e0cde7aac24edd38cb"} err="failed to get container status \"fbda2224424a1b8088cfefecb67bb9fb9bdd27f37f62b8e0cde7aac24edd38cb\": rpc error: code = NotFound desc = could not find container \"fbda2224424a1b8088cfefecb67bb9fb9bdd27f37f62b8e0cde7aac24edd38cb\": container with ID starting with fbda2224424a1b8088cfefecb67bb9fb9bdd27f37f62b8e0cde7aac24edd38cb not found: ID does not exist"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.502793 4606 scope.go:117] "RemoveContainer" containerID="b5bce957890340b97bd22b5506f0f32c91e28c179446cb8a28a9f92a58827f63"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.506376 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.507608 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 23 17:10:08 crc kubenswrapper[4606]: E0123 17:10:08.507813 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5bce957890340b97bd22b5506f0f32c91e28c179446cb8a28a9f92a58827f63\": container with ID starting with b5bce957890340b97bd22b5506f0f32c91e28c179446cb8a28a9f92a58827f63 not found: ID does not exist" containerID="b5bce957890340b97bd22b5506f0f32c91e28c179446cb8a28a9f92a58827f63"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.507844 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5bce957890340b97bd22b5506f0f32c91e28c179446cb8a28a9f92a58827f63"} err="failed to get container status \"b5bce957890340b97bd22b5506f0f32c91e28c179446cb8a28a9f92a58827f63\": rpc error: code = NotFound desc = could not find container \"b5bce957890340b97bd22b5506f0f32c91e28c179446cb8a28a9f92a58827f63\": container with ID starting with b5bce957890340b97bd22b5506f0f32c91e28c179446cb8a28a9f92a58827f63 not found: ID does not exist"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.507867 4606 scope.go:117] "RemoveContainer" containerID="fbda2224424a1b8088cfefecb67bb9fb9bdd27f37f62b8e0cde7aac24edd38cb"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.508034 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.508196 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbda2224424a1b8088cfefecb67bb9fb9bdd27f37f62b8e0cde7aac24edd38cb"} err="failed to get container status \"fbda2224424a1b8088cfefecb67bb9fb9bdd27f37f62b8e0cde7aac24edd38cb\": rpc error: code = NotFound desc = could not find container \"fbda2224424a1b8088cfefecb67bb9fb9bdd27f37f62b8e0cde7aac24edd38cb\": container with ID starting with fbda2224424a1b8088cfefecb67bb9fb9bdd27f37f62b8e0cde7aac24edd38cb not found: ID does not exist"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.508216 4606 scope.go:117] "RemoveContainer" containerID="b5bce957890340b97bd22b5506f0f32c91e28c179446cb8a28a9f92a58827f63"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.509859 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5bce957890340b97bd22b5506f0f32c91e28c179446cb8a28a9f92a58827f63"} err="failed to get container status \"b5bce957890340b97bd22b5506f0f32c91e28c179446cb8a28a9f92a58827f63\": rpc error: code = NotFound desc = could not find container \"b5bce957890340b97bd22b5506f0f32c91e28c179446cb8a28a9f92a58827f63\": container with ID starting with b5bce957890340b97bd22b5506f0f32c91e28c179446cb8a28a9f92a58827f63 not found: ID does not exist"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.509889 4606 scope.go:117] "RemoveContainer" containerID="be82fb7ddfc9013f2d2f3b709cee56a315256f721e8fe41e48c579d4b9bbbbf5"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.553059 4606 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.554729 4606 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\""
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.560042 4606 scope.go:117] "RemoveContainer" containerID="2603bb7fc171bfe30c04a291c6818ee86e55f5297706fa56410efca24d84d152"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.586834 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aac377c7-412a-4cea-850d-817045a67cc5-config-data" (OuterVolumeSpecName: "config-data") pod "aac377c7-412a-4cea-850d-817045a67cc5" (UID: "aac377c7-412a-4cea-850d-817045a67cc5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.595208 4606 scope.go:117] "RemoveContainer" containerID="be82fb7ddfc9013f2d2f3b709cee56a315256f721e8fe41e48c579d4b9bbbbf5" Jan 23 17:10:08 crc kubenswrapper[4606]: E0123 17:10:08.597536 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be82fb7ddfc9013f2d2f3b709cee56a315256f721e8fe41e48c579d4b9bbbbf5\": container with ID starting with be82fb7ddfc9013f2d2f3b709cee56a315256f721e8fe41e48c579d4b9bbbbf5 not found: ID does not exist" containerID="be82fb7ddfc9013f2d2f3b709cee56a315256f721e8fe41e48c579d4b9bbbbf5" Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.598010 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be82fb7ddfc9013f2d2f3b709cee56a315256f721e8fe41e48c579d4b9bbbbf5"} err="failed to get container status \"be82fb7ddfc9013f2d2f3b709cee56a315256f721e8fe41e48c579d4b9bbbbf5\": rpc error: code = NotFound desc = could not find container \"be82fb7ddfc9013f2d2f3b709cee56a315256f721e8fe41e48c579d4b9bbbbf5\": container with ID starting with be82fb7ddfc9013f2d2f3b709cee56a315256f721e8fe41e48c579d4b9bbbbf5 not found: ID does not exist" Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.598045 4606 scope.go:117] "RemoveContainer" containerID="2603bb7fc171bfe30c04a291c6818ee86e55f5297706fa56410efca24d84d152" Jan 23 17:10:08 crc kubenswrapper[4606]: E0123 17:10:08.601868 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2603bb7fc171bfe30c04a291c6818ee86e55f5297706fa56410efca24d84d152\": container with ID starting with 2603bb7fc171bfe30c04a291c6818ee86e55f5297706fa56410efca24d84d152 not found: ID does not exist" containerID="2603bb7fc171bfe30c04a291c6818ee86e55f5297706fa56410efca24d84d152" Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.601903 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2603bb7fc171bfe30c04a291c6818ee86e55f5297706fa56410efca24d84d152"} err="failed to get container status \"2603bb7fc171bfe30c04a291c6818ee86e55f5297706fa56410efca24d84d152\": rpc error: code = NotFound desc = could not find container \"2603bb7fc171bfe30c04a291c6818ee86e55f5297706fa56410efca24d84d152\": container with ID starting with 2603bb7fc171bfe30c04a291c6818ee86e55f5297706fa56410efca24d84d152 not found: ID does not exist" Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.601926 4606 scope.go:117] "RemoveContainer" containerID="be82fb7ddfc9013f2d2f3b709cee56a315256f721e8fe41e48c579d4b9bbbbf5" Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.606777 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be82fb7ddfc9013f2d2f3b709cee56a315256f721e8fe41e48c579d4b9bbbbf5"} err="failed to get container status \"be82fb7ddfc9013f2d2f3b709cee56a315256f721e8fe41e48c579d4b9bbbbf5\": rpc error: code = NotFound desc = could not find container \"be82fb7ddfc9013f2d2f3b709cee56a315256f721e8fe41e48c579d4b9bbbbf5\": container with ID starting with be82fb7ddfc9013f2d2f3b709cee56a315256f721e8fe41e48c579d4b9bbbbf5 not found: ID does not exist" Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.606818 4606 scope.go:117] "RemoveContainer" containerID="2603bb7fc171bfe30c04a291c6818ee86e55f5297706fa56410efca24d84d152" Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.610666 4606 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2603bb7fc171bfe30c04a291c6818ee86e55f5297706fa56410efca24d84d152"} err="failed to get container status \"2603bb7fc171bfe30c04a291c6818ee86e55f5297706fa56410efca24d84d152\": rpc error: code = NotFound desc = could not find container \"2603bb7fc171bfe30c04a291c6818ee86e55f5297706fa56410efca24d84d152\": container with ID starting with 2603bb7fc171bfe30c04a291c6818ee86e55f5297706fa56410efca24d84d152 not found: ID does not exist" Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.614703 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-57977896f6-qrk2s" Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.614834 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-57977896f6-qrk2s" Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.657976 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beedd593-20ce-48a2-a997-3ab8b04c373f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"beedd593-20ce-48a2-a997-3ab8b04c373f\") " pod="openstack/glance-default-external-api-0" Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.658026 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/beedd593-20ce-48a2-a997-3ab8b04c373f-logs\") pod \"glance-default-external-api-0\" (UID: \"beedd593-20ce-48a2-a997-3ab8b04c373f\") " pod="openstack/glance-default-external-api-0" Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.658053 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/beedd593-20ce-48a2-a997-3ab8b04c373f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"beedd593-20ce-48a2-a997-3ab8b04c373f\") " pod="openstack/glance-default-external-api-0" Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.658068 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beedd593-20ce-48a2-a997-3ab8b04c373f-config-data\") pod \"glance-default-external-api-0\" (UID: \"beedd593-20ce-48a2-a997-3ab8b04c373f\") " pod="openstack/glance-default-external-api-0" Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.658113 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2nv7\" (UniqueName: \"kubernetes.io/projected/beedd593-20ce-48a2-a997-3ab8b04c373f-kube-api-access-m2nv7\") pod \"glance-default-external-api-0\" (UID: \"beedd593-20ce-48a2-a997-3ab8b04c373f\") " pod="openstack/glance-default-external-api-0" Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.658136 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/beedd593-20ce-48a2-a997-3ab8b04c373f-scripts\") pod \"glance-default-external-api-0\" (UID: \"beedd593-20ce-48a2-a997-3ab8b04c373f\") " pod="openstack/glance-default-external-api-0" Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.658162 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/beedd593-20ce-48a2-a997-3ab8b04c373f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"beedd593-20ce-48a2-a997-3ab8b04c373f\") " pod="openstack/glance-default-external-api-0" Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.658177 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"beedd593-20ce-48a2-a997-3ab8b04c373f\") " pod="openstack/glance-default-external-api-0" Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.659010 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aac377c7-412a-4cea-850d-817045a67cc5-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.759988 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/beedd593-20ce-48a2-a997-3ab8b04c373f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"beedd593-20ce-48a2-a997-3ab8b04c373f\") " pod="openstack/glance-default-external-api-0" Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.760048 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"beedd593-20ce-48a2-a997-3ab8b04c373f\") " pod="openstack/glance-default-external-api-0" Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.760205 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beedd593-20ce-48a2-a997-3ab8b04c373f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"beedd593-20ce-48a2-a997-3ab8b04c373f\") " pod="openstack/glance-default-external-api-0" Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.760237 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/beedd593-20ce-48a2-a997-3ab8b04c373f-logs\") pod \"glance-default-external-api-0\" (UID: \"beedd593-20ce-48a2-a997-3ab8b04c373f\") " pod="openstack/glance-default-external-api-0" Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.760265 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/beedd593-20ce-48a2-a997-3ab8b04c373f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"beedd593-20ce-48a2-a997-3ab8b04c373f\") " pod="openstack/glance-default-external-api-0" Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.760287 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beedd593-20ce-48a2-a997-3ab8b04c373f-config-data\") pod \"glance-default-external-api-0\" (UID: \"beedd593-20ce-48a2-a997-3ab8b04c373f\") " pod="openstack/glance-default-external-api-0" Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.760397 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2nv7\" (UniqueName: \"kubernetes.io/projected/beedd593-20ce-48a2-a997-3ab8b04c373f-kube-api-access-m2nv7\") pod \"glance-default-external-api-0\" (UID: \"beedd593-20ce-48a2-a997-3ab8b04c373f\") " pod="openstack/glance-default-external-api-0" 
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.760435 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/beedd593-20ce-48a2-a997-3ab8b04c373f-scripts\") pod \"glance-default-external-api-0\" (UID: \"beedd593-20ce-48a2-a997-3ab8b04c373f\") " pod="openstack/glance-default-external-api-0"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.761379 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/beedd593-20ce-48a2-a997-3ab8b04c373f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"beedd593-20ce-48a2-a997-3ab8b04c373f\") " pod="openstack/glance-default-external-api-0"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.761906 4606 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"beedd593-20ce-48a2-a997-3ab8b04c373f\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.762211 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/beedd593-20ce-48a2-a997-3ab8b04c373f-logs\") pod \"glance-default-external-api-0\" (UID: \"beedd593-20ce-48a2-a997-3ab8b04c373f\") " pod="openstack/glance-default-external-api-0"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.773007 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beedd593-20ce-48a2-a997-3ab8b04c373f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"beedd593-20ce-48a2-a997-3ab8b04c373f\") " pod="openstack/glance-default-external-api-0"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.774968 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beedd593-20ce-48a2-a997-3ab8b04c373f-config-data\") pod \"glance-default-external-api-0\" (UID: \"beedd593-20ce-48a2-a997-3ab8b04c373f\") " pod="openstack/glance-default-external-api-0"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.777614 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/beedd593-20ce-48a2-a997-3ab8b04c373f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"beedd593-20ce-48a2-a997-3ab8b04c373f\") " pod="openstack/glance-default-external-api-0"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.781227 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/beedd593-20ce-48a2-a997-3ab8b04c373f-scripts\") pod \"glance-default-external-api-0\" (UID: \"beedd593-20ce-48a2-a997-3ab8b04c373f\") " pod="openstack/glance-default-external-api-0"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.785309 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2nv7\" (UniqueName: \"kubernetes.io/projected/beedd593-20ce-48a2-a997-3ab8b04c373f-kube-api-access-m2nv7\") pod \"glance-default-external-api-0\" (UID: \"beedd593-20ce-48a2-a997-3ab8b04c373f\") " pod="openstack/glance-default-external-api-0"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.788801 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"beedd593-20ce-48a2-a997-3ab8b04c373f\") " pod="openstack/glance-default-external-api-0"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.830417 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.917271 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.924452 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.941213 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.942941 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.945755 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.945999 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 23 17:10:08 crc kubenswrapper[4606]: I0123 17:10:08.958457 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 17:10:09 crc kubenswrapper[4606]: I0123 17:10:09.065936 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"6c1be0c7-7122-4d22-abfc-8335101f93d7\") " pod="openstack/glance-default-internal-api-0"
Jan 23 17:10:09 crc kubenswrapper[4606]: I0123 17:10:09.065981 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c1be0c7-7122-4d22-abfc-8335101f93d7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6c1be0c7-7122-4d22-abfc-8335101f93d7\") " pod="openstack/glance-default-internal-api-0"
Jan 23 17:10:09 crc kubenswrapper[4606]: I0123 17:10:09.066036 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6bgp\" (UniqueName: \"kubernetes.io/projected/6c1be0c7-7122-4d22-abfc-8335101f93d7-kube-api-access-n6bgp\") pod \"glance-default-internal-api-0\" (UID: \"6c1be0c7-7122-4d22-abfc-8335101f93d7\") " pod="openstack/glance-default-internal-api-0"
Jan 23 17:10:09 crc kubenswrapper[4606]: I0123 17:10:09.066107 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c1be0c7-7122-4d22-abfc-8335101f93d7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6c1be0c7-7122-4d22-abfc-8335101f93d7\") " pod="openstack/glance-default-internal-api-0"
Jan 23 17:10:09 crc kubenswrapper[4606]: I0123 17:10:09.066134 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c1be0c7-7122-4d22-abfc-8335101f93d7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6c1be0c7-7122-4d22-abfc-8335101f93d7\") " pod="openstack/glance-default-internal-api-0"
Jan 23 17:10:09 crc kubenswrapper[4606]: I0123 17:10:09.066186 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c1be0c7-7122-4d22-abfc-8335101f93d7-logs\") pod \"glance-default-internal-api-0\" (UID: \"6c1be0c7-7122-4d22-abfc-8335101f93d7\") " pod="openstack/glance-default-internal-api-0"
Jan 23 17:10:09 crc kubenswrapper[4606]: I0123 17:10:09.066229 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c1be0c7-7122-4d22-abfc-8335101f93d7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6c1be0c7-7122-4d22-abfc-8335101f93d7\") " pod="openstack/glance-default-internal-api-0"
Jan 23 17:10:09 crc kubenswrapper[4606]: I0123 17:10:09.066251 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c1be0c7-7122-4d22-abfc-8335101f93d7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6c1be0c7-7122-4d22-abfc-8335101f93d7\") " pod="openstack/glance-default-internal-api-0"
Jan 23 17:10:09 crc kubenswrapper[4606]: I0123 17:10:09.168620 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"6c1be0c7-7122-4d22-abfc-8335101f93d7\") " pod="openstack/glance-default-internal-api-0"
Jan 23 17:10:09 crc kubenswrapper[4606]: I0123 17:10:09.168655 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c1be0c7-7122-4d22-abfc-8335101f93d7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6c1be0c7-7122-4d22-abfc-8335101f93d7\") " pod="openstack/glance-default-internal-api-0"
Jan 23 17:10:09 crc kubenswrapper[4606]: I0123 17:10:09.168733 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6bgp\" (UniqueName: \"kubernetes.io/projected/6c1be0c7-7122-4d22-abfc-8335101f93d7-kube-api-access-n6bgp\") pod \"glance-default-internal-api-0\" (UID: \"6c1be0c7-7122-4d22-abfc-8335101f93d7\") " pod="openstack/glance-default-internal-api-0"
Jan 23 17:10:09 crc kubenswrapper[4606]: I0123 17:10:09.168795 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c1be0c7-7122-4d22-abfc-8335101f93d7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6c1be0c7-7122-4d22-abfc-8335101f93d7\") " pod="openstack/glance-default-internal-api-0"
Jan 23 17:10:09 crc kubenswrapper[4606]: I0123 17:10:09.168810 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c1be0c7-7122-4d22-abfc-8335101f93d7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6c1be0c7-7122-4d22-abfc-8335101f93d7\") " pod="openstack/glance-default-internal-api-0"
Jan 23 17:10:09 crc kubenswrapper[4606]: I0123 17:10:09.168847 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c1be0c7-7122-4d22-abfc-8335101f93d7-logs\") pod \"glance-default-internal-api-0\" (UID: \"6c1be0c7-7122-4d22-abfc-8335101f93d7\") " pod="openstack/glance-default-internal-api-0"
pod="openstack/glance-default-internal-api-0" Jan 23 17:10:09 crc kubenswrapper[4606]: I0123 17:10:09.168882 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c1be0c7-7122-4d22-abfc-8335101f93d7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6c1be0c7-7122-4d22-abfc-8335101f93d7\") " pod="openstack/glance-default-internal-api-0" Jan 23 17:10:09 crc kubenswrapper[4606]: I0123 17:10:09.168899 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c1be0c7-7122-4d22-abfc-8335101f93d7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6c1be0c7-7122-4d22-abfc-8335101f93d7\") " pod="openstack/glance-default-internal-api-0" Jan 23 17:10:09 crc kubenswrapper[4606]: I0123 17:10:09.171427 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c1be0c7-7122-4d22-abfc-8335101f93d7-logs\") pod \"glance-default-internal-api-0\" (UID: \"6c1be0c7-7122-4d22-abfc-8335101f93d7\") " pod="openstack/glance-default-internal-api-0" Jan 23 17:10:09 crc kubenswrapper[4606]: I0123 17:10:09.174839 4606 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"6c1be0c7-7122-4d22-abfc-8335101f93d7\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Jan 23 17:10:09 crc kubenswrapper[4606]: I0123 17:10:09.176569 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c1be0c7-7122-4d22-abfc-8335101f93d7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6c1be0c7-7122-4d22-abfc-8335101f93d7\") " pod="openstack/glance-default-internal-api-0" Jan 23 17:10:09 crc kubenswrapper[4606]: I0123 17:10:09.177555 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c1be0c7-7122-4d22-abfc-8335101f93d7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6c1be0c7-7122-4d22-abfc-8335101f93d7\") " pod="openstack/glance-default-internal-api-0" Jan 23 17:10:09 crc kubenswrapper[4606]: I0123 17:10:09.178165 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c1be0c7-7122-4d22-abfc-8335101f93d7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6c1be0c7-7122-4d22-abfc-8335101f93d7\") " pod="openstack/glance-default-internal-api-0" Jan 23 17:10:09 crc kubenswrapper[4606]: I0123 17:10:09.186124 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c1be0c7-7122-4d22-abfc-8335101f93d7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6c1be0c7-7122-4d22-abfc-8335101f93d7\") " pod="openstack/glance-default-internal-api-0" Jan 23 17:10:09 crc kubenswrapper[4606]: I0123 17:10:09.194232 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c1be0c7-7122-4d22-abfc-8335101f93d7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6c1be0c7-7122-4d22-abfc-8335101f93d7\") " pod="openstack/glance-default-internal-api-0" Jan 23 17:10:09 crc kubenswrapper[4606]: I0123 17:10:09.202479 4606 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6bgp\" (UniqueName: \"kubernetes.io/projected/6c1be0c7-7122-4d22-abfc-8335101f93d7-kube-api-access-n6bgp\") pod \"glance-default-internal-api-0\" (UID: \"6c1be0c7-7122-4d22-abfc-8335101f93d7\") " pod="openstack/glance-default-internal-api-0" Jan 23 17:10:09 crc kubenswrapper[4606]: I0123 17:10:09.218477 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"6c1be0c7-7122-4d22-abfc-8335101f93d7\") " pod="openstack/glance-default-internal-api-0" Jan 23 17:10:09 crc kubenswrapper[4606]: I0123 17:10:09.265899 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 17:10:09 crc kubenswrapper[4606]: I0123 17:10:09.776408 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 17:10:09 crc kubenswrapper[4606]: I0123 17:10:09.896820 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.154940 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-bfvh4" Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.299024 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/139f2d63-a754-454b-8fe5-fb33c30ec2bf-config-data\") pod \"139f2d63-a754-454b-8fe5-fb33c30ec2bf\" (UID: \"139f2d63-a754-454b-8fe5-fb33c30ec2bf\") " Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.299097 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/139f2d63-a754-454b-8fe5-fb33c30ec2bf-scripts\") pod \"139f2d63-a754-454b-8fe5-fb33c30ec2bf\" (UID: \"139f2d63-a754-454b-8fe5-fb33c30ec2bf\") " Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.299192 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/139f2d63-a754-454b-8fe5-fb33c30ec2bf-logs\") pod \"139f2d63-a754-454b-8fe5-fb33c30ec2bf\" (UID: \"139f2d63-a754-454b-8fe5-fb33c30ec2bf\") " Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.299233 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/139f2d63-a754-454b-8fe5-fb33c30ec2bf-combined-ca-bundle\") pod \"139f2d63-a754-454b-8fe5-fb33c30ec2bf\" (UID: \"139f2d63-a754-454b-8fe5-fb33c30ec2bf\") " Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.299277 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fljdw\" (UniqueName: \"kubernetes.io/projected/139f2d63-a754-454b-8fe5-fb33c30ec2bf-kube-api-access-fljdw\") pod \"139f2d63-a754-454b-8fe5-fb33c30ec2bf\" (UID: \"139f2d63-a754-454b-8fe5-fb33c30ec2bf\") " Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.300809 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/139f2d63-a754-454b-8fe5-fb33c30ec2bf-logs" (OuterVolumeSpecName: "logs") pod "139f2d63-a754-454b-8fe5-fb33c30ec2bf" (UID: "139f2d63-a754-454b-8fe5-fb33c30ec2bf"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.304449 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/139f2d63-a754-454b-8fe5-fb33c30ec2bf-scripts" (OuterVolumeSpecName: "scripts") pod "139f2d63-a754-454b-8fe5-fb33c30ec2bf" (UID: "139f2d63-a754-454b-8fe5-fb33c30ec2bf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.309757 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/139f2d63-a754-454b-8fe5-fb33c30ec2bf-kube-api-access-fljdw" (OuterVolumeSpecName: "kube-api-access-fljdw") pod "139f2d63-a754-454b-8fe5-fb33c30ec2bf" (UID: "139f2d63-a754-454b-8fe5-fb33c30ec2bf"). InnerVolumeSpecName "kube-api-access-fljdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:10:10 crc kubenswrapper[4606]: E0123 17:10:10.350798 4606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/139f2d63-a754-454b-8fe5-fb33c30ec2bf-config-data podName:139f2d63-a754-454b-8fe5-fb33c30ec2bf nodeName:}" failed. No retries permitted until 2026-01-23 17:10:10.850770683 +0000 UTC m=+1044.850963089 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/139f2d63-a754-454b-8fe5-fb33c30ec2bf-config-data") pod "139f2d63-a754-454b-8fe5-fb33c30ec2bf" (UID: "139f2d63-a754-454b-8fe5-fb33c30ec2bf") : error deleting /var/lib/kubelet/pods/139f2d63-a754-454b-8fe5-fb33c30ec2bf/volume-subpaths: remove /var/lib/kubelet/pods/139f2d63-a754-454b-8fe5-fb33c30ec2bf/volume-subpaths: no such file or directory Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.353852 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/139f2d63-a754-454b-8fe5-fb33c30ec2bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "139f2d63-a754-454b-8fe5-fb33c30ec2bf" (UID: "139f2d63-a754-454b-8fe5-fb33c30ec2bf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.389000 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a60d6331-f97c-4fa2-9b03-1bb41fee8588" path="/var/lib/kubelet/pods/a60d6331-f97c-4fa2-9b03-1bb41fee8588/volumes" Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.390022 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aac377c7-412a-4cea-850d-817045a67cc5" path="/var/lib/kubelet/pods/aac377c7-412a-4cea-850d-817045a67cc5/volumes" Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.400924 4606 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/139f2d63-a754-454b-8fe5-fb33c30ec2bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.400961 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fljdw\" (UniqueName: \"kubernetes.io/projected/139f2d63-a754-454b-8fe5-fb33c30ec2bf-kube-api-access-fljdw\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.400973 4606 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/139f2d63-a754-454b-8fe5-fb33c30ec2bf-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.400984 4606 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/139f2d63-a754-454b-8fe5-fb33c30ec2bf-logs\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.419295 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"beedd593-20ce-48a2-a997-3ab8b04c373f","Type":"ContainerStarted","Data":"dc52a0d797d97d29a47b3122d0c9f0587d44a602fc8eb796614687ab8f29f3f4"} Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.423245 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bfvh4" event={"ID":"139f2d63-a754-454b-8fe5-fb33c30ec2bf","Type":"ContainerDied","Data":"43cceeb37c9bfc58a9cda2405847779a6a7d7a376eab46b473b1e1fec38b2d7a"} Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.423277 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43cceeb37c9bfc58a9cda2405847779a6a7d7a376eab46b473b1e1fec38b2d7a" Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.423338 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-bfvh4" Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.425106 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c1be0c7-7122-4d22-abfc-8335101f93d7","Type":"ContainerStarted","Data":"ff9025e3950a2bc9e106d28752694e8699b93d293f67b576b2bf444ee7f3dab1"} Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.481969 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-876fbb8bd-6zlfb"] Jan 23 17:10:10 crc kubenswrapper[4606]: E0123 17:10:10.482337 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="139f2d63-a754-454b-8fe5-fb33c30ec2bf" containerName="placement-db-sync" Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.482350 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="139f2d63-a754-454b-8fe5-fb33c30ec2bf" containerName="placement-db-sync" Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.482528 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="139f2d63-a754-454b-8fe5-fb33c30ec2bf" containerName="placement-db-sync" Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.483428 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-876fbb8bd-6zlfb" Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.486394 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.486701 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.495894 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-876fbb8bd-6zlfb"] Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.620841 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnt69\" (UniqueName: \"kubernetes.io/projected/b498e8d0-f6bd-474c-8722-c0ccf11188bf-kube-api-access-jnt69\") pod \"placement-876fbb8bd-6zlfb\" (UID: \"b498e8d0-f6bd-474c-8722-c0ccf11188bf\") " pod="openstack/placement-876fbb8bd-6zlfb" Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.621255 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b498e8d0-f6bd-474c-8722-c0ccf11188bf-logs\") pod \"placement-876fbb8bd-6zlfb\" (UID: \"b498e8d0-f6bd-474c-8722-c0ccf11188bf\") " pod="openstack/placement-876fbb8bd-6zlfb" Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.621376 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b498e8d0-f6bd-474c-8722-c0ccf11188bf-combined-ca-bundle\") pod \"placement-876fbb8bd-6zlfb\" (UID: \"b498e8d0-f6bd-474c-8722-c0ccf11188bf\") " pod="openstack/placement-876fbb8bd-6zlfb" Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.621472 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b498e8d0-f6bd-474c-8722-c0ccf11188bf-internal-tls-certs\") pod \"placement-876fbb8bd-6zlfb\" (UID: \"b498e8d0-f6bd-474c-8722-c0ccf11188bf\") " pod="openstack/placement-876fbb8bd-6zlfb" Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.621499 4606 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b498e8d0-f6bd-474c-8722-c0ccf11188bf-public-tls-certs\") pod \"placement-876fbb8bd-6zlfb\" (UID: \"b498e8d0-f6bd-474c-8722-c0ccf11188bf\") " pod="openstack/placement-876fbb8bd-6zlfb" Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.621638 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b498e8d0-f6bd-474c-8722-c0ccf11188bf-config-data\") pod \"placement-876fbb8bd-6zlfb\" (UID: \"b498e8d0-f6bd-474c-8722-c0ccf11188bf\") " pod="openstack/placement-876fbb8bd-6zlfb" Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.621724 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b498e8d0-f6bd-474c-8722-c0ccf11188bf-scripts\") pod \"placement-876fbb8bd-6zlfb\" (UID: \"b498e8d0-f6bd-474c-8722-c0ccf11188bf\") " pod="openstack/placement-876fbb8bd-6zlfb" Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.723555 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b498e8d0-f6bd-474c-8722-c0ccf11188bf-combined-ca-bundle\") pod \"placement-876fbb8bd-6zlfb\" (UID: \"b498e8d0-f6bd-474c-8722-c0ccf11188bf\") " pod="openstack/placement-876fbb8bd-6zlfb" Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.723650 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b498e8d0-f6bd-474c-8722-c0ccf11188bf-public-tls-certs\") pod \"placement-876fbb8bd-6zlfb\" (UID: \"b498e8d0-f6bd-474c-8722-c0ccf11188bf\") " pod="openstack/placement-876fbb8bd-6zlfb" Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.723666 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b498e8d0-f6bd-474c-8722-c0ccf11188bf-internal-tls-certs\") pod \"placement-876fbb8bd-6zlfb\" (UID: \"b498e8d0-f6bd-474c-8722-c0ccf11188bf\") " pod="openstack/placement-876fbb8bd-6zlfb" Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.723733 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b498e8d0-f6bd-474c-8722-c0ccf11188bf-config-data\") pod \"placement-876fbb8bd-6zlfb\" (UID: \"b498e8d0-f6bd-474c-8722-c0ccf11188bf\") " pod="openstack/placement-876fbb8bd-6zlfb" Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.723753 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b498e8d0-f6bd-474c-8722-c0ccf11188bf-scripts\") pod \"placement-876fbb8bd-6zlfb\" (UID: \"b498e8d0-f6bd-474c-8722-c0ccf11188bf\") " pod="openstack/placement-876fbb8bd-6zlfb" Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.723785 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b498e8d0-f6bd-474c-8722-c0ccf11188bf-logs\") pod \"placement-876fbb8bd-6zlfb\" (UID: \"b498e8d0-f6bd-474c-8722-c0ccf11188bf\") " pod="openstack/placement-876fbb8bd-6zlfb" Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.723800 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnt69\" (UniqueName: 
\"kubernetes.io/projected/b498e8d0-f6bd-474c-8722-c0ccf11188bf-kube-api-access-jnt69\") pod \"placement-876fbb8bd-6zlfb\" (UID: \"b498e8d0-f6bd-474c-8722-c0ccf11188bf\") " pod="openstack/placement-876fbb8bd-6zlfb" Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.724979 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b498e8d0-f6bd-474c-8722-c0ccf11188bf-logs\") pod \"placement-876fbb8bd-6zlfb\" (UID: \"b498e8d0-f6bd-474c-8722-c0ccf11188bf\") " pod="openstack/placement-876fbb8bd-6zlfb" Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.734038 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b498e8d0-f6bd-474c-8722-c0ccf11188bf-internal-tls-certs\") pod \"placement-876fbb8bd-6zlfb\" (UID: \"b498e8d0-f6bd-474c-8722-c0ccf11188bf\") " pod="openstack/placement-876fbb8bd-6zlfb" Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.736223 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b498e8d0-f6bd-474c-8722-c0ccf11188bf-combined-ca-bundle\") pod \"placement-876fbb8bd-6zlfb\" (UID: \"b498e8d0-f6bd-474c-8722-c0ccf11188bf\") " pod="openstack/placement-876fbb8bd-6zlfb" Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.736249 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b498e8d0-f6bd-474c-8722-c0ccf11188bf-public-tls-certs\") pod \"placement-876fbb8bd-6zlfb\" (UID: \"b498e8d0-f6bd-474c-8722-c0ccf11188bf\") " pod="openstack/placement-876fbb8bd-6zlfb" Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.738320 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b498e8d0-f6bd-474c-8722-c0ccf11188bf-config-data\") pod \"placement-876fbb8bd-6zlfb\" (UID: \"b498e8d0-f6bd-474c-8722-c0ccf11188bf\") " pod="openstack/placement-876fbb8bd-6zlfb" Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.749848 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b498e8d0-f6bd-474c-8722-c0ccf11188bf-scripts\") pod \"placement-876fbb8bd-6zlfb\" (UID: \"b498e8d0-f6bd-474c-8722-c0ccf11188bf\") " pod="openstack/placement-876fbb8bd-6zlfb" Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.762093 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnt69\" (UniqueName: \"kubernetes.io/projected/b498e8d0-f6bd-474c-8722-c0ccf11188bf-kube-api-access-jnt69\") pod \"placement-876fbb8bd-6zlfb\" (UID: \"b498e8d0-f6bd-474c-8722-c0ccf11188bf\") " pod="openstack/placement-876fbb8bd-6zlfb" Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.851263 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-876fbb8bd-6zlfb" Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.928469 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/139f2d63-a754-454b-8fe5-fb33c30ec2bf-config-data\") pod \"139f2d63-a754-454b-8fe5-fb33c30ec2bf\" (UID: \"139f2d63-a754-454b-8fe5-fb33c30ec2bf\") " Jan 23 17:10:10 crc kubenswrapper[4606]: I0123 17:10:10.934539 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/139f2d63-a754-454b-8fe5-fb33c30ec2bf-config-data" (OuterVolumeSpecName: "config-data") pod "139f2d63-a754-454b-8fe5-fb33c30ec2bf" (UID: "139f2d63-a754-454b-8fe5-fb33c30ec2bf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:10:11 crc kubenswrapper[4606]: I0123 17:10:11.031486 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/139f2d63-a754-454b-8fe5-fb33c30ec2bf-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:11 crc kubenswrapper[4606]: I0123 17:10:11.271844 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-876fbb8bd-6zlfb"] Jan 23 17:10:11 crc kubenswrapper[4606]: I0123 17:10:11.440552 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c1be0c7-7122-4d22-abfc-8335101f93d7","Type":"ContainerStarted","Data":"81202c239191192d6e7505f82860b63b8c62da4215d76afd1856e1514674ec88"} Jan 23 17:10:11 crc kubenswrapper[4606]: I0123 17:10:11.441888 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"beedd593-20ce-48a2-a997-3ab8b04c373f","Type":"ContainerStarted","Data":"a41585911dbecd7b4c8e4832aa3755c7c7e4ea568ad413a096d90dab83279c34"} Jan 23 17:10:11 crc kubenswrapper[4606]: I0123 17:10:11.470794 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-876fbb8bd-6zlfb" event={"ID":"b498e8d0-f6bd-474c-8722-c0ccf11188bf","Type":"ContainerStarted","Data":"f1be18a18b6c2824a1ad73df80c0b7e6109a735529f4c6eeb6e75bef69d7e9da"} Jan 23 17:10:12 crc kubenswrapper[4606]: I0123 17:10:12.235390 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7b7bf8f6df-4w647" Jan 23 17:10:12 crc kubenswrapper[4606]: I0123 17:10:12.491042 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tjf95" event={"ID":"3f5144cd-47f2-4ce5-81d6-68884dbfc739","Type":"ContainerStarted","Data":"6bf66b0648928ec3da6406e29a4b4815276089c8eeb8cacd1ec04a5038cf556b"} Jan 23 17:10:12 crc kubenswrapper[4606]: I0123 17:10:12.499574 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-876fbb8bd-6zlfb" event={"ID":"b498e8d0-f6bd-474c-8722-c0ccf11188bf","Type":"ContainerStarted","Data":"a3fdbae6c4ffaa5a1a122e3bfba2b9cbb049b0eeb235eddc4cf82c3247881ebe"} Jan 23 17:10:12 crc kubenswrapper[4606]: I0123 17:10:12.499622 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-876fbb8bd-6zlfb" event={"ID":"b498e8d0-f6bd-474c-8722-c0ccf11188bf","Type":"ContainerStarted","Data":"a3cba9140b8b0e9af6510e563ce8f8df51abe59322dfdff03b12c3c4fb84dcd5"} Jan 23 17:10:12 crc kubenswrapper[4606]: I0123 17:10:12.499755 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-876fbb8bd-6zlfb" Jan 23 17:10:12 crc kubenswrapper[4606]: I0123 17:10:12.504987 4606 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c1be0c7-7122-4d22-abfc-8335101f93d7","Type":"ContainerStarted","Data":"cd6b97d70e4add71caeb23954067f81d25774803be51d1e8dba0badb66d7b904"} Jan 23 17:10:12 crc kubenswrapper[4606]: I0123 17:10:12.534835 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-876fbb8bd-6zlfb" podStartSLOduration=2.534817499 podStartE2EDuration="2.534817499s" podCreationTimestamp="2026-01-23 17:10:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:10:12.532515991 +0000 UTC m=+1046.532708397" watchObservedRunningTime="2026-01-23 17:10:12.534817499 +0000 UTC m=+1046.535009905" Jan 23 17:10:12 crc kubenswrapper[4606]: I0123 17:10:12.536004 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-tjf95" podStartSLOduration=5.653199789 podStartE2EDuration="54.535997768s" podCreationTimestamp="2026-01-23 17:09:18 +0000 UTC" firstStartedPulling="2026-01-23 17:09:21.107511335 +0000 UTC m=+995.107703741" lastFinishedPulling="2026-01-23 17:10:09.990309314 +0000 UTC m=+1043.990501720" observedRunningTime="2026-01-23 17:10:12.512501457 +0000 UTC m=+1046.512693863" watchObservedRunningTime="2026-01-23 17:10:12.535997768 +0000 UTC m=+1046.536190174" Jan 23 17:10:12 crc kubenswrapper[4606]: I0123 17:10:12.597759 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.597738973 podStartE2EDuration="4.597738973s" podCreationTimestamp="2026-01-23 17:10:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:10:12.580099079 +0000 UTC m=+1046.580291485" watchObservedRunningTime="2026-01-23 17:10:12.597738973 +0000 UTC m=+1046.597931379" Jan 23 17:10:13 crc kubenswrapper[4606]: I0123 17:10:13.534902 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"beedd593-20ce-48a2-a997-3ab8b04c373f","Type":"ContainerStarted","Data":"ae9b3c9445087d6ceed6c9421d26afc2ec8a39b0cb9e5d8ce0c4a68e3a385ea0"} Jan 23 17:10:13 crc kubenswrapper[4606]: I0123 17:10:13.535548 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-876fbb8bd-6zlfb" Jan 23 17:10:13 crc kubenswrapper[4606]: I0123 17:10:13.572932 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.572916033 podStartE2EDuration="5.572916033s" podCreationTimestamp="2026-01-23 17:10:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:10:13.570496192 +0000 UTC m=+1047.570688608" watchObservedRunningTime="2026-01-23 17:10:13.572916033 +0000 UTC m=+1047.573108439" Jan 23 17:10:14 crc kubenswrapper[4606]: I0123 17:10:14.551153 4606 generic.go:334] "Generic (PLEG): container finished" podID="9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f" containerID="8c97855e228239eb4102ba560d1b1d4c16dfd13eabe648f22a0475e5746e2e17" exitCode=0 Jan 23 17:10:14 crc kubenswrapper[4606]: I0123 17:10:14.552100 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-knccn" 
event={"ID":"9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f","Type":"ContainerDied","Data":"8c97855e228239eb4102ba560d1b1d4c16dfd13eabe648f22a0475e5746e2e17"} Jan 23 17:10:14 crc kubenswrapper[4606]: I0123 17:10:14.886957 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-5gbjl" Jan 23 17:10:14 crc kubenswrapper[4606]: I0123 17:10:14.957847 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-qm9ff"] Jan 23 17:10:14 crc kubenswrapper[4606]: I0123 17:10:14.958081 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-qm9ff" podUID="837e5e5e-4078-45ed-9c4e-bf691a2840ed" containerName="dnsmasq-dns" containerID="cri-o://b839f40f6ca194f9235c96180eaf6b787694c40fbbe0cc772dc4e2040840bb9e" gracePeriod=10 Jan 23 17:10:15 crc kubenswrapper[4606]: I0123 17:10:15.562145 4606 generic.go:334] "Generic (PLEG): container finished" podID="837e5e5e-4078-45ed-9c4e-bf691a2840ed" containerID="b839f40f6ca194f9235c96180eaf6b787694c40fbbe0cc772dc4e2040840bb9e" exitCode=0 Jan 23 17:10:15 crc kubenswrapper[4606]: I0123 17:10:15.562328 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-qm9ff" event={"ID":"837e5e5e-4078-45ed-9c4e-bf691a2840ed","Type":"ContainerDied","Data":"b839f40f6ca194f9235c96180eaf6b787694c40fbbe0cc772dc4e2040840bb9e"} Jan 23 17:10:16 crc kubenswrapper[4606]: I0123 17:10:16.867332 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-knccn" Jan 23 17:10:16 crc kubenswrapper[4606]: I0123 17:10:16.964331 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f-fernet-keys\") pod \"9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f\" (UID: \"9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f\") " Jan 23 17:10:16 crc kubenswrapper[4606]: I0123 17:10:16.964540 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f-scripts\") pod \"9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f\" (UID: \"9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f\") " Jan 23 17:10:16 crc kubenswrapper[4606]: I0123 17:10:16.964658 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f-config-data\") pod \"9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f\" (UID: \"9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f\") " Jan 23 17:10:16 crc kubenswrapper[4606]: I0123 17:10:16.964698 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f-credential-keys\") pod \"9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f\" (UID: \"9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f\") " Jan 23 17:10:16 crc kubenswrapper[4606]: I0123 17:10:16.964745 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f-combined-ca-bundle\") pod \"9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f\" (UID: \"9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f\") " Jan 23 17:10:16 crc kubenswrapper[4606]: I0123 17:10:16.964780 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-strcg\" (UniqueName: 
\"kubernetes.io/projected/9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f-kube-api-access-strcg\") pod \"9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f\" (UID: \"9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f\") " Jan 23 17:10:16 crc kubenswrapper[4606]: I0123 17:10:16.973797 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f-scripts" (OuterVolumeSpecName: "scripts") pod "9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f" (UID: "9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:10:16 crc kubenswrapper[4606]: I0123 17:10:16.977708 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f" (UID: "9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:10:16 crc kubenswrapper[4606]: I0123 17:10:16.982801 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f" (UID: "9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:10:16 crc kubenswrapper[4606]: I0123 17:10:16.983370 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f-kube-api-access-strcg" (OuterVolumeSpecName: "kube-api-access-strcg") pod "9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f" (UID: "9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f"). InnerVolumeSpecName "kube-api-access-strcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.041853 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f" (UID: "9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.063277 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f-config-data" (OuterVolumeSpecName: "config-data") pod "9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f" (UID: "9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.067365 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.067394 4606 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.067406 4606 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.067416 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-strcg\" (UniqueName: \"kubernetes.io/projected/9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f-kube-api-access-strcg\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.067427 4606 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.067436 4606 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.246831 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-qm9ff" Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.282837 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bxrk\" (UniqueName: \"kubernetes.io/projected/837e5e5e-4078-45ed-9c4e-bf691a2840ed-kube-api-access-8bxrk\") pod \"837e5e5e-4078-45ed-9c4e-bf691a2840ed\" (UID: \"837e5e5e-4078-45ed-9c4e-bf691a2840ed\") " Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.283149 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/837e5e5e-4078-45ed-9c4e-bf691a2840ed-dns-swift-storage-0\") pod \"837e5e5e-4078-45ed-9c4e-bf691a2840ed\" (UID: \"837e5e5e-4078-45ed-9c4e-bf691a2840ed\") " Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.283168 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/837e5e5e-4078-45ed-9c4e-bf691a2840ed-dns-svc\") pod \"837e5e5e-4078-45ed-9c4e-bf691a2840ed\" (UID: \"837e5e5e-4078-45ed-9c4e-bf691a2840ed\") " Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.283235 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/837e5e5e-4078-45ed-9c4e-bf691a2840ed-ovsdbserver-nb\") pod \"837e5e5e-4078-45ed-9c4e-bf691a2840ed\" (UID: \"837e5e5e-4078-45ed-9c4e-bf691a2840ed\") " Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.283304 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/837e5e5e-4078-45ed-9c4e-bf691a2840ed-ovsdbserver-sb\") pod 
\"837e5e5e-4078-45ed-9c4e-bf691a2840ed\" (UID: \"837e5e5e-4078-45ed-9c4e-bf691a2840ed\") " Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.283397 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/837e5e5e-4078-45ed-9c4e-bf691a2840ed-config\") pod \"837e5e5e-4078-45ed-9c4e-bf691a2840ed\" (UID: \"837e5e5e-4078-45ed-9c4e-bf691a2840ed\") " Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.327729 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/837e5e5e-4078-45ed-9c4e-bf691a2840ed-kube-api-access-8bxrk" (OuterVolumeSpecName: "kube-api-access-8bxrk") pod "837e5e5e-4078-45ed-9c4e-bf691a2840ed" (UID: "837e5e5e-4078-45ed-9c4e-bf691a2840ed"). InnerVolumeSpecName "kube-api-access-8bxrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.387261 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bxrk\" (UniqueName: \"kubernetes.io/projected/837e5e5e-4078-45ed-9c4e-bf691a2840ed-kube-api-access-8bxrk\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.432209 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/837e5e5e-4078-45ed-9c4e-bf691a2840ed-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "837e5e5e-4078-45ed-9c4e-bf691a2840ed" (UID: "837e5e5e-4078-45ed-9c4e-bf691a2840ed"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.441526 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/837e5e5e-4078-45ed-9c4e-bf691a2840ed-config" (OuterVolumeSpecName: "config") pod "837e5e5e-4078-45ed-9c4e-bf691a2840ed" (UID: "837e5e5e-4078-45ed-9c4e-bf691a2840ed"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.450749 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/837e5e5e-4078-45ed-9c4e-bf691a2840ed-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "837e5e5e-4078-45ed-9c4e-bf691a2840ed" (UID: "837e5e5e-4078-45ed-9c4e-bf691a2840ed"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.461290 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/837e5e5e-4078-45ed-9c4e-bf691a2840ed-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "837e5e5e-4078-45ed-9c4e-bf691a2840ed" (UID: "837e5e5e-4078-45ed-9c4e-bf691a2840ed"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.468139 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/837e5e5e-4078-45ed-9c4e-bf691a2840ed-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "837e5e5e-4078-45ed-9c4e-bf691a2840ed" (UID: "837e5e5e-4078-45ed-9c4e-bf691a2840ed"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.489988 4606 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/837e5e5e-4078-45ed-9c4e-bf691a2840ed-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.490024 4606 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/837e5e5e-4078-45ed-9c4e-bf691a2840ed-config\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.490033 4606 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/837e5e5e-4078-45ed-9c4e-bf691a2840ed-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.490043 4606 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/837e5e5e-4078-45ed-9c4e-bf691a2840ed-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.490052 4606 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/837e5e5e-4078-45ed-9c4e-bf691a2840ed-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.579317 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-knccn" event={"ID":"9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f","Type":"ContainerDied","Data":"a0744c4b87a50179597b9afff4053d137aacc92e1121ee80f22de3154f30f7f4"} Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.579358 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0744c4b87a50179597b9afff4053d137aacc92e1121ee80f22de3154f30f7f4" Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.579419 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-knccn" Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.584451 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5feb913-77c6-425c-9018-de42010da64e","Type":"ContainerStarted","Data":"44eeade27a61086a37897aeb567f16e935c017b5f994ba41bdfa600bc5105e20"} Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.589096 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-qm9ff" event={"ID":"837e5e5e-4078-45ed-9c4e-bf691a2840ed","Type":"ContainerDied","Data":"0a5e6f443ae43745fab52c2437a77a33c82bb0bf2cf8a260399696eec9a244e6"} Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.589125 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-qm9ff" Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.589146 4606 scope.go:117] "RemoveContainer" containerID="b839f40f6ca194f9235c96180eaf6b787694c40fbbe0cc772dc4e2040840bb9e" Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.591881 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-lzz5x" event={"ID":"4cb98288-9db7-4a8f-9b1d-1d57a547062b","Type":"ContainerStarted","Data":"c943a34586c3106f2095d0f74dadf0913d2d539bc0caa98725104d0537bf40fe"} Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.619457 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-lzz5x" podStartSLOduration=3.835066438 podStartE2EDuration="59.619440436s" podCreationTimestamp="2026-01-23 17:09:18 +0000 UTC" firstStartedPulling="2026-01-23 17:09:21.110301374 +0000 UTC m=+995.110493780" lastFinishedPulling="2026-01-23 17:10:16.894675372 +0000 UTC m=+1050.894867778" observedRunningTime="2026-01-23 17:10:17.605916315 +0000 UTC m=+1051.606108721" watchObservedRunningTime="2026-01-23 17:10:17.619440436 +0000 UTC m=+1051.619632842" Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.636289 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-qm9ff"] Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.640804 4606 scope.go:117] "RemoveContainer" containerID="67e20a7e66c7596087c2c7c42e083ece7cd51d1110914a2eaa5d62fd519f1566" Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.646198 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-qm9ff"] Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.984752 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-57c54b4584-qk24c"] Jan 23 17:10:17 crc kubenswrapper[4606]: E0123 17:10:17.985342 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="837e5e5e-4078-45ed-9c4e-bf691a2840ed" containerName="init" Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.985358 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="837e5e5e-4078-45ed-9c4e-bf691a2840ed" containerName="init" Jan 23 17:10:17 crc kubenswrapper[4606]: E0123 17:10:17.985388 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="837e5e5e-4078-45ed-9c4e-bf691a2840ed" containerName="dnsmasq-dns" Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.985394 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="837e5e5e-4078-45ed-9c4e-bf691a2840ed" containerName="dnsmasq-dns" Jan 23 17:10:17 crc kubenswrapper[4606]: E0123 17:10:17.985418 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f" containerName="keystone-bootstrap" Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.985425 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f" containerName="keystone-bootstrap" Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.985651 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="837e5e5e-4078-45ed-9c4e-bf691a2840ed" containerName="dnsmasq-dns" Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.985687 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f" containerName="keystone-bootstrap" Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.986404 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-57c54b4584-qk24c" Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.988888 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.989456 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.989671 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.990245 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.994601 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-cxp6x" Jan 23 17:10:17 crc kubenswrapper[4606]: I0123 17:10:17.994755 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 17:10:18 crc kubenswrapper[4606]: I0123 17:10:18.001620 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-57c54b4584-qk24c"] Jan 23 17:10:18 crc kubenswrapper[4606]: I0123 17:10:18.119229 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9127ed79-5478-464a-9e71-6026bc46e83a-combined-ca-bundle\") pod \"keystone-57c54b4584-qk24c\" (UID: \"9127ed79-5478-464a-9e71-6026bc46e83a\") " pod="openstack/keystone-57c54b4584-qk24c" Jan 23 17:10:18 crc kubenswrapper[4606]: I0123 17:10:18.119300 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9127ed79-5478-464a-9e71-6026bc46e83a-internal-tls-certs\") pod \"keystone-57c54b4584-qk24c\" (UID: \"9127ed79-5478-464a-9e71-6026bc46e83a\") " pod="openstack/keystone-57c54b4584-qk24c" Jan 23 17:10:18 crc kubenswrapper[4606]: I0123 17:10:18.119322 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9127ed79-5478-464a-9e71-6026bc46e83a-public-tls-certs\") pod \"keystone-57c54b4584-qk24c\" (UID: \"9127ed79-5478-464a-9e71-6026bc46e83a\") " pod="openstack/keystone-57c54b4584-qk24c" Jan 23 17:10:18 crc kubenswrapper[4606]: I0123 17:10:18.119428 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9127ed79-5478-464a-9e71-6026bc46e83a-credential-keys\") pod \"keystone-57c54b4584-qk24c\" (UID: \"9127ed79-5478-464a-9e71-6026bc46e83a\") " pod="openstack/keystone-57c54b4584-qk24c" Jan 23 17:10:18 crc kubenswrapper[4606]: I0123 17:10:18.119537 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9127ed79-5478-464a-9e71-6026bc46e83a-scripts\") pod \"keystone-57c54b4584-qk24c\" (UID: \"9127ed79-5478-464a-9e71-6026bc46e83a\") " pod="openstack/keystone-57c54b4584-qk24c" Jan 23 17:10:18 crc kubenswrapper[4606]: I0123 17:10:18.119634 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9127ed79-5478-464a-9e71-6026bc46e83a-config-data\") pod \"keystone-57c54b4584-qk24c\" (UID: 
\"9127ed79-5478-464a-9e71-6026bc46e83a\") " pod="openstack/keystone-57c54b4584-qk24c" Jan 23 17:10:18 crc kubenswrapper[4606]: I0123 17:10:18.119662 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9127ed79-5478-464a-9e71-6026bc46e83a-fernet-keys\") pod \"keystone-57c54b4584-qk24c\" (UID: \"9127ed79-5478-464a-9e71-6026bc46e83a\") " pod="openstack/keystone-57c54b4584-qk24c" Jan 23 17:10:18 crc kubenswrapper[4606]: I0123 17:10:18.119805 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jsj7\" (UniqueName: \"kubernetes.io/projected/9127ed79-5478-464a-9e71-6026bc46e83a-kube-api-access-6jsj7\") pod \"keystone-57c54b4584-qk24c\" (UID: \"9127ed79-5478-464a-9e71-6026bc46e83a\") " pod="openstack/keystone-57c54b4584-qk24c" Jan 23 17:10:18 crc kubenswrapper[4606]: I0123 17:10:18.222371 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9127ed79-5478-464a-9e71-6026bc46e83a-internal-tls-certs\") pod \"keystone-57c54b4584-qk24c\" (UID: \"9127ed79-5478-464a-9e71-6026bc46e83a\") " pod="openstack/keystone-57c54b4584-qk24c" Jan 23 17:10:18 crc kubenswrapper[4606]: I0123 17:10:18.222419 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9127ed79-5478-464a-9e71-6026bc46e83a-public-tls-certs\") pod \"keystone-57c54b4584-qk24c\" (UID: \"9127ed79-5478-464a-9e71-6026bc46e83a\") " pod="openstack/keystone-57c54b4584-qk24c" Jan 23 17:10:18 crc kubenswrapper[4606]: I0123 17:10:18.222462 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9127ed79-5478-464a-9e71-6026bc46e83a-credential-keys\") pod \"keystone-57c54b4584-qk24c\" (UID: \"9127ed79-5478-464a-9e71-6026bc46e83a\") " pod="openstack/keystone-57c54b4584-qk24c" Jan 23 17:10:18 crc kubenswrapper[4606]: I0123 17:10:18.222532 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9127ed79-5478-464a-9e71-6026bc46e83a-scripts\") pod \"keystone-57c54b4584-qk24c\" (UID: \"9127ed79-5478-464a-9e71-6026bc46e83a\") " pod="openstack/keystone-57c54b4584-qk24c" Jan 23 17:10:18 crc kubenswrapper[4606]: I0123 17:10:18.222608 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9127ed79-5478-464a-9e71-6026bc46e83a-config-data\") pod \"keystone-57c54b4584-qk24c\" (UID: \"9127ed79-5478-464a-9e71-6026bc46e83a\") " pod="openstack/keystone-57c54b4584-qk24c" Jan 23 17:10:18 crc kubenswrapper[4606]: I0123 17:10:18.222624 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9127ed79-5478-464a-9e71-6026bc46e83a-fernet-keys\") pod \"keystone-57c54b4584-qk24c\" (UID: \"9127ed79-5478-464a-9e71-6026bc46e83a\") " pod="openstack/keystone-57c54b4584-qk24c" Jan 23 17:10:18 crc kubenswrapper[4606]: I0123 17:10:18.222647 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jsj7\" (UniqueName: \"kubernetes.io/projected/9127ed79-5478-464a-9e71-6026bc46e83a-kube-api-access-6jsj7\") pod \"keystone-57c54b4584-qk24c\" (UID: \"9127ed79-5478-464a-9e71-6026bc46e83a\") " 
pod="openstack/keystone-57c54b4584-qk24c" Jan 23 17:10:18 crc kubenswrapper[4606]: I0123 17:10:18.222710 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9127ed79-5478-464a-9e71-6026bc46e83a-combined-ca-bundle\") pod \"keystone-57c54b4584-qk24c\" (UID: \"9127ed79-5478-464a-9e71-6026bc46e83a\") " pod="openstack/keystone-57c54b4584-qk24c" Jan 23 17:10:18 crc kubenswrapper[4606]: I0123 17:10:18.228531 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9127ed79-5478-464a-9e71-6026bc46e83a-credential-keys\") pod \"keystone-57c54b4584-qk24c\" (UID: \"9127ed79-5478-464a-9e71-6026bc46e83a\") " pod="openstack/keystone-57c54b4584-qk24c" Jan 23 17:10:18 crc kubenswrapper[4606]: I0123 17:10:18.247887 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9127ed79-5478-464a-9e71-6026bc46e83a-fernet-keys\") pod \"keystone-57c54b4584-qk24c\" (UID: \"9127ed79-5478-464a-9e71-6026bc46e83a\") " pod="openstack/keystone-57c54b4584-qk24c" Jan 23 17:10:18 crc kubenswrapper[4606]: I0123 17:10:18.248032 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9127ed79-5478-464a-9e71-6026bc46e83a-internal-tls-certs\") pod \"keystone-57c54b4584-qk24c\" (UID: \"9127ed79-5478-464a-9e71-6026bc46e83a\") " pod="openstack/keystone-57c54b4584-qk24c" Jan 23 17:10:18 crc kubenswrapper[4606]: I0123 17:10:18.249714 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9127ed79-5478-464a-9e71-6026bc46e83a-config-data\") pod \"keystone-57c54b4584-qk24c\" (UID: \"9127ed79-5478-464a-9e71-6026bc46e83a\") " pod="openstack/keystone-57c54b4584-qk24c" Jan 23 17:10:18 crc kubenswrapper[4606]: I0123 17:10:18.250328 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9127ed79-5478-464a-9e71-6026bc46e83a-public-tls-certs\") pod \"keystone-57c54b4584-qk24c\" (UID: \"9127ed79-5478-464a-9e71-6026bc46e83a\") " pod="openstack/keystone-57c54b4584-qk24c" Jan 23 17:10:18 crc kubenswrapper[4606]: I0123 17:10:18.251434 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9127ed79-5478-464a-9e71-6026bc46e83a-combined-ca-bundle\") pod \"keystone-57c54b4584-qk24c\" (UID: \"9127ed79-5478-464a-9e71-6026bc46e83a\") " pod="openstack/keystone-57c54b4584-qk24c" Jan 23 17:10:18 crc kubenswrapper[4606]: I0123 17:10:18.254908 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9127ed79-5478-464a-9e71-6026bc46e83a-scripts\") pod \"keystone-57c54b4584-qk24c\" (UID: \"9127ed79-5478-464a-9e71-6026bc46e83a\") " pod="openstack/keystone-57c54b4584-qk24c" Jan 23 17:10:18 crc kubenswrapper[4606]: I0123 17:10:18.277956 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jsj7\" (UniqueName: \"kubernetes.io/projected/9127ed79-5478-464a-9e71-6026bc46e83a-kube-api-access-6jsj7\") pod \"keystone-57c54b4584-qk24c\" (UID: \"9127ed79-5478-464a-9e71-6026bc46e83a\") " pod="openstack/keystone-57c54b4584-qk24c" Jan 23 17:10:18 crc kubenswrapper[4606]: I0123 17:10:18.304683 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-57c54b4584-qk24c" Jan 23 17:10:18 crc kubenswrapper[4606]: I0123 17:10:18.380292 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="837e5e5e-4078-45ed-9c4e-bf691a2840ed" path="/var/lib/kubelet/pods/837e5e5e-4078-45ed-9c4e-bf691a2840ed/volumes" Jan 23 17:10:18 crc kubenswrapper[4606]: I0123 17:10:18.438820 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b9775877b-mcc82" podUID="1dd13584-a1b9-4891-85a7-01afd519f746" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Jan 23 17:10:18 crc kubenswrapper[4606]: I0123 17:10:18.615491 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-57977896f6-qrk2s" podUID="59fc19de-8c69-43e4-bf0d-1cb8ea9a9120" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 23 17:10:18 crc kubenswrapper[4606]: I0123 17:10:18.826208 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-57c54b4584-qk24c"] Jan 23 17:10:18 crc kubenswrapper[4606]: I0123 17:10:18.831076 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 23 17:10:18 crc kubenswrapper[4606]: I0123 17:10:18.831223 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 23 17:10:18 crc kubenswrapper[4606]: I0123 17:10:18.880208 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 23 17:10:18 crc kubenswrapper[4606]: I0123 17:10:18.891681 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 23 17:10:19 crc kubenswrapper[4606]: I0123 17:10:19.267122 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 23 17:10:19 crc kubenswrapper[4606]: I0123 17:10:19.267169 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 23 17:10:19 crc kubenswrapper[4606]: I0123 17:10:19.316169 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 23 17:10:19 crc kubenswrapper[4606]: I0123 17:10:19.409355 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 23 17:10:19 crc kubenswrapper[4606]: I0123 17:10:19.618132 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-57c54b4584-qk24c" event={"ID":"9127ed79-5478-464a-9e71-6026bc46e83a","Type":"ContainerStarted","Data":"6b74f479eb2d9d0de66a8a0a9b6048958e99cb893f91b78febf2d826a8ec5217"} Jan 23 17:10:19 crc kubenswrapper[4606]: I0123 17:10:19.618173 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 23 17:10:19 crc kubenswrapper[4606]: I0123 17:10:19.618183 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-57c54b4584-qk24c" event={"ID":"9127ed79-5478-464a-9e71-6026bc46e83a","Type":"ContainerStarted","Data":"2ddb7579a9d9eb67ee2713019245ee232324a2c5a13f87437407b5276275b8e1"} Jan 23 17:10:19 crc 
kubenswrapper[4606]: I0123 17:10:19.618193 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 23 17:10:19 crc kubenswrapper[4606]: I0123 17:10:19.618289 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 23 17:10:19 crc kubenswrapper[4606]: I0123 17:10:19.618379 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 23 17:10:19 crc kubenswrapper[4606]: I0123 17:10:19.618396 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-57c54b4584-qk24c" Jan 23 17:10:19 crc kubenswrapper[4606]: I0123 17:10:19.648537 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-57c54b4584-qk24c" podStartSLOduration=2.6485123980000003 podStartE2EDuration="2.648512398s" podCreationTimestamp="2026-01-23 17:10:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:10:19.635017359 +0000 UTC m=+1053.635209765" watchObservedRunningTime="2026-01-23 17:10:19.648512398 +0000 UTC m=+1053.648704804" Jan 23 17:10:20 crc kubenswrapper[4606]: I0123 17:10:20.639258 4606 generic.go:334] "Generic (PLEG): container finished" podID="3f5144cd-47f2-4ce5-81d6-68884dbfc739" containerID="6bf66b0648928ec3da6406e29a4b4815276089c8eeb8cacd1ec04a5038cf556b" exitCode=0 Jan 23 17:10:20 crc kubenswrapper[4606]: I0123 17:10:20.639353 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tjf95" event={"ID":"3f5144cd-47f2-4ce5-81d6-68884dbfc739","Type":"ContainerDied","Data":"6bf66b0648928ec3da6406e29a4b4815276089c8eeb8cacd1ec04a5038cf556b"} Jan 23 17:10:21 crc kubenswrapper[4606]: I0123 17:10:21.647028 4606 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 17:10:21 crc kubenswrapper[4606]: I0123 17:10:21.647285 4606 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 17:10:21 crc kubenswrapper[4606]: I0123 17:10:21.647066 4606 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 17:10:21 crc kubenswrapper[4606]: I0123 17:10:21.647391 4606 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 17:10:22 crc kubenswrapper[4606]: I0123 17:10:22.167215 4606 util.go:48] "No ready sandbox for pod can be found. 
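
The failing horizon startup probes above are plain HTTPS GETs against the pod IP, reported with the transport error verbatim ("connect: connection refused"). A rough stand-in (not kubelet's prober) for the same check; the InsecureSkipVerify setting is an assumption for a self-signed endpoint:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func probe(url string) error {
	client := &http.Client{
		Timeout: 1 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("Probe failed: %w", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 400 {
		return fmt.Errorf("Probe failed: status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// URL copied from the failing startup probe above.
	if err := probe("https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/"); err != nil {
		fmt.Println(err)
	}
}
```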
Need to start a new one" pod="openstack/cinder-db-sync-tjf95" Jan 23 17:10:22 crc kubenswrapper[4606]: I0123 17:10:22.305092 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f5144cd-47f2-4ce5-81d6-68884dbfc739-etc-machine-id\") pod \"3f5144cd-47f2-4ce5-81d6-68884dbfc739\" (UID: \"3f5144cd-47f2-4ce5-81d6-68884dbfc739\") " Jan 23 17:10:22 crc kubenswrapper[4606]: I0123 17:10:22.305138 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f5144cd-47f2-4ce5-81d6-68884dbfc739-scripts\") pod \"3f5144cd-47f2-4ce5-81d6-68884dbfc739\" (UID: \"3f5144cd-47f2-4ce5-81d6-68884dbfc739\") " Jan 23 17:10:22 crc kubenswrapper[4606]: I0123 17:10:22.305220 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f5144cd-47f2-4ce5-81d6-68884dbfc739-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3f5144cd-47f2-4ce5-81d6-68884dbfc739" (UID: "3f5144cd-47f2-4ce5-81d6-68884dbfc739"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 17:10:22 crc kubenswrapper[4606]: I0123 17:10:22.305254 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3f5144cd-47f2-4ce5-81d6-68884dbfc739-db-sync-config-data\") pod \"3f5144cd-47f2-4ce5-81d6-68884dbfc739\" (UID: \"3f5144cd-47f2-4ce5-81d6-68884dbfc739\") " Jan 23 17:10:22 crc kubenswrapper[4606]: I0123 17:10:22.305311 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfdgz\" (UniqueName: \"kubernetes.io/projected/3f5144cd-47f2-4ce5-81d6-68884dbfc739-kube-api-access-cfdgz\") pod \"3f5144cd-47f2-4ce5-81d6-68884dbfc739\" (UID: \"3f5144cd-47f2-4ce5-81d6-68884dbfc739\") " Jan 23 17:10:22 crc kubenswrapper[4606]: I0123 17:10:22.305392 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f5144cd-47f2-4ce5-81d6-68884dbfc739-combined-ca-bundle\") pod \"3f5144cd-47f2-4ce5-81d6-68884dbfc739\" (UID: \"3f5144cd-47f2-4ce5-81d6-68884dbfc739\") " Jan 23 17:10:22 crc kubenswrapper[4606]: I0123 17:10:22.305432 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f5144cd-47f2-4ce5-81d6-68884dbfc739-config-data\") pod \"3f5144cd-47f2-4ce5-81d6-68884dbfc739\" (UID: \"3f5144cd-47f2-4ce5-81d6-68884dbfc739\") " Jan 23 17:10:22 crc kubenswrapper[4606]: I0123 17:10:22.305776 4606 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f5144cd-47f2-4ce5-81d6-68884dbfc739-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:22 crc kubenswrapper[4606]: I0123 17:10:22.320785 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f5144cd-47f2-4ce5-81d6-68884dbfc739-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3f5144cd-47f2-4ce5-81d6-68884dbfc739" (UID: "3f5144cd-47f2-4ce5-81d6-68884dbfc739"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:10:22 crc kubenswrapper[4606]: I0123 17:10:22.333696 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f5144cd-47f2-4ce5-81d6-68884dbfc739-scripts" (OuterVolumeSpecName: "scripts") pod "3f5144cd-47f2-4ce5-81d6-68884dbfc739" (UID: "3f5144cd-47f2-4ce5-81d6-68884dbfc739"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:10:22 crc kubenswrapper[4606]: I0123 17:10:22.384801 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f5144cd-47f2-4ce5-81d6-68884dbfc739-kube-api-access-cfdgz" (OuterVolumeSpecName: "kube-api-access-cfdgz") pod "3f5144cd-47f2-4ce5-81d6-68884dbfc739" (UID: "3f5144cd-47f2-4ce5-81d6-68884dbfc739"). InnerVolumeSpecName "kube-api-access-cfdgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:10:22 crc kubenswrapper[4606]: I0123 17:10:22.394761 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f5144cd-47f2-4ce5-81d6-68884dbfc739-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f5144cd-47f2-4ce5-81d6-68884dbfc739" (UID: "3f5144cd-47f2-4ce5-81d6-68884dbfc739"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:10:22 crc kubenswrapper[4606]: I0123 17:10:22.408185 4606 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f5144cd-47f2-4ce5-81d6-68884dbfc739-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:22 crc kubenswrapper[4606]: I0123 17:10:22.408220 4606 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f5144cd-47f2-4ce5-81d6-68884dbfc739-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:22 crc kubenswrapper[4606]: I0123 17:10:22.408229 4606 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3f5144cd-47f2-4ce5-81d6-68884dbfc739-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:22 crc kubenswrapper[4606]: I0123 17:10:22.408239 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfdgz\" (UniqueName: \"kubernetes.io/projected/3f5144cd-47f2-4ce5-81d6-68884dbfc739-kube-api-access-cfdgz\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:22 crc kubenswrapper[4606]: I0123 17:10:22.468674 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f5144cd-47f2-4ce5-81d6-68884dbfc739-config-data" (OuterVolumeSpecName: "config-data") pod "3f5144cd-47f2-4ce5-81d6-68884dbfc739" (UID: "3f5144cd-47f2-4ce5-81d6-68884dbfc739"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:10:22 crc kubenswrapper[4606]: I0123 17:10:22.541693 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f5144cd-47f2-4ce5-81d6-68884dbfc739-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:22 crc kubenswrapper[4606]: I0123 17:10:22.658135 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tjf95" event={"ID":"3f5144cd-47f2-4ce5-81d6-68884dbfc739","Type":"ContainerDied","Data":"4abde82dd98568e9a42b34947e3a20f8c0b41a3ffee283cc5266839d7b8cf173"} Jan 23 17:10:22 crc kubenswrapper[4606]: I0123 17:10:22.658173 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4abde82dd98568e9a42b34947e3a20f8c0b41a3ffee283cc5266839d7b8cf173" Jan 23 17:10:22 crc kubenswrapper[4606]: I0123 17:10:22.658224 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-tjf95" Jan 23 17:10:22 crc kubenswrapper[4606]: I0123 17:10:22.670964 4606 generic.go:334] "Generic (PLEG): container finished" podID="4cb98288-9db7-4a8f-9b1d-1d57a547062b" containerID="c943a34586c3106f2095d0f74dadf0913d2d539bc0caa98725104d0537bf40fe" exitCode=0 Jan 23 17:10:22 crc kubenswrapper[4606]: I0123 17:10:22.671007 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-lzz5x" event={"ID":"4cb98288-9db7-4a8f-9b1d-1d57a547062b","Type":"ContainerDied","Data":"c943a34586c3106f2095d0f74dadf0913d2d539bc0caa98725104d0537bf40fe"} Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.008935 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 17:10:23 crc kubenswrapper[4606]: E0123 17:10:23.009519 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f5144cd-47f2-4ce5-81d6-68884dbfc739" containerName="cinder-db-sync" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.009536 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f5144cd-47f2-4ce5-81d6-68884dbfc739" containerName="cinder-db-sync" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.009724 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f5144cd-47f2-4ce5-81d6-68884dbfc739" containerName="cinder-db-sync" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.010561 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.022717 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.022929 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.024026 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-5rrnn" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.024063 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.026495 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.119358 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b895b5785-lptqz"] Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.121099 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b895b5785-lptqz" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.138334 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b895b5785-lptqz"] Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.173306 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20bd9f99-2697-4e95-91cb-18d490d4b19f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"20bd9f99-2697-4e95-91cb-18d490d4b19f\") " pod="openstack/cinder-scheduler-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.173407 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20bd9f99-2697-4e95-91cb-18d490d4b19f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"20bd9f99-2697-4e95-91cb-18d490d4b19f\") " pod="openstack/cinder-scheduler-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.173475 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk9tc\" (UniqueName: \"kubernetes.io/projected/20bd9f99-2697-4e95-91cb-18d490d4b19f-kube-api-access-kk9tc\") pod \"cinder-scheduler-0\" (UID: \"20bd9f99-2697-4e95-91cb-18d490d4b19f\") " pod="openstack/cinder-scheduler-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.173841 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20bd9f99-2697-4e95-91cb-18d490d4b19f-scripts\") pod \"cinder-scheduler-0\" (UID: \"20bd9f99-2697-4e95-91cb-18d490d4b19f\") " pod="openstack/cinder-scheduler-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.173977 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/20bd9f99-2697-4e95-91cb-18d490d4b19f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"20bd9f99-2697-4e95-91cb-18d490d4b19f\") " pod="openstack/cinder-scheduler-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.174008 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/20bd9f99-2697-4e95-91cb-18d490d4b19f-config-data\") pod \"cinder-scheduler-0\" (UID: \"20bd9f99-2697-4e95-91cb-18d490d4b19f\") " pod="openstack/cinder-scheduler-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.276627 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28-ovsdbserver-sb\") pod \"dnsmasq-dns-b895b5785-lptqz\" (UID: \"2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28\") " pod="openstack/dnsmasq-dns-b895b5785-lptqz" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.276762 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20bd9f99-2697-4e95-91cb-18d490d4b19f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"20bd9f99-2697-4e95-91cb-18d490d4b19f\") " pod="openstack/cinder-scheduler-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.276839 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20bd9f99-2697-4e95-91cb-18d490d4b19f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"20bd9f99-2697-4e95-91cb-18d490d4b19f\") " pod="openstack/cinder-scheduler-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.276887 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk9tc\" (UniqueName: \"kubernetes.io/projected/20bd9f99-2697-4e95-91cb-18d490d4b19f-kube-api-access-kk9tc\") pod \"cinder-scheduler-0\" (UID: \"20bd9f99-2697-4e95-91cb-18d490d4b19f\") " pod="openstack/cinder-scheduler-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.276966 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28-config\") pod \"dnsmasq-dns-b895b5785-lptqz\" (UID: \"2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28\") " pod="openstack/dnsmasq-dns-b895b5785-lptqz" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.277045 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28-dns-swift-storage-0\") pod \"dnsmasq-dns-b895b5785-lptqz\" (UID: \"2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28\") " pod="openstack/dnsmasq-dns-b895b5785-lptqz" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.277074 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f9tw\" (UniqueName: \"kubernetes.io/projected/2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28-kube-api-access-5f9tw\") pod \"dnsmasq-dns-b895b5785-lptqz\" (UID: \"2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28\") " pod="openstack/dnsmasq-dns-b895b5785-lptqz" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.277144 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20bd9f99-2697-4e95-91cb-18d490d4b19f-scripts\") pod \"cinder-scheduler-0\" (UID: \"20bd9f99-2697-4e95-91cb-18d490d4b19f\") " pod="openstack/cinder-scheduler-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.277167 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28-dns-svc\") pod \"dnsmasq-dns-b895b5785-lptqz\" (UID: \"2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28\") " pod="openstack/dnsmasq-dns-b895b5785-lptqz" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.277209 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28-ovsdbserver-nb\") pod \"dnsmasq-dns-b895b5785-lptqz\" (UID: \"2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28\") " pod="openstack/dnsmasq-dns-b895b5785-lptqz" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.277268 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/20bd9f99-2697-4e95-91cb-18d490d4b19f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"20bd9f99-2697-4e95-91cb-18d490d4b19f\") " pod="openstack/cinder-scheduler-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.277289 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20bd9f99-2697-4e95-91cb-18d490d4b19f-config-data\") pod \"cinder-scheduler-0\" (UID: \"20bd9f99-2697-4e95-91cb-18d490d4b19f\") " pod="openstack/cinder-scheduler-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.278422 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/20bd9f99-2697-4e95-91cb-18d490d4b19f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"20bd9f99-2697-4e95-91cb-18d490d4b19f\") " pod="openstack/cinder-scheduler-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.283309 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20bd9f99-2697-4e95-91cb-18d490d4b19f-config-data\") pod \"cinder-scheduler-0\" (UID: \"20bd9f99-2697-4e95-91cb-18d490d4b19f\") " pod="openstack/cinder-scheduler-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.286137 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20bd9f99-2697-4e95-91cb-18d490d4b19f-scripts\") pod \"cinder-scheduler-0\" (UID: \"20bd9f99-2697-4e95-91cb-18d490d4b19f\") " pod="openstack/cinder-scheduler-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.286523 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20bd9f99-2697-4e95-91cb-18d490d4b19f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"20bd9f99-2697-4e95-91cb-18d490d4b19f\") " pod="openstack/cinder-scheduler-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.291185 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20bd9f99-2697-4e95-91cb-18d490d4b19f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"20bd9f99-2697-4e95-91cb-18d490d4b19f\") " pod="openstack/cinder-scheduler-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.307092 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk9tc\" (UniqueName: \"kubernetes.io/projected/20bd9f99-2697-4e95-91cb-18d490d4b19f-kube-api-access-kk9tc\") pod \"cinder-scheduler-0\" (UID: \"20bd9f99-2697-4e95-91cb-18d490d4b19f\") " pod="openstack/cinder-scheduler-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 
17:10:23.369825 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.378630 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28-ovsdbserver-sb\") pod \"dnsmasq-dns-b895b5785-lptqz\" (UID: \"2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28\") " pod="openstack/dnsmasq-dns-b895b5785-lptqz" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.379453 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28-ovsdbserver-sb\") pod \"dnsmasq-dns-b895b5785-lptqz\" (UID: \"2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28\") " pod="openstack/dnsmasq-dns-b895b5785-lptqz" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.379628 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28-config\") pod \"dnsmasq-dns-b895b5785-lptqz\" (UID: \"2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28\") " pod="openstack/dnsmasq-dns-b895b5785-lptqz" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.380211 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28-config\") pod \"dnsmasq-dns-b895b5785-lptqz\" (UID: \"2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28\") " pod="openstack/dnsmasq-dns-b895b5785-lptqz" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.380863 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28-dns-swift-storage-0\") pod \"dnsmasq-dns-b895b5785-lptqz\" (UID: \"2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28\") " pod="openstack/dnsmasq-dns-b895b5785-lptqz" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.381438 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f9tw\" (UniqueName: \"kubernetes.io/projected/2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28-kube-api-access-5f9tw\") pod \"dnsmasq-dns-b895b5785-lptqz\" (UID: \"2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28\") " pod="openstack/dnsmasq-dns-b895b5785-lptqz" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.381379 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28-dns-swift-storage-0\") pod \"dnsmasq-dns-b895b5785-lptqz\" (UID: \"2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28\") " pod="openstack/dnsmasq-dns-b895b5785-lptqz" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.381520 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28-dns-svc\") pod \"dnsmasq-dns-b895b5785-lptqz\" (UID: \"2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28\") " pod="openstack/dnsmasq-dns-b895b5785-lptqz" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.381824 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28-ovsdbserver-nb\") pod \"dnsmasq-dns-b895b5785-lptqz\" (UID: \"2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28\") " 
pod="openstack/dnsmasq-dns-b895b5785-lptqz" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.382350 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28-dns-svc\") pod \"dnsmasq-dns-b895b5785-lptqz\" (UID: \"2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28\") " pod="openstack/dnsmasq-dns-b895b5785-lptqz" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.400664 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.418626 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28-ovsdbserver-nb\") pod \"dnsmasq-dns-b895b5785-lptqz\" (UID: \"2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28\") " pod="openstack/dnsmasq-dns-b895b5785-lptqz" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.428401 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f9tw\" (UniqueName: \"kubernetes.io/projected/2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28-kube-api-access-5f9tw\") pod \"dnsmasq-dns-b895b5785-lptqz\" (UID: \"2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28\") " pod="openstack/dnsmasq-dns-b895b5785-lptqz" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.467500 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b895b5785-lptqz" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.471055 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.509541 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.511168 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.522024 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/239b6b42-d315-47cf-8a72-e12639318fe7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"239b6b42-d315-47cf-8a72-e12639318fe7\") " pod="openstack/cinder-api-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.522114 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm5lp\" (UniqueName: \"kubernetes.io/projected/239b6b42-d315-47cf-8a72-e12639318fe7-kube-api-access-nm5lp\") pod \"cinder-api-0\" (UID: \"239b6b42-d315-47cf-8a72-e12639318fe7\") " pod="openstack/cinder-api-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.522150 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/239b6b42-d315-47cf-8a72-e12639318fe7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"239b6b42-d315-47cf-8a72-e12639318fe7\") " pod="openstack/cinder-api-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.522183 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/239b6b42-d315-47cf-8a72-e12639318fe7-config-data-custom\") pod \"cinder-api-0\" (UID: \"239b6b42-d315-47cf-8a72-e12639318fe7\") " pod="openstack/cinder-api-0" Jan 23 17:10:23 crc 
kubenswrapper[4606]: I0123 17:10:23.522233 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/239b6b42-d315-47cf-8a72-e12639318fe7-config-data\") pod \"cinder-api-0\" (UID: \"239b6b42-d315-47cf-8a72-e12639318fe7\") " pod="openstack/cinder-api-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.522325 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/239b6b42-d315-47cf-8a72-e12639318fe7-scripts\") pod \"cinder-api-0\" (UID: \"239b6b42-d315-47cf-8a72-e12639318fe7\") " pod="openstack/cinder-api-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.522348 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/239b6b42-d315-47cf-8a72-e12639318fe7-logs\") pod \"cinder-api-0\" (UID: \"239b6b42-d315-47cf-8a72-e12639318fe7\") " pod="openstack/cinder-api-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.646426 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm5lp\" (UniqueName: \"kubernetes.io/projected/239b6b42-d315-47cf-8a72-e12639318fe7-kube-api-access-nm5lp\") pod \"cinder-api-0\" (UID: \"239b6b42-d315-47cf-8a72-e12639318fe7\") " pod="openstack/cinder-api-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.646795 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/239b6b42-d315-47cf-8a72-e12639318fe7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"239b6b42-d315-47cf-8a72-e12639318fe7\") " pod="openstack/cinder-api-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.646861 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/239b6b42-d315-47cf-8a72-e12639318fe7-config-data-custom\") pod \"cinder-api-0\" (UID: \"239b6b42-d315-47cf-8a72-e12639318fe7\") " pod="openstack/cinder-api-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.646895 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/239b6b42-d315-47cf-8a72-e12639318fe7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"239b6b42-d315-47cf-8a72-e12639318fe7\") " pod="openstack/cinder-api-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.649358 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/239b6b42-d315-47cf-8a72-e12639318fe7-config-data\") pod \"cinder-api-0\" (UID: \"239b6b42-d315-47cf-8a72-e12639318fe7\") " pod="openstack/cinder-api-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.649561 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/239b6b42-d315-47cf-8a72-e12639318fe7-scripts\") pod \"cinder-api-0\" (UID: \"239b6b42-d315-47cf-8a72-e12639318fe7\") " pod="openstack/cinder-api-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.649599 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/239b6b42-d315-47cf-8a72-e12639318fe7-logs\") pod \"cinder-api-0\" (UID: \"239b6b42-d315-47cf-8a72-e12639318fe7\") " pod="openstack/cinder-api-0" Jan 23 17:10:23 crc 
kubenswrapper[4606]: I0123 17:10:23.649769 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/239b6b42-d315-47cf-8a72-e12639318fe7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"239b6b42-d315-47cf-8a72-e12639318fe7\") " pod="openstack/cinder-api-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.651782 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/239b6b42-d315-47cf-8a72-e12639318fe7-logs\") pod \"cinder-api-0\" (UID: \"239b6b42-d315-47cf-8a72-e12639318fe7\") " pod="openstack/cinder-api-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.655128 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/239b6b42-d315-47cf-8a72-e12639318fe7-config-data\") pod \"cinder-api-0\" (UID: \"239b6b42-d315-47cf-8a72-e12639318fe7\") " pod="openstack/cinder-api-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.656814 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/239b6b42-d315-47cf-8a72-e12639318fe7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"239b6b42-d315-47cf-8a72-e12639318fe7\") " pod="openstack/cinder-api-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.657704 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/239b6b42-d315-47cf-8a72-e12639318fe7-config-data-custom\") pod \"cinder-api-0\" (UID: \"239b6b42-d315-47cf-8a72-e12639318fe7\") " pod="openstack/cinder-api-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.671181 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/239b6b42-d315-47cf-8a72-e12639318fe7-scripts\") pod \"cinder-api-0\" (UID: \"239b6b42-d315-47cf-8a72-e12639318fe7\") " pod="openstack/cinder-api-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.688123 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm5lp\" (UniqueName: \"kubernetes.io/projected/239b6b42-d315-47cf-8a72-e12639318fe7-kube-api-access-nm5lp\") pod \"cinder-api-0\" (UID: \"239b6b42-d315-47cf-8a72-e12639318fe7\") " pod="openstack/cinder-api-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.772422 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.772545 4606 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.844333 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.864127 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.986408 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 23 17:10:23 crc kubenswrapper[4606]: I0123 17:10:23.986809 4606 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 17:10:24 crc kubenswrapper[4606]: I0123 17:10:24.144336 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b895b5785-lptqz"] Jan 23 17:10:24 crc kubenswrapper[4606]: I0123 17:10:24.312373 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-lzz5x" Jan 23 17:10:24 crc kubenswrapper[4606]: I0123 17:10:24.367932 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cb98288-9db7-4a8f-9b1d-1d57a547062b-combined-ca-bundle\") pod \"4cb98288-9db7-4a8f-9b1d-1d57a547062b\" (UID: \"4cb98288-9db7-4a8f-9b1d-1d57a547062b\") " Jan 23 17:10:24 crc kubenswrapper[4606]: I0123 17:10:24.368227 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4cb98288-9db7-4a8f-9b1d-1d57a547062b-db-sync-config-data\") pod \"4cb98288-9db7-4a8f-9b1d-1d57a547062b\" (UID: \"4cb98288-9db7-4a8f-9b1d-1d57a547062b\") " Jan 23 17:10:24 crc kubenswrapper[4606]: I0123 17:10:24.368255 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sf67b\" (UniqueName: \"kubernetes.io/projected/4cb98288-9db7-4a8f-9b1d-1d57a547062b-kube-api-access-sf67b\") pod \"4cb98288-9db7-4a8f-9b1d-1d57a547062b\" (UID: \"4cb98288-9db7-4a8f-9b1d-1d57a547062b\") " Jan 23 17:10:24 crc kubenswrapper[4606]: I0123 17:10:24.372391 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cb98288-9db7-4a8f-9b1d-1d57a547062b-kube-api-access-sf67b" (OuterVolumeSpecName: "kube-api-access-sf67b") pod "4cb98288-9db7-4a8f-9b1d-1d57a547062b" (UID: "4cb98288-9db7-4a8f-9b1d-1d57a547062b"). InnerVolumeSpecName "kube-api-access-sf67b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:10:24 crc kubenswrapper[4606]: I0123 17:10:24.372435 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cb98288-9db7-4a8f-9b1d-1d57a547062b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4cb98288-9db7-4a8f-9b1d-1d57a547062b" (UID: "4cb98288-9db7-4a8f-9b1d-1d57a547062b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:10:24 crc kubenswrapper[4606]: I0123 17:10:24.417650 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cb98288-9db7-4a8f-9b1d-1d57a547062b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4cb98288-9db7-4a8f-9b1d-1d57a547062b" (UID: "4cb98288-9db7-4a8f-9b1d-1d57a547062b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:10:24 crc kubenswrapper[4606]: I0123 17:10:24.433002 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 23 17:10:24 crc kubenswrapper[4606]: I0123 17:10:24.473542 4606 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cb98288-9db7-4a8f-9b1d-1d57a547062b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:24 crc kubenswrapper[4606]: I0123 17:10:24.473574 4606 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4cb98288-9db7-4a8f-9b1d-1d57a547062b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:24 crc kubenswrapper[4606]: I0123 17:10:24.473599 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sf67b\" (UniqueName: \"kubernetes.io/projected/4cb98288-9db7-4a8f-9b1d-1d57a547062b-kube-api-access-sf67b\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:24 crc kubenswrapper[4606]: I0123 17:10:24.501210 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 23 17:10:24 crc kubenswrapper[4606]: I0123 17:10:24.512556 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 23 17:10:24 crc kubenswrapper[4606]: I0123 17:10:24.724280 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"239b6b42-d315-47cf-8a72-e12639318fe7","Type":"ContainerStarted","Data":"39ae4c98dc41f46f8bcfcaa2d6a38036c82213043e79f2423741394b296bf62d"} Jan 23 17:10:24 crc kubenswrapper[4606]: I0123 17:10:24.735087 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"20bd9f99-2697-4e95-91cb-18d490d4b19f","Type":"ContainerStarted","Data":"8bda784d3c91b6499186b6a571dd90ce285b42712bdbb3a98f1c444f40416d2d"} Jan 23 17:10:24 crc kubenswrapper[4606]: I0123 17:10:24.746417 4606 generic.go:334] "Generic (PLEG): container finished" podID="2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28" containerID="c35838b491a5a97be57a4fbd084e504c32b358d404f107514078105606c60197" exitCode=0 Jan 23 17:10:24 crc kubenswrapper[4606]: I0123 17:10:24.747353 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b895b5785-lptqz" event={"ID":"2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28","Type":"ContainerDied","Data":"c35838b491a5a97be57a4fbd084e504c32b358d404f107514078105606c60197"} Jan 23 17:10:24 crc kubenswrapper[4606]: I0123 17:10:24.747414 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b895b5785-lptqz" event={"ID":"2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28","Type":"ContainerStarted","Data":"687d4a573f84ad8d29aa13038e6ac1029eada0ac0567fdbeefd010a9358d70d0"} Jan 23 17:10:24 crc kubenswrapper[4606]: I0123 17:10:24.760413 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-lzz5x" Jan 23 17:10:24 crc kubenswrapper[4606]: I0123 17:10:24.763102 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-lzz5x" event={"ID":"4cb98288-9db7-4a8f-9b1d-1d57a547062b","Type":"ContainerDied","Data":"12741f81ab145166d18d791235189ede7e65f58afda33e2021c58822067741ee"} Jan 23 17:10:24 crc kubenswrapper[4606]: I0123 17:10:24.763149 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12741f81ab145166d18d791235189ede7e65f58afda33e2021c58822067741ee" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.154874 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6d46dd5979-gmzrf"] Jan 23 17:10:25 crc kubenswrapper[4606]: E0123 17:10:25.155625 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cb98288-9db7-4a8f-9b1d-1d57a547062b" containerName="barbican-db-sync" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.155641 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cb98288-9db7-4a8f-9b1d-1d57a547062b" containerName="barbican-db-sync" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.155857 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cb98288-9db7-4a8f-9b1d-1d57a547062b" containerName="barbican-db-sync" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.156988 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6d46dd5979-gmzrf" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.172553 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.177177 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.182865 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5kppl" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.185662 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5ccb8fb66b-52f57"] Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.187142 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5ccb8fb66b-52f57" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.211296 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.290023 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6d46dd5979-gmzrf"] Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.314580 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40720f96-b314-4ba9-8f88-b8b4c33ccb5e-config-data-custom\") pod \"barbican-worker-6d46dd5979-gmzrf\" (UID: \"40720f96-b314-4ba9-8f88-b8b4c33ccb5e\") " pod="openstack/barbican-worker-6d46dd5979-gmzrf" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.314660 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd7225f-f2b9-4e1c-81ae-eeb9617963a2-combined-ca-bundle\") pod \"barbican-keystone-listener-5ccb8fb66b-52f57\" (UID: \"efd7225f-f2b9-4e1c-81ae-eeb9617963a2\") " pod="openstack/barbican-keystone-listener-5ccb8fb66b-52f57" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.314692 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40720f96-b314-4ba9-8f88-b8b4c33ccb5e-config-data\") pod \"barbican-worker-6d46dd5979-gmzrf\" (UID: \"40720f96-b314-4ba9-8f88-b8b4c33ccb5e\") " pod="openstack/barbican-worker-6d46dd5979-gmzrf" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.314737 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40720f96-b314-4ba9-8f88-b8b4c33ccb5e-logs\") pod \"barbican-worker-6d46dd5979-gmzrf\" (UID: \"40720f96-b314-4ba9-8f88-b8b4c33ccb5e\") " pod="openstack/barbican-worker-6d46dd5979-gmzrf" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.314901 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgmvl\" (UniqueName: \"kubernetes.io/projected/efd7225f-f2b9-4e1c-81ae-eeb9617963a2-kube-api-access-cgmvl\") pod \"barbican-keystone-listener-5ccb8fb66b-52f57\" (UID: \"efd7225f-f2b9-4e1c-81ae-eeb9617963a2\") " pod="openstack/barbican-keystone-listener-5ccb8fb66b-52f57" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.315122 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40720f96-b314-4ba9-8f88-b8b4c33ccb5e-combined-ca-bundle\") pod \"barbican-worker-6d46dd5979-gmzrf\" (UID: \"40720f96-b314-4ba9-8f88-b8b4c33ccb5e\") " pod="openstack/barbican-worker-6d46dd5979-gmzrf" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.315210 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efd7225f-f2b9-4e1c-81ae-eeb9617963a2-logs\") pod \"barbican-keystone-listener-5ccb8fb66b-52f57\" (UID: \"efd7225f-f2b9-4e1c-81ae-eeb9617963a2\") " pod="openstack/barbican-keystone-listener-5ccb8fb66b-52f57" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.315255 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/efd7225f-f2b9-4e1c-81ae-eeb9617963a2-config-data-custom\") pod \"barbican-keystone-listener-5ccb8fb66b-52f57\" (UID: \"efd7225f-f2b9-4e1c-81ae-eeb9617963a2\") " pod="openstack/barbican-keystone-listener-5ccb8fb66b-52f57" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.315276 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efd7225f-f2b9-4e1c-81ae-eeb9617963a2-config-data\") pod \"barbican-keystone-listener-5ccb8fb66b-52f57\" (UID: \"efd7225f-f2b9-4e1c-81ae-eeb9617963a2\") " pod="openstack/barbican-keystone-listener-5ccb8fb66b-52f57" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.315320 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5jxz\" (UniqueName: \"kubernetes.io/projected/40720f96-b314-4ba9-8f88-b8b4c33ccb5e-kube-api-access-l5jxz\") pod \"barbican-worker-6d46dd5979-gmzrf\" (UID: \"40720f96-b314-4ba9-8f88-b8b4c33ccb5e\") " pod="openstack/barbican-worker-6d46dd5979-gmzrf" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.346651 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5ccb8fb66b-52f57"] Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.420192 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40720f96-b314-4ba9-8f88-b8b4c33ccb5e-combined-ca-bundle\") pod \"barbican-worker-6d46dd5979-gmzrf\" (UID: \"40720f96-b314-4ba9-8f88-b8b4c33ccb5e\") " pod="openstack/barbican-worker-6d46dd5979-gmzrf" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.420245 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efd7225f-f2b9-4e1c-81ae-eeb9617963a2-logs\") pod \"barbican-keystone-listener-5ccb8fb66b-52f57\" (UID: \"efd7225f-f2b9-4e1c-81ae-eeb9617963a2\") " pod="openstack/barbican-keystone-listener-5ccb8fb66b-52f57" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.420266 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/efd7225f-f2b9-4e1c-81ae-eeb9617963a2-config-data-custom\") pod \"barbican-keystone-listener-5ccb8fb66b-52f57\" (UID: \"efd7225f-f2b9-4e1c-81ae-eeb9617963a2\") " pod="openstack/barbican-keystone-listener-5ccb8fb66b-52f57" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.420282 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efd7225f-f2b9-4e1c-81ae-eeb9617963a2-config-data\") pod \"barbican-keystone-listener-5ccb8fb66b-52f57\" (UID: \"efd7225f-f2b9-4e1c-81ae-eeb9617963a2\") " pod="openstack/barbican-keystone-listener-5ccb8fb66b-52f57" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.420300 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5jxz\" (UniqueName: \"kubernetes.io/projected/40720f96-b314-4ba9-8f88-b8b4c33ccb5e-kube-api-access-l5jxz\") pod \"barbican-worker-6d46dd5979-gmzrf\" (UID: \"40720f96-b314-4ba9-8f88-b8b4c33ccb5e\") " pod="openstack/barbican-worker-6d46dd5979-gmzrf" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.420340 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/40720f96-b314-4ba9-8f88-b8b4c33ccb5e-config-data-custom\") pod \"barbican-worker-6d46dd5979-gmzrf\" (UID: \"40720f96-b314-4ba9-8f88-b8b4c33ccb5e\") " pod="openstack/barbican-worker-6d46dd5979-gmzrf" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.420365 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd7225f-f2b9-4e1c-81ae-eeb9617963a2-combined-ca-bundle\") pod \"barbican-keystone-listener-5ccb8fb66b-52f57\" (UID: \"efd7225f-f2b9-4e1c-81ae-eeb9617963a2\") " pod="openstack/barbican-keystone-listener-5ccb8fb66b-52f57" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.420391 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40720f96-b314-4ba9-8f88-b8b4c33ccb5e-config-data\") pod \"barbican-worker-6d46dd5979-gmzrf\" (UID: \"40720f96-b314-4ba9-8f88-b8b4c33ccb5e\") " pod="openstack/barbican-worker-6d46dd5979-gmzrf" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.420427 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40720f96-b314-4ba9-8f88-b8b4c33ccb5e-logs\") pod \"barbican-worker-6d46dd5979-gmzrf\" (UID: \"40720f96-b314-4ba9-8f88-b8b4c33ccb5e\") " pod="openstack/barbican-worker-6d46dd5979-gmzrf" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.420457 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgmvl\" (UniqueName: \"kubernetes.io/projected/efd7225f-f2b9-4e1c-81ae-eeb9617963a2-kube-api-access-cgmvl\") pod \"barbican-keystone-listener-5ccb8fb66b-52f57\" (UID: \"efd7225f-f2b9-4e1c-81ae-eeb9617963a2\") " pod="openstack/barbican-keystone-listener-5ccb8fb66b-52f57" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.427398 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efd7225f-f2b9-4e1c-81ae-eeb9617963a2-logs\") pod \"barbican-keystone-listener-5ccb8fb66b-52f57\" (UID: \"efd7225f-f2b9-4e1c-81ae-eeb9617963a2\") " pod="openstack/barbican-keystone-listener-5ccb8fb66b-52f57" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.440739 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b895b5785-lptqz"] Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.450710 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40720f96-b314-4ba9-8f88-b8b4c33ccb5e-logs\") pod \"barbican-worker-6d46dd5979-gmzrf\" (UID: \"40720f96-b314-4ba9-8f88-b8b4c33ccb5e\") " pod="openstack/barbican-worker-6d46dd5979-gmzrf" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.467150 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40720f96-b314-4ba9-8f88-b8b4c33ccb5e-config-data-custom\") pod \"barbican-worker-6d46dd5979-gmzrf\" (UID: \"40720f96-b314-4ba9-8f88-b8b4c33ccb5e\") " pod="openstack/barbican-worker-6d46dd5979-gmzrf" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.467230 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40720f96-b314-4ba9-8f88-b8b4c33ccb5e-combined-ca-bundle\") pod \"barbican-worker-6d46dd5979-gmzrf\" (UID: \"40720f96-b314-4ba9-8f88-b8b4c33ccb5e\") " pod="openstack/barbican-worker-6d46dd5979-gmzrf" 
Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.467361 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/efd7225f-f2b9-4e1c-81ae-eeb9617963a2-config-data-custom\") pod \"barbican-keystone-listener-5ccb8fb66b-52f57\" (UID: \"efd7225f-f2b9-4e1c-81ae-eeb9617963a2\") " pod="openstack/barbican-keystone-listener-5ccb8fb66b-52f57" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.469207 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40720f96-b314-4ba9-8f88-b8b4c33ccb5e-config-data\") pod \"barbican-worker-6d46dd5979-gmzrf\" (UID: \"40720f96-b314-4ba9-8f88-b8b4c33ccb5e\") " pod="openstack/barbican-worker-6d46dd5979-gmzrf" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.471241 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efd7225f-f2b9-4e1c-81ae-eeb9617963a2-config-data\") pod \"barbican-keystone-listener-5ccb8fb66b-52f57\" (UID: \"efd7225f-f2b9-4e1c-81ae-eeb9617963a2\") " pod="openstack/barbican-keystone-listener-5ccb8fb66b-52f57" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.486227 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd7225f-f2b9-4e1c-81ae-eeb9617963a2-combined-ca-bundle\") pod \"barbican-keystone-listener-5ccb8fb66b-52f57\" (UID: \"efd7225f-f2b9-4e1c-81ae-eeb9617963a2\") " pod="openstack/barbican-keystone-listener-5ccb8fb66b-52f57" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.532337 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgmvl\" (UniqueName: \"kubernetes.io/projected/efd7225f-f2b9-4e1c-81ae-eeb9617963a2-kube-api-access-cgmvl\") pod \"barbican-keystone-listener-5ccb8fb66b-52f57\" (UID: \"efd7225f-f2b9-4e1c-81ae-eeb9617963a2\") " pod="openstack/barbican-keystone-listener-5ccb8fb66b-52f57" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.544296 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-j54ls"] Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.545981 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-j54ls" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.546131 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5jxz\" (UniqueName: \"kubernetes.io/projected/40720f96-b314-4ba9-8f88-b8b4c33ccb5e-kube-api-access-l5jxz\") pod \"barbican-worker-6d46dd5979-gmzrf\" (UID: \"40720f96-b314-4ba9-8f88-b8b4c33ccb5e\") " pod="openstack/barbican-worker-6d46dd5979-gmzrf" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.611455 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6d46dd5979-gmzrf" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.656254 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd37ba52-7ce1-4151-8764-14a60ee315bc-config\") pod \"dnsmasq-dns-5c9776ccc5-j54ls\" (UID: \"bd37ba52-7ce1-4151-8764-14a60ee315bc\") " pod="openstack/dnsmasq-dns-5c9776ccc5-j54ls" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.656328 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcnbg\" (UniqueName: \"kubernetes.io/projected/bd37ba52-7ce1-4151-8764-14a60ee315bc-kube-api-access-zcnbg\") pod \"dnsmasq-dns-5c9776ccc5-j54ls\" (UID: \"bd37ba52-7ce1-4151-8764-14a60ee315bc\") " pod="openstack/dnsmasq-dns-5c9776ccc5-j54ls" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.656354 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd37ba52-7ce1-4151-8764-14a60ee315bc-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-j54ls\" (UID: \"bd37ba52-7ce1-4151-8764-14a60ee315bc\") " pod="openstack/dnsmasq-dns-5c9776ccc5-j54ls" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.656392 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd37ba52-7ce1-4151-8764-14a60ee315bc-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-j54ls\" (UID: \"bd37ba52-7ce1-4151-8764-14a60ee315bc\") " pod="openstack/dnsmasq-dns-5c9776ccc5-j54ls" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.656417 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd37ba52-7ce1-4151-8764-14a60ee315bc-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-j54ls\" (UID: \"bd37ba52-7ce1-4151-8764-14a60ee315bc\") " pod="openstack/dnsmasq-dns-5c9776ccc5-j54ls" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.656435 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd37ba52-7ce1-4151-8764-14a60ee315bc-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-j54ls\" (UID: \"bd37ba52-7ce1-4151-8764-14a60ee315bc\") " pod="openstack/dnsmasq-dns-5c9776ccc5-j54ls" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.700785 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-j54ls"] Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.703458 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5ccb8fb66b-52f57" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.762388 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd37ba52-7ce1-4151-8764-14a60ee315bc-config\") pod \"dnsmasq-dns-5c9776ccc5-j54ls\" (UID: \"bd37ba52-7ce1-4151-8764-14a60ee315bc\") " pod="openstack/dnsmasq-dns-5c9776ccc5-j54ls" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.762466 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcnbg\" (UniqueName: \"kubernetes.io/projected/bd37ba52-7ce1-4151-8764-14a60ee315bc-kube-api-access-zcnbg\") pod \"dnsmasq-dns-5c9776ccc5-j54ls\" (UID: \"bd37ba52-7ce1-4151-8764-14a60ee315bc\") " pod="openstack/dnsmasq-dns-5c9776ccc5-j54ls" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.762493 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd37ba52-7ce1-4151-8764-14a60ee315bc-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-j54ls\" (UID: \"bd37ba52-7ce1-4151-8764-14a60ee315bc\") " pod="openstack/dnsmasq-dns-5c9776ccc5-j54ls" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.762537 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd37ba52-7ce1-4151-8764-14a60ee315bc-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-j54ls\" (UID: \"bd37ba52-7ce1-4151-8764-14a60ee315bc\") " pod="openstack/dnsmasq-dns-5c9776ccc5-j54ls" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.762571 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd37ba52-7ce1-4151-8764-14a60ee315bc-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-j54ls\" (UID: \"bd37ba52-7ce1-4151-8764-14a60ee315bc\") " pod="openstack/dnsmasq-dns-5c9776ccc5-j54ls" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.762619 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd37ba52-7ce1-4151-8764-14a60ee315bc-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-j54ls\" (UID: \"bd37ba52-7ce1-4151-8764-14a60ee315bc\") " pod="openstack/dnsmasq-dns-5c9776ccc5-j54ls" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.763543 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd37ba52-7ce1-4151-8764-14a60ee315bc-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-j54ls\" (UID: \"bd37ba52-7ce1-4151-8764-14a60ee315bc\") " pod="openstack/dnsmasq-dns-5c9776ccc5-j54ls" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.764126 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd37ba52-7ce1-4151-8764-14a60ee315bc-config\") pod \"dnsmasq-dns-5c9776ccc5-j54ls\" (UID: \"bd37ba52-7ce1-4151-8764-14a60ee315bc\") " pod="openstack/dnsmasq-dns-5c9776ccc5-j54ls" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.765087 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd37ba52-7ce1-4151-8764-14a60ee315bc-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-j54ls\" (UID: \"bd37ba52-7ce1-4151-8764-14a60ee315bc\") " pod="openstack/dnsmasq-dns-5c9776ccc5-j54ls" Jan 23 17:10:25 crc 
kubenswrapper[4606]: I0123 17:10:25.780482 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd37ba52-7ce1-4151-8764-14a60ee315bc-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-j54ls\" (UID: \"bd37ba52-7ce1-4151-8764-14a60ee315bc\") " pod="openstack/dnsmasq-dns-5c9776ccc5-j54ls" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.789932 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd37ba52-7ce1-4151-8764-14a60ee315bc-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-j54ls\" (UID: \"bd37ba52-7ce1-4151-8764-14a60ee315bc\") " pod="openstack/dnsmasq-dns-5c9776ccc5-j54ls" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.829696 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcnbg\" (UniqueName: \"kubernetes.io/projected/bd37ba52-7ce1-4151-8764-14a60ee315bc-kube-api-access-zcnbg\") pod \"dnsmasq-dns-5c9776ccc5-j54ls\" (UID: \"bd37ba52-7ce1-4151-8764-14a60ee315bc\") " pod="openstack/dnsmasq-dns-5c9776ccc5-j54ls" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.914309 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-j54ls" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.915256 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5c8f9f6b46-pqjgt"] Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.916782 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5c8f9f6b46-pqjgt" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.928497 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 23 17:10:25 crc kubenswrapper[4606]: I0123 17:10:25.961506 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5c8f9f6b46-pqjgt"] Jan 23 17:10:26 crc kubenswrapper[4606]: I0123 17:10:26.074804 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/342b1b35-f2a3-48a0-9342-23f3d0fa9907-config-data\") pod \"barbican-api-5c8f9f6b46-pqjgt\" (UID: \"342b1b35-f2a3-48a0-9342-23f3d0fa9907\") " pod="openstack/barbican-api-5c8f9f6b46-pqjgt" Jan 23 17:10:26 crc kubenswrapper[4606]: I0123 17:10:26.075138 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzzlj\" (UniqueName: \"kubernetes.io/projected/342b1b35-f2a3-48a0-9342-23f3d0fa9907-kube-api-access-dzzlj\") pod \"barbican-api-5c8f9f6b46-pqjgt\" (UID: \"342b1b35-f2a3-48a0-9342-23f3d0fa9907\") " pod="openstack/barbican-api-5c8f9f6b46-pqjgt" Jan 23 17:10:26 crc kubenswrapper[4606]: I0123 17:10:26.075158 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/342b1b35-f2a3-48a0-9342-23f3d0fa9907-logs\") pod \"barbican-api-5c8f9f6b46-pqjgt\" (UID: \"342b1b35-f2a3-48a0-9342-23f3d0fa9907\") " pod="openstack/barbican-api-5c8f9f6b46-pqjgt" Jan 23 17:10:26 crc kubenswrapper[4606]: I0123 17:10:26.075201 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/342b1b35-f2a3-48a0-9342-23f3d0fa9907-config-data-custom\") pod \"barbican-api-5c8f9f6b46-pqjgt\" (UID: 
\"342b1b35-f2a3-48a0-9342-23f3d0fa9907\") " pod="openstack/barbican-api-5c8f9f6b46-pqjgt" Jan 23 17:10:26 crc kubenswrapper[4606]: I0123 17:10:26.075258 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/342b1b35-f2a3-48a0-9342-23f3d0fa9907-combined-ca-bundle\") pod \"barbican-api-5c8f9f6b46-pqjgt\" (UID: \"342b1b35-f2a3-48a0-9342-23f3d0fa9907\") " pod="openstack/barbican-api-5c8f9f6b46-pqjgt" Jan 23 17:10:26 crc kubenswrapper[4606]: I0123 17:10:26.176971 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/342b1b35-f2a3-48a0-9342-23f3d0fa9907-config-data\") pod \"barbican-api-5c8f9f6b46-pqjgt\" (UID: \"342b1b35-f2a3-48a0-9342-23f3d0fa9907\") " pod="openstack/barbican-api-5c8f9f6b46-pqjgt" Jan 23 17:10:26 crc kubenswrapper[4606]: I0123 17:10:26.177073 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzzlj\" (UniqueName: \"kubernetes.io/projected/342b1b35-f2a3-48a0-9342-23f3d0fa9907-kube-api-access-dzzlj\") pod \"barbican-api-5c8f9f6b46-pqjgt\" (UID: \"342b1b35-f2a3-48a0-9342-23f3d0fa9907\") " pod="openstack/barbican-api-5c8f9f6b46-pqjgt" Jan 23 17:10:26 crc kubenswrapper[4606]: I0123 17:10:26.177094 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/342b1b35-f2a3-48a0-9342-23f3d0fa9907-logs\") pod \"barbican-api-5c8f9f6b46-pqjgt\" (UID: \"342b1b35-f2a3-48a0-9342-23f3d0fa9907\") " pod="openstack/barbican-api-5c8f9f6b46-pqjgt" Jan 23 17:10:26 crc kubenswrapper[4606]: I0123 17:10:26.177133 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/342b1b35-f2a3-48a0-9342-23f3d0fa9907-config-data-custom\") pod \"barbican-api-5c8f9f6b46-pqjgt\" (UID: \"342b1b35-f2a3-48a0-9342-23f3d0fa9907\") " pod="openstack/barbican-api-5c8f9f6b46-pqjgt" Jan 23 17:10:26 crc kubenswrapper[4606]: I0123 17:10:26.177183 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/342b1b35-f2a3-48a0-9342-23f3d0fa9907-combined-ca-bundle\") pod \"barbican-api-5c8f9f6b46-pqjgt\" (UID: \"342b1b35-f2a3-48a0-9342-23f3d0fa9907\") " pod="openstack/barbican-api-5c8f9f6b46-pqjgt" Jan 23 17:10:26 crc kubenswrapper[4606]: I0123 17:10:26.181941 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/342b1b35-f2a3-48a0-9342-23f3d0fa9907-logs\") pod \"barbican-api-5c8f9f6b46-pqjgt\" (UID: \"342b1b35-f2a3-48a0-9342-23f3d0fa9907\") " pod="openstack/barbican-api-5c8f9f6b46-pqjgt" Jan 23 17:10:26 crc kubenswrapper[4606]: I0123 17:10:26.193083 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/342b1b35-f2a3-48a0-9342-23f3d0fa9907-config-data-custom\") pod \"barbican-api-5c8f9f6b46-pqjgt\" (UID: \"342b1b35-f2a3-48a0-9342-23f3d0fa9907\") " pod="openstack/barbican-api-5c8f9f6b46-pqjgt" Jan 23 17:10:26 crc kubenswrapper[4606]: I0123 17:10:26.195402 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/342b1b35-f2a3-48a0-9342-23f3d0fa9907-config-data\") pod \"barbican-api-5c8f9f6b46-pqjgt\" (UID: \"342b1b35-f2a3-48a0-9342-23f3d0fa9907\") " 
pod="openstack/barbican-api-5c8f9f6b46-pqjgt" Jan 23 17:10:26 crc kubenswrapper[4606]: I0123 17:10:26.209924 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/342b1b35-f2a3-48a0-9342-23f3d0fa9907-combined-ca-bundle\") pod \"barbican-api-5c8f9f6b46-pqjgt\" (UID: \"342b1b35-f2a3-48a0-9342-23f3d0fa9907\") " pod="openstack/barbican-api-5c8f9f6b46-pqjgt" Jan 23 17:10:26 crc kubenswrapper[4606]: I0123 17:10:26.210457 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzzlj\" (UniqueName: \"kubernetes.io/projected/342b1b35-f2a3-48a0-9342-23f3d0fa9907-kube-api-access-dzzlj\") pod \"barbican-api-5c8f9f6b46-pqjgt\" (UID: \"342b1b35-f2a3-48a0-9342-23f3d0fa9907\") " pod="openstack/barbican-api-5c8f9f6b46-pqjgt" Jan 23 17:10:26 crc kubenswrapper[4606]: I0123 17:10:26.298191 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5c8f9f6b46-pqjgt" Jan 23 17:10:26 crc kubenswrapper[4606]: I0123 17:10:26.555205 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 23 17:10:26 crc kubenswrapper[4606]: I0123 17:10:26.620486 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5ccb8fb66b-52f57"] Jan 23 17:10:26 crc kubenswrapper[4606]: I0123 17:10:26.653629 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-j54ls"] Jan 23 17:10:26 crc kubenswrapper[4606]: I0123 17:10:26.913743 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-j54ls" event={"ID":"bd37ba52-7ce1-4151-8764-14a60ee315bc","Type":"ContainerStarted","Data":"d927c839ed3b302e67c388acc71a3bc4bdbfea8c123b72d19ada8402d1f8c25f"} Jan 23 17:10:26 crc kubenswrapper[4606]: I0123 17:10:26.916402 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b895b5785-lptqz" event={"ID":"2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28","Type":"ContainerStarted","Data":"1e3c396c09a5b1e724d45f2624ff44a9a91781e53ce91c2e79023072d61a2b13"} Jan 23 17:10:26 crc kubenswrapper[4606]: I0123 17:10:26.916541 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b895b5785-lptqz" podUID="2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28" containerName="dnsmasq-dns" containerID="cri-o://1e3c396c09a5b1e724d45f2624ff44a9a91781e53ce91c2e79023072d61a2b13" gracePeriod=10 Jan 23 17:10:26 crc kubenswrapper[4606]: I0123 17:10:26.917020 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b895b5785-lptqz" Jan 23 17:10:26 crc kubenswrapper[4606]: I0123 17:10:26.951798 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5ccb8fb66b-52f57" event={"ID":"efd7225f-f2b9-4e1c-81ae-eeb9617963a2","Type":"ContainerStarted","Data":"32278ba6b9003cffd8fb153e15012cbe54586b7f7d943c2b8147916b5407e53c"} Jan 23 17:10:26 crc kubenswrapper[4606]: I0123 17:10:26.958183 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b895b5785-lptqz" podStartSLOduration=3.958166215 podStartE2EDuration="3.958166215s" podCreationTimestamp="2026-01-23 17:10:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:10:26.951180449 +0000 UTC m=+1060.951372845" watchObservedRunningTime="2026-01-23 17:10:26.958166215 +0000 UTC 
m=+1060.958358621" Jan 23 17:10:26 crc kubenswrapper[4606]: I0123 17:10:26.965479 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"239b6b42-d315-47cf-8a72-e12639318fe7","Type":"ContainerStarted","Data":"fd89c0a4f7e677bd674d591f6849832be13c9ccf8e4292939beccbd739ec576f"} Jan 23 17:10:27 crc kubenswrapper[4606]: I0123 17:10:27.044511 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6d46dd5979-gmzrf"] Jan 23 17:10:27 crc kubenswrapper[4606]: W0123 17:10:27.109989 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40720f96_b314_4ba9_8f88_b8b4c33ccb5e.slice/crio-1f07b8416ebd38448ce7fab4bf472ae73ae62ac57853fbad157d5d206fdc1ed4 WatchSource:0}: Error finding container 1f07b8416ebd38448ce7fab4bf472ae73ae62ac57853fbad157d5d206fdc1ed4: Status 404 returned error can't find the container with id 1f07b8416ebd38448ce7fab4bf472ae73ae62ac57853fbad157d5d206fdc1ed4 Jan 23 17:10:27 crc kubenswrapper[4606]: I0123 17:10:27.242620 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5c8f9f6b46-pqjgt"] Jan 23 17:10:27 crc kubenswrapper[4606]: W0123 17:10:27.408320 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod342b1b35_f2a3_48a0_9342_23f3d0fa9907.slice/crio-7f3cd85a08251ed7aea585d240ae41e88f24853d50b9d16b9db406a4e68dfc21 WatchSource:0}: Error finding container 7f3cd85a08251ed7aea585d240ae41e88f24853d50b9d16b9db406a4e68dfc21: Status 404 returned error can't find the container with id 7f3cd85a08251ed7aea585d240ae41e88f24853d50b9d16b9db406a4e68dfc21 Jan 23 17:10:27 crc kubenswrapper[4606]: I0123 17:10:27.769864 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b895b5785-lptqz" Jan 23 17:10:27 crc kubenswrapper[4606]: I0123 17:10:27.830713 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28-ovsdbserver-nb\") pod \"2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28\" (UID: \"2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28\") " Jan 23 17:10:27 crc kubenswrapper[4606]: I0123 17:10:27.830811 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28-ovsdbserver-sb\") pod \"2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28\" (UID: \"2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28\") " Jan 23 17:10:27 crc kubenswrapper[4606]: I0123 17:10:27.831010 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5f9tw\" (UniqueName: \"kubernetes.io/projected/2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28-kube-api-access-5f9tw\") pod \"2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28\" (UID: \"2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28\") " Jan 23 17:10:27 crc kubenswrapper[4606]: I0123 17:10:27.831034 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28-config\") pod \"2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28\" (UID: \"2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28\") " Jan 23 17:10:27 crc kubenswrapper[4606]: I0123 17:10:27.831090 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28-dns-svc\") pod \"2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28\" (UID: \"2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28\") " Jan 23 17:10:27 crc kubenswrapper[4606]: I0123 17:10:27.831129 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28-dns-swift-storage-0\") pod \"2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28\" (UID: \"2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28\") " Jan 23 17:10:27 crc kubenswrapper[4606]: I0123 17:10:27.891104 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28-kube-api-access-5f9tw" (OuterVolumeSpecName: "kube-api-access-5f9tw") pod "2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28" (UID: "2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28"). InnerVolumeSpecName "kube-api-access-5f9tw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:10:27 crc kubenswrapper[4606]: I0123 17:10:27.934370 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5f9tw\" (UniqueName: \"kubernetes.io/projected/2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28-kube-api-access-5f9tw\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:28 crc kubenswrapper[4606]: I0123 17:10:28.012213 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"20bd9f99-2697-4e95-91cb-18d490d4b19f","Type":"ContainerStarted","Data":"543bfcaac3c348f86359b6cadacb4e251cdce8083b2bf54917046ece0a420471"} Jan 23 17:10:28 crc kubenswrapper[4606]: I0123 17:10:28.022122 4606 generic.go:334] "Generic (PLEG): container finished" podID="bd37ba52-7ce1-4151-8764-14a60ee315bc" containerID="65766a0319527437183a8ab55d7e18c87eea42963d9d31486fa6da8d44f611d7" exitCode=0 Jan 23 17:10:28 crc kubenswrapper[4606]: I0123 17:10:28.022183 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-j54ls" event={"ID":"bd37ba52-7ce1-4151-8764-14a60ee315bc","Type":"ContainerDied","Data":"65766a0319527437183a8ab55d7e18c87eea42963d9d31486fa6da8d44f611d7"} Jan 23 17:10:28 crc kubenswrapper[4606]: I0123 17:10:28.040450 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6d46dd5979-gmzrf" event={"ID":"40720f96-b314-4ba9-8f88-b8b4c33ccb5e","Type":"ContainerStarted","Data":"1f07b8416ebd38448ce7fab4bf472ae73ae62ac57853fbad157d5d206fdc1ed4"} Jan 23 17:10:28 crc kubenswrapper[4606]: I0123 17:10:28.042380 4606 generic.go:334] "Generic (PLEG): container finished" podID="2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28" containerID="1e3c396c09a5b1e724d45f2624ff44a9a91781e53ce91c2e79023072d61a2b13" exitCode=0 Jan 23 17:10:28 crc kubenswrapper[4606]: I0123 17:10:28.042465 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b895b5785-lptqz" event={"ID":"2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28","Type":"ContainerDied","Data":"1e3c396c09a5b1e724d45f2624ff44a9a91781e53ce91c2e79023072d61a2b13"} Jan 23 17:10:28 crc kubenswrapper[4606]: I0123 17:10:28.042501 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b895b5785-lptqz" event={"ID":"2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28","Type":"ContainerDied","Data":"687d4a573f84ad8d29aa13038e6ac1029eada0ac0567fdbeefd010a9358d70d0"} Jan 23 17:10:28 crc kubenswrapper[4606]: I0123 17:10:28.042535 4606 scope.go:117] "RemoveContainer" containerID="1e3c396c09a5b1e724d45f2624ff44a9a91781e53ce91c2e79023072d61a2b13" Jan 23 17:10:28 crc kubenswrapper[4606]: I0123 17:10:28.042713 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b895b5785-lptqz" Jan 23 17:10:28 crc kubenswrapper[4606]: I0123 17:10:28.096067 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5c8f9f6b46-pqjgt" event={"ID":"342b1b35-f2a3-48a0-9342-23f3d0fa9907","Type":"ContainerStarted","Data":"7f3cd85a08251ed7aea585d240ae41e88f24853d50b9d16b9db406a4e68dfc21"} Jan 23 17:10:28 crc kubenswrapper[4606]: I0123 17:10:28.334681 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28-config" (OuterVolumeSpecName: "config") pod "2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28" (UID: "2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:10:28 crc kubenswrapper[4606]: I0123 17:10:28.347975 4606 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28-config\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:28 crc kubenswrapper[4606]: I0123 17:10:28.382837 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28" (UID: "2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:10:28 crc kubenswrapper[4606]: I0123 17:10:28.451875 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b9775877b-mcc82" podUID="1dd13584-a1b9-4891-85a7-01afd519f746" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Jan 23 17:10:28 crc kubenswrapper[4606]: I0123 17:10:28.454993 4606 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:28 crc kubenswrapper[4606]: I0123 17:10:28.479632 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28" (UID: "2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:10:28 crc kubenswrapper[4606]: I0123 17:10:28.480047 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28" (UID: "2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:10:28 crc kubenswrapper[4606]: I0123 17:10:28.511005 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28" (UID: "2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:10:28 crc kubenswrapper[4606]: I0123 17:10:28.556451 4606 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:28 crc kubenswrapper[4606]: I0123 17:10:28.556488 4606 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:28 crc kubenswrapper[4606]: I0123 17:10:28.556500 4606 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:28 crc kubenswrapper[4606]: I0123 17:10:28.612636 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-57977896f6-qrk2s" podUID="59fc19de-8c69-43e4-bf0d-1cb8ea9a9120" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 23 17:10:28 crc kubenswrapper[4606]: I0123 17:10:28.625411 4606 scope.go:117] "RemoveContainer" containerID="c35838b491a5a97be57a4fbd084e504c32b358d404f107514078105606c60197" Jan 23 17:10:28 crc kubenswrapper[4606]: I0123 17:10:28.708672 4606 scope.go:117] "RemoveContainer" containerID="1e3c396c09a5b1e724d45f2624ff44a9a91781e53ce91c2e79023072d61a2b13" Jan 23 17:10:28 crc kubenswrapper[4606]: E0123 17:10:28.709986 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e3c396c09a5b1e724d45f2624ff44a9a91781e53ce91c2e79023072d61a2b13\": container with ID starting with 1e3c396c09a5b1e724d45f2624ff44a9a91781e53ce91c2e79023072d61a2b13 not found: ID does not exist" containerID="1e3c396c09a5b1e724d45f2624ff44a9a91781e53ce91c2e79023072d61a2b13" Jan 23 17:10:28 crc kubenswrapper[4606]: I0123 17:10:28.710022 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e3c396c09a5b1e724d45f2624ff44a9a91781e53ce91c2e79023072d61a2b13"} err="failed to get container status \"1e3c396c09a5b1e724d45f2624ff44a9a91781e53ce91c2e79023072d61a2b13\": rpc error: code = NotFound desc = could not find container \"1e3c396c09a5b1e724d45f2624ff44a9a91781e53ce91c2e79023072d61a2b13\": container with ID starting with 1e3c396c09a5b1e724d45f2624ff44a9a91781e53ce91c2e79023072d61a2b13 not found: ID does not exist" Jan 23 17:10:28 crc kubenswrapper[4606]: I0123 17:10:28.710047 4606 scope.go:117] "RemoveContainer" containerID="c35838b491a5a97be57a4fbd084e504c32b358d404f107514078105606c60197" Jan 23 17:10:28 crc kubenswrapper[4606]: E0123 17:10:28.710365 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c35838b491a5a97be57a4fbd084e504c32b358d404f107514078105606c60197\": container with ID starting with c35838b491a5a97be57a4fbd084e504c32b358d404f107514078105606c60197 not found: ID does not exist" containerID="c35838b491a5a97be57a4fbd084e504c32b358d404f107514078105606c60197" Jan 23 17:10:28 crc kubenswrapper[4606]: I0123 17:10:28.710394 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c35838b491a5a97be57a4fbd084e504c32b358d404f107514078105606c60197"} err="failed to 
Jan 23 17:10:28 crc kubenswrapper[4606]: I0123 17:10:28.723642 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b895b5785-lptqz"]
Jan 23 17:10:28 crc kubenswrapper[4606]: I0123 17:10:28.752088 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b895b5785-lptqz"]
Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.126960 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"239b6b42-d315-47cf-8a72-e12639318fe7","Type":"ContainerStarted","Data":"d8cf6aac1e30e29f9e0e8c64b6019f889faca9b7c04a7205c7c4a8a914e1edcc"}
Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.127703 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="239b6b42-d315-47cf-8a72-e12639318fe7" containerName="cinder-api" containerID="cri-o://d8cf6aac1e30e29f9e0e8c64b6019f889faca9b7c04a7205c7c4a8a914e1edcc" gracePeriod=30
Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.127711 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.128099 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="239b6b42-d315-47cf-8a72-e12639318fe7" containerName="cinder-api-log" containerID="cri-o://fd89c0a4f7e677bd674d591f6849832be13c9ccf8e4292939beccbd739ec576f" gracePeriod=30
Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.146340 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5c8f9f6b46-pqjgt" event={"ID":"342b1b35-f2a3-48a0-9342-23f3d0fa9907","Type":"ContainerStarted","Data":"f4ca6c19781ede331f7d00b30fcd85371edad8393dfde330eb51e4bf8dd6940c"}
Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.146397 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5c8f9f6b46-pqjgt" event={"ID":"342b1b35-f2a3-48a0-9342-23f3d0fa9907","Type":"ContainerStarted","Data":"98cf3c1befe6d7e5b852d2d36a9d6a2586bade571cfdcc6d7868f0a7fa67be8c"}
Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.146926 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5c8f9f6b46-pqjgt"
Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.147110 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5c8f9f6b46-pqjgt"
Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.168678 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.168657067 podStartE2EDuration="6.168657067s" podCreationTimestamp="2026-01-23 17:10:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:10:29.155553946 +0000 UTC m=+1063.155746362" watchObservedRunningTime="2026-01-23 17:10:29.168657067 +0000 UTC m=+1063.168849463"
Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.170437 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"20bd9f99-2697-4e95-91cb-18d490d4b19f","Type":"ContainerStarted","Data":"08cd2289d6d1fa4c30ac5a838a884f3e8310bb5f8ac14a4ba06c1d2a42e2b029"}
event={"ID":"20bd9f99-2697-4e95-91cb-18d490d4b19f","Type":"ContainerStarted","Data":"08cd2289d6d1fa4c30ac5a838a884f3e8310bb5f8ac14a4ba06c1d2a42e2b029"} Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.183241 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5c8f9f6b46-pqjgt" podStartSLOduration=4.183225113 podStartE2EDuration="4.183225113s" podCreationTimestamp="2026-01-23 17:10:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:10:29.180272939 +0000 UTC m=+1063.180465345" watchObservedRunningTime="2026-01-23 17:10:29.183225113 +0000 UTC m=+1063.183417519" Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.183529 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-j54ls" event={"ID":"bd37ba52-7ce1-4151-8764-14a60ee315bc","Type":"ContainerStarted","Data":"5b962c6a7f2e043d35614b415307bbe01f336609a38fcab54a4344d8a213d4a0"} Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.183727 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-j54ls" Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.225571 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.164728812 podStartE2EDuration="7.225553869s" podCreationTimestamp="2026-01-23 17:10:22 +0000 UTC" firstStartedPulling="2026-01-23 17:10:23.890017622 +0000 UTC m=+1057.890210028" lastFinishedPulling="2026-01-23 17:10:24.950842679 +0000 UTC m=+1058.951035085" observedRunningTime="2026-01-23 17:10:29.21047865 +0000 UTC m=+1063.210671056" watchObservedRunningTime="2026-01-23 17:10:29.225553869 +0000 UTC m=+1063.225746275" Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.241023 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-j54ls" podStartSLOduration=4.241009469 podStartE2EDuration="4.241009469s" podCreationTimestamp="2026-01-23 17:10:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:10:29.238789513 +0000 UTC m=+1063.238981919" watchObservedRunningTime="2026-01-23 17:10:29.241009469 +0000 UTC m=+1063.241201875" Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.325645 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-d4bdcdf98-bmdhd"] Jan 23 17:10:29 crc kubenswrapper[4606]: E0123 17:10:29.326241 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28" containerName="init" Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.326310 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28" containerName="init" Jan 23 17:10:29 crc kubenswrapper[4606]: E0123 17:10:29.326378 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28" containerName="dnsmasq-dns" Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.326435 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28" containerName="dnsmasq-dns" Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.326669 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28" containerName="dnsmasq-dns" Jan 23 17:10:29 crc 
Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.335666 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-d4bdcdf98-bmdhd"]
Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.337909 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.338137 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.373809 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f71cd544-6079-402d-a59f-0a3d4fb1d1f7-combined-ca-bundle\") pod \"barbican-api-d4bdcdf98-bmdhd\" (UID: \"f71cd544-6079-402d-a59f-0a3d4fb1d1f7\") " pod="openstack/barbican-api-d4bdcdf98-bmdhd"
Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.373858 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f71cd544-6079-402d-a59f-0a3d4fb1d1f7-logs\") pod \"barbican-api-d4bdcdf98-bmdhd\" (UID: \"f71cd544-6079-402d-a59f-0a3d4fb1d1f7\") " pod="openstack/barbican-api-d4bdcdf98-bmdhd"
Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.373885 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f71cd544-6079-402d-a59f-0a3d4fb1d1f7-internal-tls-certs\") pod \"barbican-api-d4bdcdf98-bmdhd\" (UID: \"f71cd544-6079-402d-a59f-0a3d4fb1d1f7\") " pod="openstack/barbican-api-d4bdcdf98-bmdhd"
Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.373904 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f71cd544-6079-402d-a59f-0a3d4fb1d1f7-config-data\") pod \"barbican-api-d4bdcdf98-bmdhd\" (UID: \"f71cd544-6079-402d-a59f-0a3d4fb1d1f7\") " pod="openstack/barbican-api-d4bdcdf98-bmdhd"
Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.373954 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q65qw\" (UniqueName: \"kubernetes.io/projected/f71cd544-6079-402d-a59f-0a3d4fb1d1f7-kube-api-access-q65qw\") pod \"barbican-api-d4bdcdf98-bmdhd\" (UID: \"f71cd544-6079-402d-a59f-0a3d4fb1d1f7\") " pod="openstack/barbican-api-d4bdcdf98-bmdhd"
Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.374001 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f71cd544-6079-402d-a59f-0a3d4fb1d1f7-config-data-custom\") pod \"barbican-api-d4bdcdf98-bmdhd\" (UID: \"f71cd544-6079-402d-a59f-0a3d4fb1d1f7\") " pod="openstack/barbican-api-d4bdcdf98-bmdhd"
Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.374061 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f71cd544-6079-402d-a59f-0a3d4fb1d1f7-public-tls-certs\") pod \"barbican-api-d4bdcdf98-bmdhd\" (UID: \"f71cd544-6079-402d-a59f-0a3d4fb1d1f7\") " pod="openstack/barbican-api-d4bdcdf98-bmdhd"
Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.476038 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f71cd544-6079-402d-a59f-0a3d4fb1d1f7-public-tls-certs\") pod \"barbican-api-d4bdcdf98-bmdhd\" (UID: \"f71cd544-6079-402d-a59f-0a3d4fb1d1f7\") " pod="openstack/barbican-api-d4bdcdf98-bmdhd"
Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.476353 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f71cd544-6079-402d-a59f-0a3d4fb1d1f7-combined-ca-bundle\") pod \"barbican-api-d4bdcdf98-bmdhd\" (UID: \"f71cd544-6079-402d-a59f-0a3d4fb1d1f7\") " pod="openstack/barbican-api-d4bdcdf98-bmdhd"
Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.476383 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f71cd544-6079-402d-a59f-0a3d4fb1d1f7-logs\") pod \"barbican-api-d4bdcdf98-bmdhd\" (UID: \"f71cd544-6079-402d-a59f-0a3d4fb1d1f7\") " pod="openstack/barbican-api-d4bdcdf98-bmdhd"
Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.476402 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f71cd544-6079-402d-a59f-0a3d4fb1d1f7-internal-tls-certs\") pod \"barbican-api-d4bdcdf98-bmdhd\" (UID: \"f71cd544-6079-402d-a59f-0a3d4fb1d1f7\") " pod="openstack/barbican-api-d4bdcdf98-bmdhd"
Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.476422 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f71cd544-6079-402d-a59f-0a3d4fb1d1f7-config-data\") pod \"barbican-api-d4bdcdf98-bmdhd\" (UID: \"f71cd544-6079-402d-a59f-0a3d4fb1d1f7\") " pod="openstack/barbican-api-d4bdcdf98-bmdhd"
Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.476478 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q65qw\" (UniqueName: \"kubernetes.io/projected/f71cd544-6079-402d-a59f-0a3d4fb1d1f7-kube-api-access-q65qw\") pod \"barbican-api-d4bdcdf98-bmdhd\" (UID: \"f71cd544-6079-402d-a59f-0a3d4fb1d1f7\") " pod="openstack/barbican-api-d4bdcdf98-bmdhd"
Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.476537 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f71cd544-6079-402d-a59f-0a3d4fb1d1f7-config-data-custom\") pod \"barbican-api-d4bdcdf98-bmdhd\" (UID: \"f71cd544-6079-402d-a59f-0a3d4fb1d1f7\") " pod="openstack/barbican-api-d4bdcdf98-bmdhd"
Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.480730 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f71cd544-6079-402d-a59f-0a3d4fb1d1f7-logs\") pod \"barbican-api-d4bdcdf98-bmdhd\" (UID: \"f71cd544-6079-402d-a59f-0a3d4fb1d1f7\") " pod="openstack/barbican-api-d4bdcdf98-bmdhd"
Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.503552 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f71cd544-6079-402d-a59f-0a3d4fb1d1f7-combined-ca-bundle\") pod \"barbican-api-d4bdcdf98-bmdhd\" (UID: \"f71cd544-6079-402d-a59f-0a3d4fb1d1f7\") " pod="openstack/barbican-api-d4bdcdf98-bmdhd"
Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.504234 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q65qw\" (UniqueName: \"kubernetes.io/projected/f71cd544-6079-402d-a59f-0a3d4fb1d1f7-kube-api-access-q65qw\") pod \"barbican-api-d4bdcdf98-bmdhd\" (UID: \"f71cd544-6079-402d-a59f-0a3d4fb1d1f7\") " pod="openstack/barbican-api-d4bdcdf98-bmdhd"
"MountVolume.SetUp succeeded for volume \"kube-api-access-q65qw\" (UniqueName: \"kubernetes.io/projected/f71cd544-6079-402d-a59f-0a3d4fb1d1f7-kube-api-access-q65qw\") pod \"barbican-api-d4bdcdf98-bmdhd\" (UID: \"f71cd544-6079-402d-a59f-0a3d4fb1d1f7\") " pod="openstack/barbican-api-d4bdcdf98-bmdhd" Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.504296 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f71cd544-6079-402d-a59f-0a3d4fb1d1f7-internal-tls-certs\") pod \"barbican-api-d4bdcdf98-bmdhd\" (UID: \"f71cd544-6079-402d-a59f-0a3d4fb1d1f7\") " pod="openstack/barbican-api-d4bdcdf98-bmdhd" Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.506774 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f71cd544-6079-402d-a59f-0a3d4fb1d1f7-config-data\") pod \"barbican-api-d4bdcdf98-bmdhd\" (UID: \"f71cd544-6079-402d-a59f-0a3d4fb1d1f7\") " pod="openstack/barbican-api-d4bdcdf98-bmdhd" Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.510036 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f71cd544-6079-402d-a59f-0a3d4fb1d1f7-config-data-custom\") pod \"barbican-api-d4bdcdf98-bmdhd\" (UID: \"f71cd544-6079-402d-a59f-0a3d4fb1d1f7\") " pod="openstack/barbican-api-d4bdcdf98-bmdhd" Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.510537 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f71cd544-6079-402d-a59f-0a3d4fb1d1f7-public-tls-certs\") pod \"barbican-api-d4bdcdf98-bmdhd\" (UID: \"f71cd544-6079-402d-a59f-0a3d4fb1d1f7\") " pod="openstack/barbican-api-d4bdcdf98-bmdhd" Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.661727 4606 util.go:30] "No sandbox for pod can be found. 
Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.934097 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.934144 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.934183 4606 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7"
Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.934818 4606 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3cef2b80453103e1f146f2cbf852884e416a5490d931dcd7771b99f87019e13d"} pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 17:10:29 crc kubenswrapper[4606]: I0123 17:10:29.934868 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" containerID="cri-o://3cef2b80453103e1f146f2cbf852884e416a5490d931dcd7771b99f87019e13d" gracePeriod=600
Jan 23 17:10:30 crc kubenswrapper[4606]: I0123 17:10:30.175679 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6fc6b8db9b-nkqzh"
Jan 23 17:10:30 crc kubenswrapper[4606]: I0123 17:10:30.243871 4606 generic.go:334] "Generic (PLEG): container finished" podID="239b6b42-d315-47cf-8a72-e12639318fe7" containerID="fd89c0a4f7e677bd674d591f6849832be13c9ccf8e4292939beccbd739ec576f" exitCode=143
Jan 23 17:10:30 crc kubenswrapper[4606]: I0123 17:10:30.243997 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"239b6b42-d315-47cf-8a72-e12639318fe7","Type":"ContainerDied","Data":"fd89c0a4f7e677bd674d591f6849832be13c9ccf8e4292939beccbd739ec576f"}
Jan 23 17:10:30 crc kubenswrapper[4606]: I0123 17:10:30.248763 4606 generic.go:334] "Generic (PLEG): container finished" podID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerID="3cef2b80453103e1f146f2cbf852884e416a5490d931dcd7771b99f87019e13d" exitCode=0
Jan 23 17:10:30 crc kubenswrapper[4606]: I0123 17:10:30.248805 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerDied","Data":"3cef2b80453103e1f146f2cbf852884e416a5490d931dcd7771b99f87019e13d"}
Jan 23 17:10:30 crc kubenswrapper[4606]: I0123 17:10:30.248870 4606 scope.go:117] "RemoveContainer" containerID="8f3c9d6abf9f7db362dce7fd403f9c2cfbf053099332de8a97b1275b83560c1e"
Jan 23 17:10:30 crc kubenswrapper[4606]: I0123 17:10:30.404007 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28" path="/var/lib/kubelet/pods/2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28/volumes"
path="/var/lib/kubelet/pods/2ba563ee-07c4-4fa0-a8a0-3a46ccab5b28/volumes" Jan 23 17:10:30 crc kubenswrapper[4606]: I0123 17:10:30.503996 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-54675cc74f-8dftn"] Jan 23 17:10:30 crc kubenswrapper[4606]: I0123 17:10:30.507373 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-54675cc74f-8dftn" podUID="0e50cc4c-a763-4fd7-b227-2f1afbfcaaea" containerName="neutron-api" containerID="cri-o://9ef667d4c1e72c38d3bcdf6cd6e1616b6b782e1b711e8b6ee1035a1f64e4da0c" gracePeriod=30 Jan 23 17:10:30 crc kubenswrapper[4606]: I0123 17:10:30.507960 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-54675cc74f-8dftn" podUID="0e50cc4c-a763-4fd7-b227-2f1afbfcaaea" containerName="neutron-httpd" containerID="cri-o://1096d81101b53aee4e2489b7c87c329beaacc44735756410914bf61b364938a4" gracePeriod=30 Jan 23 17:10:30 crc kubenswrapper[4606]: I0123 17:10:30.552596 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-54675cc74f-8dftn" podUID="0e50cc4c-a763-4fd7-b227-2f1afbfcaaea" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.153:9696/\": EOF" Jan 23 17:10:30 crc kubenswrapper[4606]: I0123 17:10:30.557643 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6ffcd54777-hkxkz"] Jan 23 17:10:30 crc kubenswrapper[4606]: I0123 17:10:30.559961 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6ffcd54777-hkxkz" Jan 23 17:10:30 crc kubenswrapper[4606]: I0123 17:10:30.568645 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6ffcd54777-hkxkz"] Jan 23 17:10:30 crc kubenswrapper[4606]: I0123 17:10:30.614239 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a926781-d4e8-4d2a-b8fd-72d4d6273927-combined-ca-bundle\") pod \"neutron-6ffcd54777-hkxkz\" (UID: \"7a926781-d4e8-4d2a-b8fd-72d4d6273927\") " pod="openstack/neutron-6ffcd54777-hkxkz" Jan 23 17:10:30 crc kubenswrapper[4606]: I0123 17:10:30.614291 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7a926781-d4e8-4d2a-b8fd-72d4d6273927-httpd-config\") pod \"neutron-6ffcd54777-hkxkz\" (UID: \"7a926781-d4e8-4d2a-b8fd-72d4d6273927\") " pod="openstack/neutron-6ffcd54777-hkxkz" Jan 23 17:10:30 crc kubenswrapper[4606]: I0123 17:10:30.614333 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a926781-d4e8-4d2a-b8fd-72d4d6273927-public-tls-certs\") pod \"neutron-6ffcd54777-hkxkz\" (UID: \"7a926781-d4e8-4d2a-b8fd-72d4d6273927\") " pod="openstack/neutron-6ffcd54777-hkxkz" Jan 23 17:10:30 crc kubenswrapper[4606]: I0123 17:10:30.614405 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a926781-d4e8-4d2a-b8fd-72d4d6273927-internal-tls-certs\") pod \"neutron-6ffcd54777-hkxkz\" (UID: \"7a926781-d4e8-4d2a-b8fd-72d4d6273927\") " pod="openstack/neutron-6ffcd54777-hkxkz" Jan 23 17:10:30 crc kubenswrapper[4606]: I0123 17:10:30.614421 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7a926781-d4e8-4d2a-b8fd-72d4d6273927-ovndb-tls-certs\") pod \"neutron-6ffcd54777-hkxkz\" (UID: \"7a926781-d4e8-4d2a-b8fd-72d4d6273927\") " pod="openstack/neutron-6ffcd54777-hkxkz" Jan 23 17:10:30 crc kubenswrapper[4606]: I0123 17:10:30.614447 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sh6h\" (UniqueName: \"kubernetes.io/projected/7a926781-d4e8-4d2a-b8fd-72d4d6273927-kube-api-access-6sh6h\") pod \"neutron-6ffcd54777-hkxkz\" (UID: \"7a926781-d4e8-4d2a-b8fd-72d4d6273927\") " pod="openstack/neutron-6ffcd54777-hkxkz" Jan 23 17:10:30 crc kubenswrapper[4606]: I0123 17:10:30.614476 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7a926781-d4e8-4d2a-b8fd-72d4d6273927-config\") pod \"neutron-6ffcd54777-hkxkz\" (UID: \"7a926781-d4e8-4d2a-b8fd-72d4d6273927\") " pod="openstack/neutron-6ffcd54777-hkxkz" Jan 23 17:10:30 crc kubenswrapper[4606]: I0123 17:10:30.715306 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sh6h\" (UniqueName: \"kubernetes.io/projected/7a926781-d4e8-4d2a-b8fd-72d4d6273927-kube-api-access-6sh6h\") pod \"neutron-6ffcd54777-hkxkz\" (UID: \"7a926781-d4e8-4d2a-b8fd-72d4d6273927\") " pod="openstack/neutron-6ffcd54777-hkxkz" Jan 23 17:10:30 crc kubenswrapper[4606]: I0123 17:10:30.715366 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7a926781-d4e8-4d2a-b8fd-72d4d6273927-config\") pod \"neutron-6ffcd54777-hkxkz\" (UID: \"7a926781-d4e8-4d2a-b8fd-72d4d6273927\") " pod="openstack/neutron-6ffcd54777-hkxkz" Jan 23 17:10:30 crc kubenswrapper[4606]: I0123 17:10:30.715415 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a926781-d4e8-4d2a-b8fd-72d4d6273927-combined-ca-bundle\") pod \"neutron-6ffcd54777-hkxkz\" (UID: \"7a926781-d4e8-4d2a-b8fd-72d4d6273927\") " pod="openstack/neutron-6ffcd54777-hkxkz" Jan 23 17:10:30 crc kubenswrapper[4606]: I0123 17:10:30.715442 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7a926781-d4e8-4d2a-b8fd-72d4d6273927-httpd-config\") pod \"neutron-6ffcd54777-hkxkz\" (UID: \"7a926781-d4e8-4d2a-b8fd-72d4d6273927\") " pod="openstack/neutron-6ffcd54777-hkxkz" Jan 23 17:10:30 crc kubenswrapper[4606]: I0123 17:10:30.716207 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a926781-d4e8-4d2a-b8fd-72d4d6273927-public-tls-certs\") pod \"neutron-6ffcd54777-hkxkz\" (UID: \"7a926781-d4e8-4d2a-b8fd-72d4d6273927\") " pod="openstack/neutron-6ffcd54777-hkxkz" Jan 23 17:10:30 crc kubenswrapper[4606]: I0123 17:10:30.716308 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a926781-d4e8-4d2a-b8fd-72d4d6273927-internal-tls-certs\") pod \"neutron-6ffcd54777-hkxkz\" (UID: \"7a926781-d4e8-4d2a-b8fd-72d4d6273927\") " pod="openstack/neutron-6ffcd54777-hkxkz" Jan 23 17:10:30 crc kubenswrapper[4606]: I0123 17:10:30.716328 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a926781-d4e8-4d2a-b8fd-72d4d6273927-ovndb-tls-certs\") pod 
\"neutron-6ffcd54777-hkxkz\" (UID: \"7a926781-d4e8-4d2a-b8fd-72d4d6273927\") " pod="openstack/neutron-6ffcd54777-hkxkz" Jan 23 17:10:30 crc kubenswrapper[4606]: I0123 17:10:30.723319 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7a926781-d4e8-4d2a-b8fd-72d4d6273927-config\") pod \"neutron-6ffcd54777-hkxkz\" (UID: \"7a926781-d4e8-4d2a-b8fd-72d4d6273927\") " pod="openstack/neutron-6ffcd54777-hkxkz" Jan 23 17:10:30 crc kubenswrapper[4606]: I0123 17:10:30.723874 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a926781-d4e8-4d2a-b8fd-72d4d6273927-ovndb-tls-certs\") pod \"neutron-6ffcd54777-hkxkz\" (UID: \"7a926781-d4e8-4d2a-b8fd-72d4d6273927\") " pod="openstack/neutron-6ffcd54777-hkxkz" Jan 23 17:10:30 crc kubenswrapper[4606]: I0123 17:10:30.725306 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7a926781-d4e8-4d2a-b8fd-72d4d6273927-httpd-config\") pod \"neutron-6ffcd54777-hkxkz\" (UID: \"7a926781-d4e8-4d2a-b8fd-72d4d6273927\") " pod="openstack/neutron-6ffcd54777-hkxkz" Jan 23 17:10:30 crc kubenswrapper[4606]: I0123 17:10:30.726003 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a926781-d4e8-4d2a-b8fd-72d4d6273927-combined-ca-bundle\") pod \"neutron-6ffcd54777-hkxkz\" (UID: \"7a926781-d4e8-4d2a-b8fd-72d4d6273927\") " pod="openstack/neutron-6ffcd54777-hkxkz" Jan 23 17:10:30 crc kubenswrapper[4606]: I0123 17:10:30.728546 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a926781-d4e8-4d2a-b8fd-72d4d6273927-public-tls-certs\") pod \"neutron-6ffcd54777-hkxkz\" (UID: \"7a926781-d4e8-4d2a-b8fd-72d4d6273927\") " pod="openstack/neutron-6ffcd54777-hkxkz" Jan 23 17:10:30 crc kubenswrapper[4606]: I0123 17:10:30.746601 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sh6h\" (UniqueName: \"kubernetes.io/projected/7a926781-d4e8-4d2a-b8fd-72d4d6273927-kube-api-access-6sh6h\") pod \"neutron-6ffcd54777-hkxkz\" (UID: \"7a926781-d4e8-4d2a-b8fd-72d4d6273927\") " pod="openstack/neutron-6ffcd54777-hkxkz" Jan 23 17:10:30 crc kubenswrapper[4606]: I0123 17:10:30.747516 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a926781-d4e8-4d2a-b8fd-72d4d6273927-internal-tls-certs\") pod \"neutron-6ffcd54777-hkxkz\" (UID: \"7a926781-d4e8-4d2a-b8fd-72d4d6273927\") " pod="openstack/neutron-6ffcd54777-hkxkz" Jan 23 17:10:30 crc kubenswrapper[4606]: I0123 17:10:30.878179 4606 util.go:30] "No sandbox for pod can be found. 
Jan 23 17:10:31 crc kubenswrapper[4606]: I0123 17:10:31.270020 4606 generic.go:334] "Generic (PLEG): container finished" podID="0e50cc4c-a763-4fd7-b227-2f1afbfcaaea" containerID="1096d81101b53aee4e2489b7c87c329beaacc44735756410914bf61b364938a4" exitCode=0
Jan 23 17:10:31 crc kubenswrapper[4606]: I0123 17:10:31.270068 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54675cc74f-8dftn" event={"ID":"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea","Type":"ContainerDied","Data":"1096d81101b53aee4e2489b7c87c329beaacc44735756410914bf61b364938a4"}
Jan 23 17:10:32 crc kubenswrapper[4606]: I0123 17:10:32.360776 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-54675cc74f-8dftn" podUID="0e50cc4c-a763-4fd7-b227-2f1afbfcaaea" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.153:9696/\": dial tcp 10.217.0.153:9696: connect: connection refused"
Jan 23 17:10:33 crc kubenswrapper[4606]: I0123 17:10:33.291233 4606 generic.go:334] "Generic (PLEG): container finished" podID="e23f5510-b5f6-42a1-b751-caa502fba254" containerID="a4544f2d7c715ef9f85ecb2a81fd369d6138639b3297403e2e4df3d4e4b1471a" exitCode=137
Jan 23 17:10:33 crc kubenswrapper[4606]: I0123 17:10:33.291466 4606 generic.go:334] "Generic (PLEG): container finished" podID="e23f5510-b5f6-42a1-b751-caa502fba254" containerID="73dd298dd581b5c8c71be4bcd7a660571fdcf22b16ce794580fc0b943c63328a" exitCode=137
Jan 23 17:10:33 crc kubenswrapper[4606]: I0123 17:10:33.291486 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b7bf8f6df-4w647" event={"ID":"e23f5510-b5f6-42a1-b751-caa502fba254","Type":"ContainerDied","Data":"a4544f2d7c715ef9f85ecb2a81fd369d6138639b3297403e2e4df3d4e4b1471a"}
Jan 23 17:10:33 crc kubenswrapper[4606]: I0123 17:10:33.291510 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b7bf8f6df-4w647" event={"ID":"e23f5510-b5f6-42a1-b751-caa502fba254","Type":"ContainerDied","Data":"73dd298dd581b5c8c71be4bcd7a660571fdcf22b16ce794580fc0b943c63328a"}
Jan 23 17:10:33 crc kubenswrapper[4606]: I0123 17:10:33.386951 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 23 17:10:33 crc kubenswrapper[4606]: I0123 17:10:33.586780 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Jan 23 17:10:34 crc kubenswrapper[4606]: I0123 17:10:34.300880 4606 generic.go:334] "Generic (PLEG): container finished" podID="0e50cc4c-a763-4fd7-b227-2f1afbfcaaea" containerID="9ef667d4c1e72c38d3bcdf6cd6e1616b6b782e1b711e8b6ee1035a1f64e4da0c" exitCode=0
Jan 23 17:10:34 crc kubenswrapper[4606]: I0123 17:10:34.300948 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54675cc74f-8dftn" event={"ID":"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea","Type":"ContainerDied","Data":"9ef667d4c1e72c38d3bcdf6cd6e1616b6b782e1b711e8b6ee1035a1f64e4da0c"}
Jan 23 17:10:34 crc kubenswrapper[4606]: I0123 17:10:34.365187 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 23 17:10:35 crc kubenswrapper[4606]: I0123 17:10:35.309272 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="20bd9f99-2697-4e95-91cb-18d490d4b19f" containerName="cinder-scheduler" containerID="cri-o://543bfcaac3c348f86359b6cadacb4e251cdce8083b2bf54917046ece0a420471" gracePeriod=30
Jan 23 17:10:35 crc kubenswrapper[4606]: I0123 17:10:35.309598 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="20bd9f99-2697-4e95-91cb-18d490d4b19f" containerName="probe" containerID="cri-o://08cd2289d6d1fa4c30ac5a838a884f3e8310bb5f8ac14a4ba06c1d2a42e2b029" gracePeriod=30
Jan 23 17:10:35 crc kubenswrapper[4606]: I0123 17:10:35.916768 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-j54ls"
Jan 23 17:10:35 crc kubenswrapper[4606]: I0123 17:10:35.994455 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-5gbjl"]
Jan 23 17:10:35 crc kubenswrapper[4606]: I0123 17:10:35.994988 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-5gbjl" podUID="c747544b-d43a-46bf-8b8a-dfab974a8788" containerName="dnsmasq-dns" containerID="cri-o://05af7e4c5a62803403e4c83e43f496365e5314ccd0824e748cfb6311601bc723" gracePeriod=10
Jan 23 17:10:36 crc kubenswrapper[4606]: I0123 17:10:36.187703 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Jan 23 17:10:36 crc kubenswrapper[4606]: I0123 17:10:36.347465 4606 generic.go:334] "Generic (PLEG): container finished" podID="20bd9f99-2697-4e95-91cb-18d490d4b19f" containerID="08cd2289d6d1fa4c30ac5a838a884f3e8310bb5f8ac14a4ba06c1d2a42e2b029" exitCode=0
Jan 23 17:10:36 crc kubenswrapper[4606]: I0123 17:10:36.347599 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"20bd9f99-2697-4e95-91cb-18d490d4b19f","Type":"ContainerDied","Data":"08cd2289d6d1fa4c30ac5a838a884f3e8310bb5f8ac14a4ba06c1d2a42e2b029"}
Jan 23 17:10:36 crc kubenswrapper[4606]: I0123 17:10:36.349661 4606 generic.go:334] "Generic (PLEG): container finished" podID="c747544b-d43a-46bf-8b8a-dfab974a8788" containerID="05af7e4c5a62803403e4c83e43f496365e5314ccd0824e748cfb6311601bc723" exitCode=0
Jan 23 17:10:36 crc kubenswrapper[4606]: I0123 17:10:36.349685 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-5gbjl" event={"ID":"c747544b-d43a-46bf-8b8a-dfab974a8788","Type":"ContainerDied","Data":"05af7e4c5a62803403e4c83e43f496365e5314ccd0824e748cfb6311601bc723"}
Jan 23 17:10:37 crc kubenswrapper[4606]: I0123 17:10:37.379913 4606 generic.go:334] "Generic (PLEG): container finished" podID="20bd9f99-2697-4e95-91cb-18d490d4b19f" containerID="543bfcaac3c348f86359b6cadacb4e251cdce8083b2bf54917046ece0a420471" exitCode=0
Jan 23 17:10:37 crc kubenswrapper[4606]: I0123 17:10:37.380262 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"20bd9f99-2697-4e95-91cb-18d490d4b19f","Type":"ContainerDied","Data":"543bfcaac3c348f86359b6cadacb4e251cdce8083b2bf54917046ece0a420471"}
Jan 23 17:10:38 crc kubenswrapper[4606]: I0123 17:10:38.177013 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5c8f9f6b46-pqjgt"
Jan 23 17:10:38 crc kubenswrapper[4606]: I0123 17:10:38.768170 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5c8f9f6b46-pqjgt"
Jan 23 17:10:38 crc kubenswrapper[4606]: I0123 17:10:38.907693 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-5gbjl"
Jan 23 17:10:38 crc kubenswrapper[4606]: I0123 17:10:38.960749 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-54675cc74f-8dftn"
Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.019393 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7b7bf8f6df-4w647"
Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.030352 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c747544b-d43a-46bf-8b8a-dfab974a8788-dns-swift-storage-0\") pod \"c747544b-d43a-46bf-8b8a-dfab974a8788\" (UID: \"c747544b-d43a-46bf-8b8a-dfab974a8788\") "
Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.030396 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snld4\" (UniqueName: \"kubernetes.io/projected/c747544b-d43a-46bf-8b8a-dfab974a8788-kube-api-access-snld4\") pod \"c747544b-d43a-46bf-8b8a-dfab974a8788\" (UID: \"c747544b-d43a-46bf-8b8a-dfab974a8788\") "
Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.030429 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c747544b-d43a-46bf-8b8a-dfab974a8788-config\") pod \"c747544b-d43a-46bf-8b8a-dfab974a8788\" (UID: \"c747544b-d43a-46bf-8b8a-dfab974a8788\") "
Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.030484 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c747544b-d43a-46bf-8b8a-dfab974a8788-ovsdbserver-nb\") pod \"c747544b-d43a-46bf-8b8a-dfab974a8788\" (UID: \"c747544b-d43a-46bf-8b8a-dfab974a8788\") "
Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.030534 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-ovndb-tls-certs\") pod \"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea\" (UID: \"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea\") "
Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.030562 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-httpd-config\") pod \"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea\" (UID: \"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea\") "
Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.030580 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-config\") pod \"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea\" (UID: \"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea\") "
Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.030611 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-internal-tls-certs\") pod \"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea\" (UID: \"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea\") "
Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.030636 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c747544b-d43a-46bf-8b8a-dfab974a8788-ovsdbserver-sb\") pod \"c747544b-d43a-46bf-8b8a-dfab974a8788\" (UID: \"c747544b-d43a-46bf-8b8a-dfab974a8788\") "
\"c747544b-d43a-46bf-8b8a-dfab974a8788\" (UID: \"c747544b-d43a-46bf-8b8a-dfab974a8788\") " Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.030681 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c747544b-d43a-46bf-8b8a-dfab974a8788-dns-svc\") pod \"c747544b-d43a-46bf-8b8a-dfab974a8788\" (UID: \"c747544b-d43a-46bf-8b8a-dfab974a8788\") " Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.030726 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dm7tr\" (UniqueName: \"kubernetes.io/projected/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-kube-api-access-dm7tr\") pod \"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea\" (UID: \"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea\") " Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.030839 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-public-tls-certs\") pod \"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea\" (UID: \"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea\") " Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.030859 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-combined-ca-bundle\") pod \"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea\" (UID: \"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea\") " Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.094800 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-kube-api-access-dm7tr" (OuterVolumeSpecName: "kube-api-access-dm7tr") pod "0e50cc4c-a763-4fd7-b227-2f1afbfcaaea" (UID: "0e50cc4c-a763-4fd7-b227-2f1afbfcaaea"). InnerVolumeSpecName "kube-api-access-dm7tr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.094981 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c747544b-d43a-46bf-8b8a-dfab974a8788-kube-api-access-snld4" (OuterVolumeSpecName: "kube-api-access-snld4") pod "c747544b-d43a-46bf-8b8a-dfab974a8788" (UID: "c747544b-d43a-46bf-8b8a-dfab974a8788"). InnerVolumeSpecName "kube-api-access-snld4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.127925 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "0e50cc4c-a763-4fd7-b227-2f1afbfcaaea" (UID: "0e50cc4c-a763-4fd7-b227-2f1afbfcaaea"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.135486 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e23f5510-b5f6-42a1-b751-caa502fba254-horizon-secret-key\") pod \"e23f5510-b5f6-42a1-b751-caa502fba254\" (UID: \"e23f5510-b5f6-42a1-b751-caa502fba254\") " Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.135692 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj762\" (UniqueName: \"kubernetes.io/projected/e23f5510-b5f6-42a1-b751-caa502fba254-kube-api-access-jj762\") pod \"e23f5510-b5f6-42a1-b751-caa502fba254\" (UID: \"e23f5510-b5f6-42a1-b751-caa502fba254\") " Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.135911 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e23f5510-b5f6-42a1-b751-caa502fba254-config-data\") pod \"e23f5510-b5f6-42a1-b751-caa502fba254\" (UID: \"e23f5510-b5f6-42a1-b751-caa502fba254\") " Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.136055 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e23f5510-b5f6-42a1-b751-caa502fba254-logs\") pod \"e23f5510-b5f6-42a1-b751-caa502fba254\" (UID: \"e23f5510-b5f6-42a1-b751-caa502fba254\") " Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.136261 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e23f5510-b5f6-42a1-b751-caa502fba254-scripts\") pod \"e23f5510-b5f6-42a1-b751-caa502fba254\" (UID: \"e23f5510-b5f6-42a1-b751-caa502fba254\") " Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.137220 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snld4\" (UniqueName: \"kubernetes.io/projected/c747544b-d43a-46bf-8b8a-dfab974a8788-kube-api-access-snld4\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.137353 4606 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.137471 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dm7tr\" (UniqueName: \"kubernetes.io/projected/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-kube-api-access-dm7tr\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.139063 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e23f5510-b5f6-42a1-b751-caa502fba254-logs" (OuterVolumeSpecName: "logs") pod "e23f5510-b5f6-42a1-b751-caa502fba254" (UID: "e23f5510-b5f6-42a1-b751-caa502fba254"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.164044 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e23f5510-b5f6-42a1-b751-caa502fba254-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "e23f5510-b5f6-42a1-b751-caa502fba254" (UID: "e23f5510-b5f6-42a1-b751-caa502fba254"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.179504 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.238865 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e23f5510-b5f6-42a1-b751-caa502fba254-kube-api-access-jj762" (OuterVolumeSpecName: "kube-api-access-jj762") pod "e23f5510-b5f6-42a1-b751-caa502fba254" (UID: "e23f5510-b5f6-42a1-b751-caa502fba254"). InnerVolumeSpecName "kube-api-access-jj762". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.239378 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kk9tc\" (UniqueName: \"kubernetes.io/projected/20bd9f99-2697-4e95-91cb-18d490d4b19f-kube-api-access-kk9tc\") pod \"20bd9f99-2697-4e95-91cb-18d490d4b19f\" (UID: \"20bd9f99-2697-4e95-91cb-18d490d4b19f\") " Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.239413 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20bd9f99-2697-4e95-91cb-18d490d4b19f-combined-ca-bundle\") pod \"20bd9f99-2697-4e95-91cb-18d490d4b19f\" (UID: \"20bd9f99-2697-4e95-91cb-18d490d4b19f\") " Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.239453 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20bd9f99-2697-4e95-91cb-18d490d4b19f-config-data-custom\") pod \"20bd9f99-2697-4e95-91cb-18d490d4b19f\" (UID: \"20bd9f99-2697-4e95-91cb-18d490d4b19f\") " Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.239515 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20bd9f99-2697-4e95-91cb-18d490d4b19f-config-data\") pod \"20bd9f99-2697-4e95-91cb-18d490d4b19f\" (UID: \"20bd9f99-2697-4e95-91cb-18d490d4b19f\") " Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.239570 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj762\" (UniqueName: \"kubernetes.io/projected/e23f5510-b5f6-42a1-b751-caa502fba254-kube-api-access-jj762\") pod \"e23f5510-b5f6-42a1-b751-caa502fba254\" (UID: \"e23f5510-b5f6-42a1-b751-caa502fba254\") " Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.239640 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20bd9f99-2697-4e95-91cb-18d490d4b19f-scripts\") pod \"20bd9f99-2697-4e95-91cb-18d490d4b19f\" (UID: \"20bd9f99-2697-4e95-91cb-18d490d4b19f\") " Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.239695 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/20bd9f99-2697-4e95-91cb-18d490d4b19f-etc-machine-id\") pod \"20bd9f99-2697-4e95-91cb-18d490d4b19f\" (UID: \"20bd9f99-2697-4e95-91cb-18d490d4b19f\") " Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.240026 4606 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e23f5510-b5f6-42a1-b751-caa502fba254-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.240038 4606 reconciler_common.go:293] 
"Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e23f5510-b5f6-42a1-b751-caa502fba254-logs\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.240066 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20bd9f99-2697-4e95-91cb-18d490d4b19f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "20bd9f99-2697-4e95-91cb-18d490d4b19f" (UID: "20bd9f99-2697-4e95-91cb-18d490d4b19f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 17:10:39 crc kubenswrapper[4606]: W0123 17:10:39.242673 4606 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/e23f5510-b5f6-42a1-b751-caa502fba254/volumes/kubernetes.io~projected/kube-api-access-jj762 Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.242740 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e23f5510-b5f6-42a1-b751-caa502fba254-kube-api-access-jj762" (OuterVolumeSpecName: "kube-api-access-jj762") pod "e23f5510-b5f6-42a1-b751-caa502fba254" (UID: "e23f5510-b5f6-42a1-b751-caa502fba254"). InnerVolumeSpecName "kube-api-access-jj762". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.258061 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20bd9f99-2697-4e95-91cb-18d490d4b19f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "20bd9f99-2697-4e95-91cb-18d490d4b19f" (UID: "20bd9f99-2697-4e95-91cb-18d490d4b19f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.312295 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20bd9f99-2697-4e95-91cb-18d490d4b19f-scripts" (OuterVolumeSpecName: "scripts") pod "20bd9f99-2697-4e95-91cb-18d490d4b19f" (UID: "20bd9f99-2697-4e95-91cb-18d490d4b19f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.342539 4606 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20bd9f99-2697-4e95-91cb-18d490d4b19f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.342569 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj762\" (UniqueName: \"kubernetes.io/projected/e23f5510-b5f6-42a1-b751-caa502fba254-kube-api-access-jj762\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.342579 4606 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20bd9f99-2697-4e95-91cb-18d490d4b19f-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.342611 4606 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/20bd9f99-2697-4e95-91cb-18d490d4b19f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.374015 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20bd9f99-2697-4e95-91cb-18d490d4b19f-kube-api-access-kk9tc" (OuterVolumeSpecName: "kube-api-access-kk9tc") pod "20bd9f99-2697-4e95-91cb-18d490d4b19f" (UID: "20bd9f99-2697-4e95-91cb-18d490d4b19f"). InnerVolumeSpecName "kube-api-access-kk9tc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.444968 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kk9tc\" (UniqueName: \"kubernetes.io/projected/20bd9f99-2697-4e95-91cb-18d490d4b19f-kube-api-access-kk9tc\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.468388 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e23f5510-b5f6-42a1-b751-caa502fba254-scripts" (OuterVolumeSpecName: "scripts") pod "e23f5510-b5f6-42a1-b751-caa502fba254" (UID: "e23f5510-b5f6-42a1-b751-caa502fba254"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.512324 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerStarted","Data":"c0aff6847a831032f0ede7590ffa543c8a1f938be08add052c2862cc2b6c92af"} Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.548468 4606 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e23f5510-b5f6-42a1-b751-caa502fba254-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.641263 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54675cc74f-8dftn" event={"ID":"0e50cc4c-a763-4fd7-b227-2f1afbfcaaea","Type":"ContainerDied","Data":"890d5da34653f4e61a71e9dd04a56cdc71bf870f9fa6c703915e380c1fefe4b4"} Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.641311 4606 scope.go:117] "RemoveContainer" containerID="1096d81101b53aee4e2489b7c87c329beaacc44735756410914bf61b364938a4" Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.641730 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-54675cc74f-8dftn" Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.705270 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c747544b-d43a-46bf-8b8a-dfab974a8788-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c747544b-d43a-46bf-8b8a-dfab974a8788" (UID: "c747544b-d43a-46bf-8b8a-dfab974a8788"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.705531 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b7bf8f6df-4w647" event={"ID":"e23f5510-b5f6-42a1-b751-caa502fba254","Type":"ContainerDied","Data":"7b16d14a873d06bb4dae1a85b68767f50b5af2ec6f2fa642afd557d005bebcfc"} Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.705688 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7b7bf8f6df-4w647" Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.736334 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-d4bdcdf98-bmdhd"] Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.762141 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"20bd9f99-2697-4e95-91cb-18d490d4b19f","Type":"ContainerDied","Data":"8bda784d3c91b6499186b6a571dd90ce285b42712bdbb3a98f1c444f40416d2d"} Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.764101 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.807328 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-5gbjl" event={"ID":"c747544b-d43a-46bf-8b8a-dfab974a8788","Type":"ContainerDied","Data":"0be1338f38a201c3a0304fcb6d25b7777711a7fa6fb1ac84d095218e5474f2fc"} Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.810690 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-5gbjl" Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.837753 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20bd9f99-2697-4e95-91cb-18d490d4b19f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20bd9f99-2697-4e95-91cb-18d490d4b19f" (UID: "20bd9f99-2697-4e95-91cb-18d490d4b19f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.852359 4606 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20bd9f99-2697-4e95-91cb-18d490d4b19f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.852384 4606 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c747544b-d43a-46bf-8b8a-dfab974a8788-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:39 crc kubenswrapper[4606]: I0123 17:10:39.879070 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6ffcd54777-hkxkz"] Jan 23 17:10:40 crc kubenswrapper[4606]: E0123 17:10:40.172242 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="a5feb913-77c6-425c-9018-de42010da64e" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.191912 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c747544b-d43a-46bf-8b8a-dfab974a8788-config" (OuterVolumeSpecName: "config") pod "c747544b-d43a-46bf-8b8a-dfab974a8788" (UID: "c747544b-d43a-46bf-8b8a-dfab974a8788"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.196202 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c747544b-d43a-46bf-8b8a-dfab974a8788-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c747544b-d43a-46bf-8b8a-dfab974a8788" (UID: "c747544b-d43a-46bf-8b8a-dfab974a8788"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.223812 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e23f5510-b5f6-42a1-b751-caa502fba254-config-data" (OuterVolumeSpecName: "config-data") pod "e23f5510-b5f6-42a1-b751-caa502fba254" (UID: "e23f5510-b5f6-42a1-b751-caa502fba254"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.252155 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e50cc4c-a763-4fd7-b227-2f1afbfcaaea" (UID: "0e50cc4c-a763-4fd7-b227-2f1afbfcaaea"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.276801 4606 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.276833 4606 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c747544b-d43a-46bf-8b8a-dfab974a8788-config\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.276842 4606 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c747544b-d43a-46bf-8b8a-dfab974a8788-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.276851 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e23f5510-b5f6-42a1-b751-caa502fba254-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.338953 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c747544b-d43a-46bf-8b8a-dfab974a8788-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c747544b-d43a-46bf-8b8a-dfab974a8788" (UID: "c747544b-d43a-46bf-8b8a-dfab974a8788"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.339956 4606 scope.go:117] "RemoveContainer" containerID="9ef667d4c1e72c38d3bcdf6cd6e1616b6b782e1b711e8b6ee1035a1f64e4da0c" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.379308 4606 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c747544b-d43a-46bf-8b8a-dfab974a8788-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.406810 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0e50cc4c-a763-4fd7-b227-2f1afbfcaaea" (UID: "0e50cc4c-a763-4fd7-b227-2f1afbfcaaea"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.442098 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c747544b-d43a-46bf-8b8a-dfab974a8788-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c747544b-d43a-46bf-8b8a-dfab974a8788" (UID: "c747544b-d43a-46bf-8b8a-dfab974a8788"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.442202 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0e50cc4c-a763-4fd7-b227-2f1afbfcaaea" (UID: "0e50cc4c-a763-4fd7-b227-2f1afbfcaaea"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.468280 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7b7bf8f6df-4w647"] Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.468314 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7b7bf8f6df-4w647"] Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.469575 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "0e50cc4c-a763-4fd7-b227-2f1afbfcaaea" (UID: "0e50cc4c-a763-4fd7-b227-2f1afbfcaaea"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.471735 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-config" (OuterVolumeSpecName: "config") pod "0e50cc4c-a763-4fd7-b227-2f1afbfcaaea" (UID: "0e50cc4c-a763-4fd7-b227-2f1afbfcaaea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.483992 4606 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.484030 4606 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.484040 4606 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-config\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.484050 4606 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.484061 4606 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c747544b-d43a-46bf-8b8a-dfab974a8788-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.515778 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20bd9f99-2697-4e95-91cb-18d490d4b19f-config-data" (OuterVolumeSpecName: "config-data") pod "20bd9f99-2697-4e95-91cb-18d490d4b19f" (UID: "20bd9f99-2697-4e95-91cb-18d490d4b19f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.587857 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20bd9f99-2697-4e95-91cb-18d490d4b19f-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.620085 4606 scope.go:117] "RemoveContainer" containerID="a4544f2d7c715ef9f85ecb2a81fd369d6138639b3297403e2e4df3d4e4b1471a" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.784919 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-54675cc74f-8dftn"] Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.845725 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-54675cc74f-8dftn"] Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.858427 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.884626 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.884791 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5ccb8fb66b-52f57" event={"ID":"efd7225f-f2b9-4e1c-81ae-eeb9617963a2","Type":"ContainerStarted","Data":"6bd8dd0b748dd36f08cb5fc7f21add6692dafc90b7ae9d48acc960019f456056"} Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.884812 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5ccb8fb66b-52f57" event={"ID":"efd7225f-f2b9-4e1c-81ae-eeb9617963a2","Type":"ContainerStarted","Data":"194ee56a7f54f34985fcbcc2f78dfe16593eea656948a6cc4c9d264cdfff4568"} Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.893637 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d4bdcdf98-bmdhd" event={"ID":"f71cd544-6079-402d-a59f-0a3d4fb1d1f7","Type":"ContainerStarted","Data":"dab217eff71184bc637ecb48a7bf68597ab805b9161e3a561edbd5485e79da42"} Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.898909 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-5gbjl"] Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.919123 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6ffcd54777-hkxkz" event={"ID":"7a926781-d4e8-4d2a-b8fd-72d4d6273927","Type":"ContainerStarted","Data":"644ded21857a88b865c35f00bb6b84b1b36c2dedd4990f457aa314f8294cb66e"} Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.929380 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-5gbjl"] Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.950816 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 17:10:40 crc kubenswrapper[4606]: E0123 17:10:40.951145 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e50cc4c-a763-4fd7-b227-2f1afbfcaaea" containerName="neutron-api" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.951162 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e50cc4c-a763-4fd7-b227-2f1afbfcaaea" containerName="neutron-api" Jan 23 17:10:40 crc kubenswrapper[4606]: E0123 17:10:40.951173 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e50cc4c-a763-4fd7-b227-2f1afbfcaaea" containerName="neutron-httpd" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.951181 4606 
state_mem.go:107] "Deleted CPUSet assignment" podUID="0e50cc4c-a763-4fd7-b227-2f1afbfcaaea" containerName="neutron-httpd" Jan 23 17:10:40 crc kubenswrapper[4606]: E0123 17:10:40.951189 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c747544b-d43a-46bf-8b8a-dfab974a8788" containerName="init" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.951195 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="c747544b-d43a-46bf-8b8a-dfab974a8788" containerName="init" Jan 23 17:10:40 crc kubenswrapper[4606]: E0123 17:10:40.951213 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e23f5510-b5f6-42a1-b751-caa502fba254" containerName="horizon-log" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.951219 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="e23f5510-b5f6-42a1-b751-caa502fba254" containerName="horizon-log" Jan 23 17:10:40 crc kubenswrapper[4606]: E0123 17:10:40.951226 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c747544b-d43a-46bf-8b8a-dfab974a8788" containerName="dnsmasq-dns" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.951232 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="c747544b-d43a-46bf-8b8a-dfab974a8788" containerName="dnsmasq-dns" Jan 23 17:10:40 crc kubenswrapper[4606]: E0123 17:10:40.951243 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20bd9f99-2697-4e95-91cb-18d490d4b19f" containerName="probe" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.951248 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="20bd9f99-2697-4e95-91cb-18d490d4b19f" containerName="probe" Jan 23 17:10:40 crc kubenswrapper[4606]: E0123 17:10:40.951266 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e23f5510-b5f6-42a1-b751-caa502fba254" containerName="horizon" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.951272 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="e23f5510-b5f6-42a1-b751-caa502fba254" containerName="horizon" Jan 23 17:10:40 crc kubenswrapper[4606]: E0123 17:10:40.951280 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20bd9f99-2697-4e95-91cb-18d490d4b19f" containerName="cinder-scheduler" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.951286 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="20bd9f99-2697-4e95-91cb-18d490d4b19f" containerName="cinder-scheduler" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.951446 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="20bd9f99-2697-4e95-91cb-18d490d4b19f" containerName="cinder-scheduler" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.951462 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e50cc4c-a763-4fd7-b227-2f1afbfcaaea" containerName="neutron-api" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.951474 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="20bd9f99-2697-4e95-91cb-18d490d4b19f" containerName="probe" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.951488 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e50cc4c-a763-4fd7-b227-2f1afbfcaaea" containerName="neutron-httpd" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.951504 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="e23f5510-b5f6-42a1-b751-caa502fba254" containerName="horizon-log" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.951521 4606 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c747544b-d43a-46bf-8b8a-dfab974a8788" containerName="dnsmasq-dns" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.951531 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="e23f5510-b5f6-42a1-b751-caa502fba254" containerName="horizon" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.952408 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.954770 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.958772 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5feb913-77c6-425c-9018-de42010da64e","Type":"ContainerStarted","Data":"4082ecc856d5c53335c3987b6647ead17b18604083fc22cea8801c7e904ba5a3"} Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.958941 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a5feb913-77c6-425c-9018-de42010da64e" containerName="ceilometer-notification-agent" containerID="cri-o://854209c6497f5a4fc7f4f7af90cc7a635a38c412d8d463aa2920e40c72f8e0d7" gracePeriod=30 Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.959042 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a5feb913-77c6-425c-9018-de42010da64e" containerName="sg-core" containerID="cri-o://44eeade27a61086a37897aeb567f16e935c017b5f994ba41bdfa600bc5105e20" gracePeriod=30 Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.959112 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.959051 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a5feb913-77c6-425c-9018-de42010da64e" containerName="proxy-httpd" containerID="cri-o://4082ecc856d5c53335c3987b6647ead17b18604083fc22cea8801c7e904ba5a3" gracePeriod=30 Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.984332 4606 scope.go:117] "RemoveContainer" containerID="73dd298dd581b5c8c71be4bcd7a660571fdcf22b16ce794580fc0b943c63328a" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.986170 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5ccb8fb66b-52f57" podStartSLOduration=4.072809663 podStartE2EDuration="15.986154594s" podCreationTimestamp="2026-01-23 17:10:25 +0000 UTC" firstStartedPulling="2026-01-23 17:10:26.700905456 +0000 UTC m=+1060.701097862" lastFinishedPulling="2026-01-23 17:10:38.614250377 +0000 UTC m=+1072.614442793" observedRunningTime="2026-01-23 17:10:40.918980832 +0000 UTC m=+1074.919173248" watchObservedRunningTime="2026-01-23 17:10:40.986154594 +0000 UTC m=+1074.986347000" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.988925 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.998961 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a43fb635-59de-4576-bdf4-9eacebc80a85-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a43fb635-59de-4576-bdf4-9eacebc80a85\") " pod="openstack/cinder-scheduler-0" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.999001 4606 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a43fb635-59de-4576-bdf4-9eacebc80a85-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a43fb635-59de-4576-bdf4-9eacebc80a85\") " pod="openstack/cinder-scheduler-0" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.999039 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a43fb635-59de-4576-bdf4-9eacebc80a85-config-data\") pod \"cinder-scheduler-0\" (UID: \"a43fb635-59de-4576-bdf4-9eacebc80a85\") " pod="openstack/cinder-scheduler-0" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.999059 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a43fb635-59de-4576-bdf4-9eacebc80a85-scripts\") pod \"cinder-scheduler-0\" (UID: \"a43fb635-59de-4576-bdf4-9eacebc80a85\") " pod="openstack/cinder-scheduler-0" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.999128 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a43fb635-59de-4576-bdf4-9eacebc80a85-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a43fb635-59de-4576-bdf4-9eacebc80a85\") " pod="openstack/cinder-scheduler-0" Jan 23 17:10:40 crc kubenswrapper[4606]: I0123 17:10:40.999195 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf8w6\" (UniqueName: \"kubernetes.io/projected/a43fb635-59de-4576-bdf4-9eacebc80a85-kube-api-access-qf8w6\") pod \"cinder-scheduler-0\" (UID: \"a43fb635-59de-4576-bdf4-9eacebc80a85\") " pod="openstack/cinder-scheduler-0" Jan 23 17:10:41 crc kubenswrapper[4606]: I0123 17:10:41.003109 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6d46dd5979-gmzrf" event={"ID":"40720f96-b314-4ba9-8f88-b8b4c33ccb5e","Type":"ContainerStarted","Data":"5abe20d5a0788fafb8b2d321996975706dae3d6ae3a0c2be2cd644cc30216659"} Jan 23 17:10:41 crc kubenswrapper[4606]: I0123 17:10:41.045758 4606 scope.go:117] "RemoveContainer" containerID="08cd2289d6d1fa4c30ac5a838a884f3e8310bb5f8ac14a4ba06c1d2a42e2b029" Jan 23 17:10:41 crc kubenswrapper[4606]: I0123 17:10:41.100759 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a43fb635-59de-4576-bdf4-9eacebc80a85-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a43fb635-59de-4576-bdf4-9eacebc80a85\") " pod="openstack/cinder-scheduler-0" Jan 23 17:10:41 crc kubenswrapper[4606]: I0123 17:10:41.101069 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf8w6\" (UniqueName: \"kubernetes.io/projected/a43fb635-59de-4576-bdf4-9eacebc80a85-kube-api-access-qf8w6\") pod \"cinder-scheduler-0\" (UID: \"a43fb635-59de-4576-bdf4-9eacebc80a85\") " pod="openstack/cinder-scheduler-0" Jan 23 17:10:41 crc kubenswrapper[4606]: I0123 17:10:41.101107 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a43fb635-59de-4576-bdf4-9eacebc80a85-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a43fb635-59de-4576-bdf4-9eacebc80a85\") " pod="openstack/cinder-scheduler-0" Jan 23 17:10:41 crc kubenswrapper[4606]: I0123 17:10:41.101123 
4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a43fb635-59de-4576-bdf4-9eacebc80a85-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a43fb635-59de-4576-bdf4-9eacebc80a85\") " pod="openstack/cinder-scheduler-0" Jan 23 17:10:41 crc kubenswrapper[4606]: I0123 17:10:41.101151 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a43fb635-59de-4576-bdf4-9eacebc80a85-config-data\") pod \"cinder-scheduler-0\" (UID: \"a43fb635-59de-4576-bdf4-9eacebc80a85\") " pod="openstack/cinder-scheduler-0" Jan 23 17:10:41 crc kubenswrapper[4606]: I0123 17:10:41.101169 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a43fb635-59de-4576-bdf4-9eacebc80a85-scripts\") pod \"cinder-scheduler-0\" (UID: \"a43fb635-59de-4576-bdf4-9eacebc80a85\") " pod="openstack/cinder-scheduler-0" Jan 23 17:10:41 crc kubenswrapper[4606]: I0123 17:10:41.102787 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a43fb635-59de-4576-bdf4-9eacebc80a85-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a43fb635-59de-4576-bdf4-9eacebc80a85\") " pod="openstack/cinder-scheduler-0" Jan 23 17:10:41 crc kubenswrapper[4606]: I0123 17:10:41.154185 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a43fb635-59de-4576-bdf4-9eacebc80a85-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a43fb635-59de-4576-bdf4-9eacebc80a85\") " pod="openstack/cinder-scheduler-0" Jan 23 17:10:41 crc kubenswrapper[4606]: I0123 17:10:41.158031 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a43fb635-59de-4576-bdf4-9eacebc80a85-scripts\") pod \"cinder-scheduler-0\" (UID: \"a43fb635-59de-4576-bdf4-9eacebc80a85\") " pod="openstack/cinder-scheduler-0" Jan 23 17:10:41 crc kubenswrapper[4606]: I0123 17:10:41.158641 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a43fb635-59de-4576-bdf4-9eacebc80a85-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a43fb635-59de-4576-bdf4-9eacebc80a85\") " pod="openstack/cinder-scheduler-0" Jan 23 17:10:41 crc kubenswrapper[4606]: I0123 17:10:41.161942 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a43fb635-59de-4576-bdf4-9eacebc80a85-config-data\") pod \"cinder-scheduler-0\" (UID: \"a43fb635-59de-4576-bdf4-9eacebc80a85\") " pod="openstack/cinder-scheduler-0" Jan 23 17:10:41 crc kubenswrapper[4606]: I0123 17:10:41.172742 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf8w6\" (UniqueName: \"kubernetes.io/projected/a43fb635-59de-4576-bdf4-9eacebc80a85-kube-api-access-qf8w6\") pod \"cinder-scheduler-0\" (UID: \"a43fb635-59de-4576-bdf4-9eacebc80a85\") " pod="openstack/cinder-scheduler-0" Jan 23 17:10:41 crc kubenswrapper[4606]: I0123 17:10:41.272920 4606 scope.go:117] "RemoveContainer" containerID="543bfcaac3c348f86359b6cadacb4e251cdce8083b2bf54917046ece0a420471" Jan 23 17:10:41 crc kubenswrapper[4606]: I0123 17:10:41.281016 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 17:10:41 crc kubenswrapper[4606]: I0123 17:10:41.401945 4606 scope.go:117] "RemoveContainer" containerID="05af7e4c5a62803403e4c83e43f496365e5314ccd0824e748cfb6311601bc723" Jan 23 17:10:41 crc kubenswrapper[4606]: I0123 17:10:41.538267 4606 scope.go:117] "RemoveContainer" containerID="b7fbbc090f1c121453cf9a1eed2f5beb80e10ed6d5775dcc94167040f94b078d" Jan 23 17:10:42 crc kubenswrapper[4606]: I0123 17:10:42.012486 4606 generic.go:334] "Generic (PLEG): container finished" podID="a5feb913-77c6-425c-9018-de42010da64e" containerID="4082ecc856d5c53335c3987b6647ead17b18604083fc22cea8801c7e904ba5a3" exitCode=0 Jan 23 17:10:42 crc kubenswrapper[4606]: I0123 17:10:42.013155 4606 generic.go:334] "Generic (PLEG): container finished" podID="a5feb913-77c6-425c-9018-de42010da64e" containerID="44eeade27a61086a37897aeb567f16e935c017b5f994ba41bdfa600bc5105e20" exitCode=2 Jan 23 17:10:42 crc kubenswrapper[4606]: I0123 17:10:42.012525 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5feb913-77c6-425c-9018-de42010da64e","Type":"ContainerDied","Data":"4082ecc856d5c53335c3987b6647ead17b18604083fc22cea8801c7e904ba5a3"} Jan 23 17:10:42 crc kubenswrapper[4606]: I0123 17:10:42.013233 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5feb913-77c6-425c-9018-de42010da64e","Type":"ContainerDied","Data":"44eeade27a61086a37897aeb567f16e935c017b5f994ba41bdfa600bc5105e20"} Jan 23 17:10:42 crc kubenswrapper[4606]: I0123 17:10:42.023526 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6d46dd5979-gmzrf" event={"ID":"40720f96-b314-4ba9-8f88-b8b4c33ccb5e","Type":"ContainerStarted","Data":"1b08e1b227657a791a5212ba4ca0159f80e9b9d8e2af4b8b033a688ad4883cde"} Jan 23 17:10:42 crc kubenswrapper[4606]: I0123 17:10:42.028261 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d4bdcdf98-bmdhd" event={"ID":"f71cd544-6079-402d-a59f-0a3d4fb1d1f7","Type":"ContainerStarted","Data":"3be67b5c0c9785fbe13fff6207329af67a53d1f52eb35535d8192063e10cea62"} Jan 23 17:10:42 crc kubenswrapper[4606]: I0123 17:10:42.028312 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d4bdcdf98-bmdhd" event={"ID":"f71cd544-6079-402d-a59f-0a3d4fb1d1f7","Type":"ContainerStarted","Data":"d20c327a3b9a8d3a237a066831b3868f2be64086d2e5754267e0a85df85422d3"} Jan 23 17:10:42 crc kubenswrapper[4606]: I0123 17:10:42.028353 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-d4bdcdf98-bmdhd" Jan 23 17:10:42 crc kubenswrapper[4606]: I0123 17:10:42.028440 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-d4bdcdf98-bmdhd" Jan 23 17:10:42 crc kubenswrapper[4606]: I0123 17:10:42.030781 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6ffcd54777-hkxkz" event={"ID":"7a926781-d4e8-4d2a-b8fd-72d4d6273927","Type":"ContainerStarted","Data":"1e6d2f0873f546c45dd11ae22d063109abfa14d1883efffe6886ba6a9a00d77e"} Jan 23 17:10:42 crc kubenswrapper[4606]: I0123 17:10:42.030823 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6ffcd54777-hkxkz" event={"ID":"7a926781-d4e8-4d2a-b8fd-72d4d6273927","Type":"ContainerStarted","Data":"c083dd071d5c19af175a312a74b4401f668ebdb220b72ee49ca9efdaed95bc9b"} Jan 23 17:10:42 crc kubenswrapper[4606]: I0123 17:10:42.031643 4606 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/neutron-6ffcd54777-hkxkz" Jan 23 17:10:42 crc kubenswrapper[4606]: I0123 17:10:42.059065 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6d46dd5979-gmzrf" podStartSLOduration=5.573061987 podStartE2EDuration="17.059030684s" podCreationTimestamp="2026-01-23 17:10:25 +0000 UTC" firstStartedPulling="2026-01-23 17:10:27.122091863 +0000 UTC m=+1061.122284269" lastFinishedPulling="2026-01-23 17:10:38.60806056 +0000 UTC m=+1072.608252966" observedRunningTime="2026-01-23 17:10:42.058345157 +0000 UTC m=+1076.058537583" watchObservedRunningTime="2026-01-23 17:10:42.059030684 +0000 UTC m=+1076.059223090" Jan 23 17:10:42 crc kubenswrapper[4606]: I0123 17:10:42.142691 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6ffcd54777-hkxkz" podStartSLOduration=12.142671971 podStartE2EDuration="12.142671971s" podCreationTimestamp="2026-01-23 17:10:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:10:42.093965484 +0000 UTC m=+1076.094157910" watchObservedRunningTime="2026-01-23 17:10:42.142671971 +0000 UTC m=+1076.142864377" Jan 23 17:10:42 crc kubenswrapper[4606]: I0123 17:10:42.165889 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-d4bdcdf98-bmdhd" podStartSLOduration=13.165864204 podStartE2EDuration="13.165864204s" podCreationTimestamp="2026-01-23 17:10:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:10:42.136068704 +0000 UTC m=+1076.136261110" watchObservedRunningTime="2026-01-23 17:10:42.165864204 +0000 UTC m=+1076.166056610" Jan 23 17:10:42 crc kubenswrapper[4606]: I0123 17:10:42.209530 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 17:10:42 crc kubenswrapper[4606]: I0123 17:10:42.377065 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e50cc4c-a763-4fd7-b227-2f1afbfcaaea" path="/var/lib/kubelet/pods/0e50cc4c-a763-4fd7-b227-2f1afbfcaaea/volumes" Jan 23 17:10:42 crc kubenswrapper[4606]: I0123 17:10:42.378263 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20bd9f99-2697-4e95-91cb-18d490d4b19f" path="/var/lib/kubelet/pods/20bd9f99-2697-4e95-91cb-18d490d4b19f/volumes" Jan 23 17:10:42 crc kubenswrapper[4606]: I0123 17:10:42.379206 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c747544b-d43a-46bf-8b8a-dfab974a8788" path="/var/lib/kubelet/pods/c747544b-d43a-46bf-8b8a-dfab974a8788/volumes" Jan 23 17:10:42 crc kubenswrapper[4606]: I0123 17:10:42.380437 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e23f5510-b5f6-42a1-b751-caa502fba254" path="/var/lib/kubelet/pods/e23f5510-b5f6-42a1-b751-caa502fba254/volumes" Jan 23 17:10:43 crc kubenswrapper[4606]: I0123 17:10:43.057881 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a43fb635-59de-4576-bdf4-9eacebc80a85","Type":"ContainerStarted","Data":"96d8eeb1a3a1a7d5a6a552c155871ac35c37dde6813ff08cb59de612c0a9edcd"} Jan 23 17:10:43 crc kubenswrapper[4606]: I0123 17:10:43.439773 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b9775877b-mcc82" podUID="1dd13584-a1b9-4891-85a7-01afd519f746" containerName="horizon" probeResult="failure" 
output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 17:10:43 crc kubenswrapper[4606]: I0123 17:10:43.439858 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5b9775877b-mcc82" Jan 23 17:10:43 crc kubenswrapper[4606]: I0123 17:10:43.440570 4606 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"3acaf29972f41a7b4f105c72ea9d73e660deeda3e9c45ee06e8af4bfd1e970f7"} pod="openstack/horizon-5b9775877b-mcc82" containerMessage="Container horizon failed startup probe, will be restarted" Jan 23 17:10:43 crc kubenswrapper[4606]: I0123 17:10:43.440624 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5b9775877b-mcc82" podUID="1dd13584-a1b9-4891-85a7-01afd519f746" containerName="horizon" containerID="cri-o://3acaf29972f41a7b4f105c72ea9d73e660deeda3e9c45ee06e8af4bfd1e970f7" gracePeriod=30 Jan 23 17:10:43 crc kubenswrapper[4606]: I0123 17:10:43.614935 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-57977896f6-qrk2s" podUID="59fc19de-8c69-43e4-bf0d-1cb8ea9a9120" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 17:10:43 crc kubenswrapper[4606]: I0123 17:10:43.615027 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-57977896f6-qrk2s" Jan 23 17:10:43 crc kubenswrapper[4606]: I0123 17:10:43.615894 4606 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"ee2e80c065ffe14a9551fa689c4481e4792330ef9d6af9801521b59f7173bb20"} pod="openstack/horizon-57977896f6-qrk2s" containerMessage="Container horizon failed startup probe, will be restarted" Jan 23 17:10:43 crc kubenswrapper[4606]: I0123 17:10:43.615966 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-57977896f6-qrk2s" podUID="59fc19de-8c69-43e4-bf0d-1cb8ea9a9120" containerName="horizon" containerID="cri-o://ee2e80c065ffe14a9551fa689c4481e4792330ef9d6af9801521b59f7173bb20" gracePeriod=30 Jan 23 17:10:43 crc kubenswrapper[4606]: I0123 17:10:43.892793 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="239b6b42-d315-47cf-8a72-e12639318fe7" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.160:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:10:44 crc kubenswrapper[4606]: I0123 17:10:44.080164 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a43fb635-59de-4576-bdf4-9eacebc80a85","Type":"ContainerStarted","Data":"720145557e6448bd34ede78c5a1d1e667d284ae8ff9b514bfd6b26110e488b7f"} Jan 23 17:10:44 crc kubenswrapper[4606]: I0123 17:10:44.084878 4606 generic.go:334] "Generic (PLEG): container finished" podID="a5feb913-77c6-425c-9018-de42010da64e" containerID="854209c6497f5a4fc7f4f7af90cc7a635a38c412d8d463aa2920e40c72f8e0d7" exitCode=0 Jan 23 17:10:44 crc kubenswrapper[4606]: I0123 17:10:44.084955 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"a5feb913-77c6-425c-9018-de42010da64e","Type":"ContainerDied","Data":"854209c6497f5a4fc7f4f7af90cc7a635a38c412d8d463aa2920e40c72f8e0d7"} Jan 23 17:10:44 crc kubenswrapper[4606]: I0123 17:10:44.530074 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 17:10:44 crc kubenswrapper[4606]: I0123 17:10:44.596261 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5feb913-77c6-425c-9018-de42010da64e-combined-ca-bundle\") pod \"a5feb913-77c6-425c-9018-de42010da64e\" (UID: \"a5feb913-77c6-425c-9018-de42010da64e\") " Jan 23 17:10:44 crc kubenswrapper[4606]: I0123 17:10:44.596328 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a5feb913-77c6-425c-9018-de42010da64e-sg-core-conf-yaml\") pod \"a5feb913-77c6-425c-9018-de42010da64e\" (UID: \"a5feb913-77c6-425c-9018-de42010da64e\") " Jan 23 17:10:44 crc kubenswrapper[4606]: I0123 17:10:44.596409 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5feb913-77c6-425c-9018-de42010da64e-config-data\") pod \"a5feb913-77c6-425c-9018-de42010da64e\" (UID: \"a5feb913-77c6-425c-9018-de42010da64e\") " Jan 23 17:10:44 crc kubenswrapper[4606]: I0123 17:10:44.596439 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2xhr\" (UniqueName: \"kubernetes.io/projected/a5feb913-77c6-425c-9018-de42010da64e-kube-api-access-m2xhr\") pod \"a5feb913-77c6-425c-9018-de42010da64e\" (UID: \"a5feb913-77c6-425c-9018-de42010da64e\") " Jan 23 17:10:44 crc kubenswrapper[4606]: I0123 17:10:44.596476 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5feb913-77c6-425c-9018-de42010da64e-log-httpd\") pod \"a5feb913-77c6-425c-9018-de42010da64e\" (UID: \"a5feb913-77c6-425c-9018-de42010da64e\") " Jan 23 17:10:44 crc kubenswrapper[4606]: I0123 17:10:44.596503 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5feb913-77c6-425c-9018-de42010da64e-run-httpd\") pod \"a5feb913-77c6-425c-9018-de42010da64e\" (UID: \"a5feb913-77c6-425c-9018-de42010da64e\") " Jan 23 17:10:44 crc kubenswrapper[4606]: I0123 17:10:44.596539 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5feb913-77c6-425c-9018-de42010da64e-scripts\") pod \"a5feb913-77c6-425c-9018-de42010da64e\" (UID: \"a5feb913-77c6-425c-9018-de42010da64e\") " Jan 23 17:10:44 crc kubenswrapper[4606]: I0123 17:10:44.600441 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5feb913-77c6-425c-9018-de42010da64e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a5feb913-77c6-425c-9018-de42010da64e" (UID: "a5feb913-77c6-425c-9018-de42010da64e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:10:44 crc kubenswrapper[4606]: I0123 17:10:44.602886 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5feb913-77c6-425c-9018-de42010da64e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a5feb913-77c6-425c-9018-de42010da64e" (UID: "a5feb913-77c6-425c-9018-de42010da64e"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:10:44 crc kubenswrapper[4606]: I0123 17:10:44.607136 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5feb913-77c6-425c-9018-de42010da64e-kube-api-access-m2xhr" (OuterVolumeSpecName: "kube-api-access-m2xhr") pod "a5feb913-77c6-425c-9018-de42010da64e" (UID: "a5feb913-77c6-425c-9018-de42010da64e"). InnerVolumeSpecName "kube-api-access-m2xhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:10:44 crc kubenswrapper[4606]: I0123 17:10:44.607285 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5feb913-77c6-425c-9018-de42010da64e-scripts" (OuterVolumeSpecName: "scripts") pod "a5feb913-77c6-425c-9018-de42010da64e" (UID: "a5feb913-77c6-425c-9018-de42010da64e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:10:44 crc kubenswrapper[4606]: I0123 17:10:44.668464 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5feb913-77c6-425c-9018-de42010da64e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a5feb913-77c6-425c-9018-de42010da64e" (UID: "a5feb913-77c6-425c-9018-de42010da64e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:10:44 crc kubenswrapper[4606]: I0123 17:10:44.703549 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2xhr\" (UniqueName: \"kubernetes.io/projected/a5feb913-77c6-425c-9018-de42010da64e-kube-api-access-m2xhr\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:44 crc kubenswrapper[4606]: I0123 17:10:44.703604 4606 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5feb913-77c6-425c-9018-de42010da64e-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:44 crc kubenswrapper[4606]: I0123 17:10:44.703617 4606 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5feb913-77c6-425c-9018-de42010da64e-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:44 crc kubenswrapper[4606]: I0123 17:10:44.703628 4606 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5feb913-77c6-425c-9018-de42010da64e-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:44 crc kubenswrapper[4606]: I0123 17:10:44.703639 4606 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a5feb913-77c6-425c-9018-de42010da64e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:44 crc kubenswrapper[4606]: I0123 17:10:44.745793 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5feb913-77c6-425c-9018-de42010da64e-config-data" (OuterVolumeSpecName: "config-data") pod "a5feb913-77c6-425c-9018-de42010da64e" (UID: "a5feb913-77c6-425c-9018-de42010da64e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:10:44 crc kubenswrapper[4606]: I0123 17:10:44.805715 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5feb913-77c6-425c-9018-de42010da64e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a5feb913-77c6-425c-9018-de42010da64e" (UID: "a5feb913-77c6-425c-9018-de42010da64e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:10:44 crc kubenswrapper[4606]: I0123 17:10:44.805855 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5feb913-77c6-425c-9018-de42010da64e-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:44 crc kubenswrapper[4606]: I0123 17:10:44.907963 4606 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5feb913-77c6-425c-9018-de42010da64e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.095648 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a43fb635-59de-4576-bdf4-9eacebc80a85","Type":"ContainerStarted","Data":"ef7f6289ec4739956da673f58462d580bc931240a3a7e40f43d733f6033d6003"} Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.099852 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5feb913-77c6-425c-9018-de42010da64e","Type":"ContainerDied","Data":"eba33893efd034eb9f3b29b199abb8931727696f2ca306c287781674b7308b49"} Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.099906 4606 scope.go:117] "RemoveContainer" containerID="4082ecc856d5c53335c3987b6647ead17b18604083fc22cea8801c7e904ba5a3" Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.100110 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.130197 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.130180302 podStartE2EDuration="5.130180302s" podCreationTimestamp="2026-01-23 17:10:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:10:45.12771946 +0000 UTC m=+1079.127911876" watchObservedRunningTime="2026-01-23 17:10:45.130180302 +0000 UTC m=+1079.130372708" Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.174108 4606 scope.go:117] "RemoveContainer" containerID="44eeade27a61086a37897aeb567f16e935c017b5f994ba41bdfa600bc5105e20" Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.227262 4606 scope.go:117] "RemoveContainer" containerID="854209c6497f5a4fc7f4f7af90cc7a635a38c412d8d463aa2920e40c72f8e0d7" Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.244792 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.297892 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.302887 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:10:45 crc kubenswrapper[4606]: E0123 17:10:45.303369 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5feb913-77c6-425c-9018-de42010da64e" containerName="sg-core" Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.303392 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5feb913-77c6-425c-9018-de42010da64e" containerName="sg-core" Jan 23 17:10:45 crc kubenswrapper[4606]: E0123 17:10:45.303420 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5feb913-77c6-425c-9018-de42010da64e" containerName="proxy-httpd" Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.303438 4606 
state_mem.go:107] "Deleted CPUSet assignment" podUID="a5feb913-77c6-425c-9018-de42010da64e" containerName="proxy-httpd" Jan 23 17:10:45 crc kubenswrapper[4606]: E0123 17:10:45.303460 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5feb913-77c6-425c-9018-de42010da64e" containerName="ceilometer-notification-agent" Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.303467 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5feb913-77c6-425c-9018-de42010da64e" containerName="ceilometer-notification-agent" Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.303682 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5feb913-77c6-425c-9018-de42010da64e" containerName="proxy-httpd" Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.303712 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5feb913-77c6-425c-9018-de42010da64e" containerName="ceilometer-notification-agent" Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.303731 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5feb913-77c6-425c-9018-de42010da64e" containerName="sg-core" Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.313163 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.317674 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.322119 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.345358 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.424613 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/81ff9b5e-713c-48e9-901a-23d3f0236653-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"81ff9b5e-713c-48e9-901a-23d3f0236653\") " pod="openstack/ceilometer-0" Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.424675 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/81ff9b5e-713c-48e9-901a-23d3f0236653-log-httpd\") pod \"ceilometer-0\" (UID: \"81ff9b5e-713c-48e9-901a-23d3f0236653\") " pod="openstack/ceilometer-0" Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.424700 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81ff9b5e-713c-48e9-901a-23d3f0236653-scripts\") pod \"ceilometer-0\" (UID: \"81ff9b5e-713c-48e9-901a-23d3f0236653\") " pod="openstack/ceilometer-0" Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.424765 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ff9b5e-713c-48e9-901a-23d3f0236653-config-data\") pod \"ceilometer-0\" (UID: \"81ff9b5e-713c-48e9-901a-23d3f0236653\") " pod="openstack/ceilometer-0" Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.424824 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/81ff9b5e-713c-48e9-901a-23d3f0236653-run-httpd\") pod 
\"ceilometer-0\" (UID: \"81ff9b5e-713c-48e9-901a-23d3f0236653\") " pod="openstack/ceilometer-0" Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.424873 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ff9b5e-713c-48e9-901a-23d3f0236653-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"81ff9b5e-713c-48e9-901a-23d3f0236653\") " pod="openstack/ceilometer-0" Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.425012 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnwhc\" (UniqueName: \"kubernetes.io/projected/81ff9b5e-713c-48e9-901a-23d3f0236653-kube-api-access-qnwhc\") pod \"ceilometer-0\" (UID: \"81ff9b5e-713c-48e9-901a-23d3f0236653\") " pod="openstack/ceilometer-0" Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.526475 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/81ff9b5e-713c-48e9-901a-23d3f0236653-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"81ff9b5e-713c-48e9-901a-23d3f0236653\") " pod="openstack/ceilometer-0" Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.526889 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/81ff9b5e-713c-48e9-901a-23d3f0236653-log-httpd\") pod \"ceilometer-0\" (UID: \"81ff9b5e-713c-48e9-901a-23d3f0236653\") " pod="openstack/ceilometer-0" Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.526924 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81ff9b5e-713c-48e9-901a-23d3f0236653-scripts\") pod \"ceilometer-0\" (UID: \"81ff9b5e-713c-48e9-901a-23d3f0236653\") " pod="openstack/ceilometer-0" Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.526982 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ff9b5e-713c-48e9-901a-23d3f0236653-config-data\") pod \"ceilometer-0\" (UID: \"81ff9b5e-713c-48e9-901a-23d3f0236653\") " pod="openstack/ceilometer-0" Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.527114 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/81ff9b5e-713c-48e9-901a-23d3f0236653-run-httpd\") pod \"ceilometer-0\" (UID: \"81ff9b5e-713c-48e9-901a-23d3f0236653\") " pod="openstack/ceilometer-0" Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.527173 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ff9b5e-713c-48e9-901a-23d3f0236653-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"81ff9b5e-713c-48e9-901a-23d3f0236653\") " pod="openstack/ceilometer-0" Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.527404 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnwhc\" (UniqueName: \"kubernetes.io/projected/81ff9b5e-713c-48e9-901a-23d3f0236653-kube-api-access-qnwhc\") pod \"ceilometer-0\" (UID: \"81ff9b5e-713c-48e9-901a-23d3f0236653\") " pod="openstack/ceilometer-0" Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.527403 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/81ff9b5e-713c-48e9-901a-23d3f0236653-log-httpd\") pod \"ceilometer-0\" (UID: \"81ff9b5e-713c-48e9-901a-23d3f0236653\") " pod="openstack/ceilometer-0" Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.527678 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/81ff9b5e-713c-48e9-901a-23d3f0236653-run-httpd\") pod \"ceilometer-0\" (UID: \"81ff9b5e-713c-48e9-901a-23d3f0236653\") " pod="openstack/ceilometer-0" Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.532857 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ff9b5e-713c-48e9-901a-23d3f0236653-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"81ff9b5e-713c-48e9-901a-23d3f0236653\") " pod="openstack/ceilometer-0" Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.534186 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81ff9b5e-713c-48e9-901a-23d3f0236653-scripts\") pod \"ceilometer-0\" (UID: \"81ff9b5e-713c-48e9-901a-23d3f0236653\") " pod="openstack/ceilometer-0" Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.534504 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ff9b5e-713c-48e9-901a-23d3f0236653-config-data\") pod \"ceilometer-0\" (UID: \"81ff9b5e-713c-48e9-901a-23d3f0236653\") " pod="openstack/ceilometer-0" Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.544954 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-876fbb8bd-6zlfb" Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.545892 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/81ff9b5e-713c-48e9-901a-23d3f0236653-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"81ff9b5e-713c-48e9-901a-23d3f0236653\") " pod="openstack/ceilometer-0" Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.554081 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-876fbb8bd-6zlfb" Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.570849 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnwhc\" (UniqueName: \"kubernetes.io/projected/81ff9b5e-713c-48e9-901a-23d3f0236653-kube-api-access-qnwhc\") pod \"ceilometer-0\" (UID: \"81ff9b5e-713c-48e9-901a-23d3f0236653\") " pod="openstack/ceilometer-0" Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.636426 4606 util.go:30] "No sandbox for pod can be found. 
Jan 23 17:10:45 crc kubenswrapper[4606]: I0123 17:10:45.636426 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 23 17:10:46 crc kubenswrapper[4606]: I0123 17:10:46.281957 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 23 17:10:46 crc kubenswrapper[4606]: I0123 17:10:46.309294 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 17:10:46 crc kubenswrapper[4606]: I0123 17:10:46.390057 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5feb913-77c6-425c-9018-de42010da64e" path="/var/lib/kubelet/pods/a5feb913-77c6-425c-9018-de42010da64e/volumes"
Jan 23 17:10:47 crc kubenswrapper[4606]: I0123 17:10:47.128036 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"81ff9b5e-713c-48e9-901a-23d3f0236653","Type":"ContainerStarted","Data":"211cf480863fb2776aeea5a43db2cc20521a7076d1b7744ddf95e364110918d6"}
Jan 23 17:10:48 crc kubenswrapper[4606]: I0123 17:10:48.139310 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"81ff9b5e-713c-48e9-901a-23d3f0236653","Type":"ContainerStarted","Data":"23d4bb3638a1b150d36a88d8223be919f00a173348795755da2311fb18c91d1e"}
Jan 23 17:10:49 crc kubenswrapper[4606]: I0123 17:10:49.149246 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"81ff9b5e-713c-48e9-901a-23d3f0236653","Type":"ContainerStarted","Data":"29630b6b717ac4bf6a388560951242a1276a5c3ecc7632820e6f9fc305e5481f"}
Jan 23 17:10:49 crc kubenswrapper[4606]: I0123 17:10:49.150297 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"81ff9b5e-713c-48e9-901a-23d3f0236653","Type":"ContainerStarted","Data":"99c675e306fde370fdbff23cbd107294ebda3eb930259189eeeeb7bfac643f8d"}
Jan 23 17:10:50 crc kubenswrapper[4606]: I0123 17:10:50.159887 4606 generic.go:334] "Generic (PLEG): container finished" podID="1dd13584-a1b9-4891-85a7-01afd519f746" containerID="3acaf29972f41a7b4f105c72ea9d73e660deeda3e9c45ee06e8af4bfd1e970f7" exitCode=0
Jan 23 17:10:50 crc kubenswrapper[4606]: I0123 17:10:50.159971 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b9775877b-mcc82" event={"ID":"1dd13584-a1b9-4891-85a7-01afd519f746","Type":"ContainerDied","Data":"3acaf29972f41a7b4f105c72ea9d73e660deeda3e9c45ee06e8af4bfd1e970f7"}
Jan 23 17:10:50 crc kubenswrapper[4606]: I0123 17:10:50.166794 4606 generic.go:334] "Generic (PLEG): container finished" podID="59fc19de-8c69-43e4-bf0d-1cb8ea9a9120" containerID="ee2e80c065ffe14a9551fa689c4481e4792330ef9d6af9801521b59f7173bb20" exitCode=0
Jan 23 17:10:50 crc kubenswrapper[4606]: I0123 17:10:50.166861 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57977896f6-qrk2s" event={"ID":"59fc19de-8c69-43e4-bf0d-1cb8ea9a9120","Type":"ContainerDied","Data":"ee2e80c065ffe14a9551fa689c4481e4792330ef9d6af9801521b59f7173bb20"}
Jan 23 17:10:50 crc kubenswrapper[4606]: I0123 17:10:50.830059 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-57c54b4584-qk24c"
Jan 23 17:10:51 crc kubenswrapper[4606]: I0123 17:10:51.206539 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b9775877b-mcc82" event={"ID":"1dd13584-a1b9-4891-85a7-01afd519f746","Type":"ContainerStarted","Data":"e90a8d5e00adaf8d666cda264fef91b8943b952d0cc9298acd9c7b164f5b85c2"}
pod" pod="openstack/ceilometer-0" event={"ID":"81ff9b5e-713c-48e9-901a-23d3f0236653","Type":"ContainerStarted","Data":"98131baa715192c92d8b7c9263f391bda9e367804ba10b260b174b52beacc719"} Jan 23 17:10:51 crc kubenswrapper[4606]: I0123 17:10:51.216627 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 17:10:51 crc kubenswrapper[4606]: I0123 17:10:51.222003 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57977896f6-qrk2s" event={"ID":"59fc19de-8c69-43e4-bf0d-1cb8ea9a9120","Type":"ContainerStarted","Data":"9db12bedb5d0e574035e37f42e9bf3c4bbc38bb62a561203f60345c4ff17f5d5"} Jan 23 17:10:51 crc kubenswrapper[4606]: I0123 17:10:51.254868 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.287809072 podStartE2EDuration="6.254850343s" podCreationTimestamp="2026-01-23 17:10:45 +0000 UTC" firstStartedPulling="2026-01-23 17:10:46.323532767 +0000 UTC m=+1080.323725173" lastFinishedPulling="2026-01-23 17:10:50.290574038 +0000 UTC m=+1084.290766444" observedRunningTime="2026-01-23 17:10:51.25429763 +0000 UTC m=+1085.254490036" watchObservedRunningTime="2026-01-23 17:10:51.254850343 +0000 UTC m=+1085.255042749" Jan 23 17:10:51 crc kubenswrapper[4606]: I0123 17:10:51.590342 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 23 17:10:51 crc kubenswrapper[4606]: I0123 17:10:51.591446 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 23 17:10:51 crc kubenswrapper[4606]: I0123 17:10:51.593278 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 23 17:10:51 crc kubenswrapper[4606]: I0123 17:10:51.593437 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-2c6nx" Jan 23 17:10:51 crc kubenswrapper[4606]: I0123 17:10:51.594678 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 23 17:10:51 crc kubenswrapper[4606]: I0123 17:10:51.612492 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 23 17:10:51 crc kubenswrapper[4606]: I0123 17:10:51.668073 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/83d4b73c-dbe9-4e0a-96b7-a5c7f5b16c6e-openstack-config\") pod \"openstackclient\" (UID: \"83d4b73c-dbe9-4e0a-96b7-a5c7f5b16c6e\") " pod="openstack/openstackclient" Jan 23 17:10:51 crc kubenswrapper[4606]: I0123 17:10:51.668173 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc9qs\" (UniqueName: \"kubernetes.io/projected/83d4b73c-dbe9-4e0a-96b7-a5c7f5b16c6e-kube-api-access-tc9qs\") pod \"openstackclient\" (UID: \"83d4b73c-dbe9-4e0a-96b7-a5c7f5b16c6e\") " pod="openstack/openstackclient" Jan 23 17:10:51 crc kubenswrapper[4606]: I0123 17:10:51.668241 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/83d4b73c-dbe9-4e0a-96b7-a5c7f5b16c6e-openstack-config-secret\") pod \"openstackclient\" (UID: \"83d4b73c-dbe9-4e0a-96b7-a5c7f5b16c6e\") " pod="openstack/openstackclient" Jan 23 17:10:51 crc kubenswrapper[4606]: I0123 17:10:51.668302 4606 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83d4b73c-dbe9-4e0a-96b7-a5c7f5b16c6e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"83d4b73c-dbe9-4e0a-96b7-a5c7f5b16c6e\") " pod="openstack/openstackclient" Jan 23 17:10:51 crc kubenswrapper[4606]: I0123 17:10:51.769318 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc9qs\" (UniqueName: \"kubernetes.io/projected/83d4b73c-dbe9-4e0a-96b7-a5c7f5b16c6e-kube-api-access-tc9qs\") pod \"openstackclient\" (UID: \"83d4b73c-dbe9-4e0a-96b7-a5c7f5b16c6e\") " pod="openstack/openstackclient" Jan 23 17:10:51 crc kubenswrapper[4606]: I0123 17:10:51.769388 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/83d4b73c-dbe9-4e0a-96b7-a5c7f5b16c6e-openstack-config-secret\") pod \"openstackclient\" (UID: \"83d4b73c-dbe9-4e0a-96b7-a5c7f5b16c6e\") " pod="openstack/openstackclient" Jan 23 17:10:51 crc kubenswrapper[4606]: I0123 17:10:51.769435 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83d4b73c-dbe9-4e0a-96b7-a5c7f5b16c6e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"83d4b73c-dbe9-4e0a-96b7-a5c7f5b16c6e\") " pod="openstack/openstackclient" Jan 23 17:10:51 crc kubenswrapper[4606]: I0123 17:10:51.769494 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/83d4b73c-dbe9-4e0a-96b7-a5c7f5b16c6e-openstack-config\") pod \"openstackclient\" (UID: \"83d4b73c-dbe9-4e0a-96b7-a5c7f5b16c6e\") " pod="openstack/openstackclient" Jan 23 17:10:51 crc kubenswrapper[4606]: I0123 17:10:51.770261 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/83d4b73c-dbe9-4e0a-96b7-a5c7f5b16c6e-openstack-config\") pod \"openstackclient\" (UID: \"83d4b73c-dbe9-4e0a-96b7-a5c7f5b16c6e\") " pod="openstack/openstackclient" Jan 23 17:10:51 crc kubenswrapper[4606]: I0123 17:10:51.780319 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/83d4b73c-dbe9-4e0a-96b7-a5c7f5b16c6e-openstack-config-secret\") pod \"openstackclient\" (UID: \"83d4b73c-dbe9-4e0a-96b7-a5c7f5b16c6e\") " pod="openstack/openstackclient" Jan 23 17:10:51 crc kubenswrapper[4606]: I0123 17:10:51.790312 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83d4b73c-dbe9-4e0a-96b7-a5c7f5b16c6e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"83d4b73c-dbe9-4e0a-96b7-a5c7f5b16c6e\") " pod="openstack/openstackclient" Jan 23 17:10:51 crc kubenswrapper[4606]: I0123 17:10:51.794493 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc9qs\" (UniqueName: \"kubernetes.io/projected/83d4b73c-dbe9-4e0a-96b7-a5c7f5b16c6e-kube-api-access-tc9qs\") pod \"openstackclient\" (UID: \"83d4b73c-dbe9-4e0a-96b7-a5c7f5b16c6e\") " pod="openstack/openstackclient" Jan 23 17:10:51 crc kubenswrapper[4606]: I0123 17:10:51.922387 4606 util.go:30] "No sandbox for pod can be found. 
Jan 23 17:10:51 crc kubenswrapper[4606]: I0123 17:10:51.922387 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 23 17:10:51 crc kubenswrapper[4606]: I0123 17:10:51.932552 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-d4bdcdf98-bmdhd"
Jan 23 17:10:52 crc kubenswrapper[4606]: I0123 17:10:52.117975 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Jan 23 17:10:52 crc kubenswrapper[4606]: I0123 17:10:52.229713 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-d4bdcdf98-bmdhd"
Jan 23 17:10:52 crc kubenswrapper[4606]: I0123 17:10:52.309417 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5c8f9f6b46-pqjgt"]
Jan 23 17:10:52 crc kubenswrapper[4606]: I0123 17:10:52.309648 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5c8f9f6b46-pqjgt" podUID="342b1b35-f2a3-48a0-9342-23f3d0fa9907" containerName="barbican-api-log" containerID="cri-o://98cf3c1befe6d7e5b852d2d36a9d6a2586bade571cfdcc6d7868f0a7fa67be8c" gracePeriod=30
Jan 23 17:10:52 crc kubenswrapper[4606]: I0123 17:10:52.309884 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5c8f9f6b46-pqjgt" podUID="342b1b35-f2a3-48a0-9342-23f3d0fa9907" containerName="barbican-api" containerID="cri-o://f4ca6c19781ede331f7d00b30fcd85371edad8393dfde330eb51e4bf8dd6940c" gracePeriod=30
Jan 23 17:10:52 crc kubenswrapper[4606]: I0123 17:10:52.629334 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 23 17:10:52 crc kubenswrapper[4606]: W0123 17:10:52.634679 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83d4b73c_dbe9_4e0a_96b7_a5c7f5b16c6e.slice/crio-b013a99c3097b6ed806debb1a47fa770b3f52f291aad234a9816f6a3d3a70a61 WatchSource:0}: Error finding container b013a99c3097b6ed806debb1a47fa770b3f52f291aad234a9816f6a3d3a70a61: Status 404 returned error can't find the container with id b013a99c3097b6ed806debb1a47fa770b3f52f291aad234a9816f6a3d3a70a61
Jan 23 17:10:53 crc kubenswrapper[4606]: I0123 17:10:53.296120 4606 generic.go:334] "Generic (PLEG): container finished" podID="342b1b35-f2a3-48a0-9342-23f3d0fa9907" containerID="98cf3c1befe6d7e5b852d2d36a9d6a2586bade571cfdcc6d7868f0a7fa67be8c" exitCode=143
Jan 23 17:10:53 crc kubenswrapper[4606]: I0123 17:10:53.296197 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5c8f9f6b46-pqjgt" event={"ID":"342b1b35-f2a3-48a0-9342-23f3d0fa9907","Type":"ContainerDied","Data":"98cf3c1befe6d7e5b852d2d36a9d6a2586bade571cfdcc6d7868f0a7fa67be8c"}
Jan 23 17:10:53 crc kubenswrapper[4606]: I0123 17:10:53.302730 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"83d4b73c-dbe9-4e0a-96b7-a5c7f5b16c6e","Type":"ContainerStarted","Data":"b013a99c3097b6ed806debb1a47fa770b3f52f291aad234a9816f6a3d3a70a61"}
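[editor's note] The "SyncLoop DELETE" followed by "Killing container with a grace period ... gracePeriod=30" entries above are the kubelet's reaction to an API-side pod deletion. A minimal client-go sketch of the triggering call, with pod and namespace names taken from the log; the kubeconfig path and the act of deleting from a standalone program are assumptions for illustration. The exitCode=143 seen shortly after is 128+15, i.e. the container exited on SIGTERM within the grace period.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config (assumed environment).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// A 30s grace period lets the kubelet SIGTERM each container and wait
	// before escalating to SIGKILL, matching gracePeriod=30 in the log.
	grace := int64(30)
	err = cs.CoreV1().Pods("openstack").Delete(context.TODO(),
		"barbican-api-5c8f9f6b46-pqjgt",
		metav1.DeleteOptions{GracePeriodSeconds: &grace})
	fmt.Println("delete:", err)
}
```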
Jan 23 17:10:56 crc kubenswrapper[4606]: I0123 17:10:56.141872 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5c8f9f6b46-pqjgt"
Jan 23 17:10:56 crc kubenswrapper[4606]: I0123 17:10:56.261045 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/342b1b35-f2a3-48a0-9342-23f3d0fa9907-combined-ca-bundle\") pod \"342b1b35-f2a3-48a0-9342-23f3d0fa9907\" (UID: \"342b1b35-f2a3-48a0-9342-23f3d0fa9907\") "
Jan 23 17:10:56 crc kubenswrapper[4606]: I0123 17:10:56.261170 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/342b1b35-f2a3-48a0-9342-23f3d0fa9907-config-data-custom\") pod \"342b1b35-f2a3-48a0-9342-23f3d0fa9907\" (UID: \"342b1b35-f2a3-48a0-9342-23f3d0fa9907\") "
Jan 23 17:10:56 crc kubenswrapper[4606]: I0123 17:10:56.261197 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/342b1b35-f2a3-48a0-9342-23f3d0fa9907-config-data\") pod \"342b1b35-f2a3-48a0-9342-23f3d0fa9907\" (UID: \"342b1b35-f2a3-48a0-9342-23f3d0fa9907\") "
Jan 23 17:10:56 crc kubenswrapper[4606]: I0123 17:10:56.261223 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/342b1b35-f2a3-48a0-9342-23f3d0fa9907-logs\") pod \"342b1b35-f2a3-48a0-9342-23f3d0fa9907\" (UID: \"342b1b35-f2a3-48a0-9342-23f3d0fa9907\") "
Jan 23 17:10:56 crc kubenswrapper[4606]: I0123 17:10:56.261309 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzzlj\" (UniqueName: \"kubernetes.io/projected/342b1b35-f2a3-48a0-9342-23f3d0fa9907-kube-api-access-dzzlj\") pod \"342b1b35-f2a3-48a0-9342-23f3d0fa9907\" (UID: \"342b1b35-f2a3-48a0-9342-23f3d0fa9907\") "
Jan 23 17:10:56 crc kubenswrapper[4606]: I0123 17:10:56.262172 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/342b1b35-f2a3-48a0-9342-23f3d0fa9907-logs" (OuterVolumeSpecName: "logs") pod "342b1b35-f2a3-48a0-9342-23f3d0fa9907" (UID: "342b1b35-f2a3-48a0-9342-23f3d0fa9907"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 17:10:56 crc kubenswrapper[4606]: I0123 17:10:56.276844 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/342b1b35-f2a3-48a0-9342-23f3d0fa9907-kube-api-access-dzzlj" (OuterVolumeSpecName: "kube-api-access-dzzlj") pod "342b1b35-f2a3-48a0-9342-23f3d0fa9907" (UID: "342b1b35-f2a3-48a0-9342-23f3d0fa9907"). InnerVolumeSpecName "kube-api-access-dzzlj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 17:10:56 crc kubenswrapper[4606]: I0123 17:10:56.308040 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/342b1b35-f2a3-48a0-9342-23f3d0fa9907-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "342b1b35-f2a3-48a0-9342-23f3d0fa9907" (UID: "342b1b35-f2a3-48a0-9342-23f3d0fa9907"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:10:56 crc kubenswrapper[4606]: I0123 17:10:56.349930 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/342b1b35-f2a3-48a0-9342-23f3d0fa9907-config-data" (OuterVolumeSpecName: "config-data") pod "342b1b35-f2a3-48a0-9342-23f3d0fa9907" (UID: "342b1b35-f2a3-48a0-9342-23f3d0fa9907"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:10:56 crc kubenswrapper[4606]: I0123 17:10:56.360877 4606 generic.go:334] "Generic (PLEG): container finished" podID="342b1b35-f2a3-48a0-9342-23f3d0fa9907" containerID="f4ca6c19781ede331f7d00b30fcd85371edad8393dfde330eb51e4bf8dd6940c" exitCode=0 Jan 23 17:10:56 crc kubenswrapper[4606]: I0123 17:10:56.360939 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5c8f9f6b46-pqjgt" event={"ID":"342b1b35-f2a3-48a0-9342-23f3d0fa9907","Type":"ContainerDied","Data":"f4ca6c19781ede331f7d00b30fcd85371edad8393dfde330eb51e4bf8dd6940c"} Jan 23 17:10:56 crc kubenswrapper[4606]: I0123 17:10:56.360976 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5c8f9f6b46-pqjgt" event={"ID":"342b1b35-f2a3-48a0-9342-23f3d0fa9907","Type":"ContainerDied","Data":"7f3cd85a08251ed7aea585d240ae41e88f24853d50b9d16b9db406a4e68dfc21"} Jan 23 17:10:56 crc kubenswrapper[4606]: I0123 17:10:56.360998 4606 scope.go:117] "RemoveContainer" containerID="f4ca6c19781ede331f7d00b30fcd85371edad8393dfde330eb51e4bf8dd6940c" Jan 23 17:10:56 crc kubenswrapper[4606]: I0123 17:10:56.361167 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5c8f9f6b46-pqjgt" Jan 23 17:10:56 crc kubenswrapper[4606]: I0123 17:10:56.363229 4606 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/342b1b35-f2a3-48a0-9342-23f3d0fa9907-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:56 crc kubenswrapper[4606]: I0123 17:10:56.363252 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/342b1b35-f2a3-48a0-9342-23f3d0fa9907-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:56 crc kubenswrapper[4606]: I0123 17:10:56.363261 4606 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/342b1b35-f2a3-48a0-9342-23f3d0fa9907-logs\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:56 crc kubenswrapper[4606]: I0123 17:10:56.363270 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzzlj\" (UniqueName: \"kubernetes.io/projected/342b1b35-f2a3-48a0-9342-23f3d0fa9907-kube-api-access-dzzlj\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:56 crc kubenswrapper[4606]: I0123 17:10:56.363280 4606 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/342b1b35-f2a3-48a0-9342-23f3d0fa9907-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:56 crc kubenswrapper[4606]: I0123 17:10:56.419407 4606 scope.go:117] "RemoveContainer" containerID="98cf3c1befe6d7e5b852d2d36a9d6a2586bade571cfdcc6d7868f0a7fa67be8c" Jan 23 17:10:56 crc kubenswrapper[4606]: I0123 17:10:56.441513 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5c8f9f6b46-pqjgt"] Jan 23 17:10:56 crc kubenswrapper[4606]: I0123 17:10:56.449291 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/barbican-api-5c8f9f6b46-pqjgt"] Jan 23 17:10:56 crc kubenswrapper[4606]: I0123 17:10:56.454663 4606 scope.go:117] "RemoveContainer" containerID="f4ca6c19781ede331f7d00b30fcd85371edad8393dfde330eb51e4bf8dd6940c" Jan 23 17:10:56 crc kubenswrapper[4606]: E0123 17:10:56.459667 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4ca6c19781ede331f7d00b30fcd85371edad8393dfde330eb51e4bf8dd6940c\": container with ID starting with f4ca6c19781ede331f7d00b30fcd85371edad8393dfde330eb51e4bf8dd6940c not found: ID does not exist" containerID="f4ca6c19781ede331f7d00b30fcd85371edad8393dfde330eb51e4bf8dd6940c" Jan 23 17:10:56 crc kubenswrapper[4606]: I0123 17:10:56.459699 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4ca6c19781ede331f7d00b30fcd85371edad8393dfde330eb51e4bf8dd6940c"} err="failed to get container status \"f4ca6c19781ede331f7d00b30fcd85371edad8393dfde330eb51e4bf8dd6940c\": rpc error: code = NotFound desc = could not find container \"f4ca6c19781ede331f7d00b30fcd85371edad8393dfde330eb51e4bf8dd6940c\": container with ID starting with f4ca6c19781ede331f7d00b30fcd85371edad8393dfde330eb51e4bf8dd6940c not found: ID does not exist" Jan 23 17:10:56 crc kubenswrapper[4606]: I0123 17:10:56.459720 4606 scope.go:117] "RemoveContainer" containerID="98cf3c1befe6d7e5b852d2d36a9d6a2586bade571cfdcc6d7868f0a7fa67be8c" Jan 23 17:10:56 crc kubenswrapper[4606]: E0123 17:10:56.464677 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98cf3c1befe6d7e5b852d2d36a9d6a2586bade571cfdcc6d7868f0a7fa67be8c\": container with ID starting with 98cf3c1befe6d7e5b852d2d36a9d6a2586bade571cfdcc6d7868f0a7fa67be8c not found: ID does not exist" containerID="98cf3c1befe6d7e5b852d2d36a9d6a2586bade571cfdcc6d7868f0a7fa67be8c" Jan 23 17:10:56 crc kubenswrapper[4606]: I0123 17:10:56.464712 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98cf3c1befe6d7e5b852d2d36a9d6a2586bade571cfdcc6d7868f0a7fa67be8c"} err="failed to get container status \"98cf3c1befe6d7e5b852d2d36a9d6a2586bade571cfdcc6d7868f0a7fa67be8c\": rpc error: code = NotFound desc = could not find container \"98cf3c1befe6d7e5b852d2d36a9d6a2586bade571cfdcc6d7868f0a7fa67be8c\": container with ID starting with 98cf3c1befe6d7e5b852d2d36a9d6a2586bade571cfdcc6d7868f0a7fa67be8c not found: ID does not exist" Jan 23 17:10:57 crc kubenswrapper[4606]: I0123 17:10:57.816773 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6786d7bbc5-25lfl"] Jan 23 17:10:57 crc kubenswrapper[4606]: E0123 17:10:57.817550 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="342b1b35-f2a3-48a0-9342-23f3d0fa9907" containerName="barbican-api-log" Jan 23 17:10:57 crc kubenswrapper[4606]: I0123 17:10:57.817570 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="342b1b35-f2a3-48a0-9342-23f3d0fa9907" containerName="barbican-api-log" Jan 23 17:10:57 crc kubenswrapper[4606]: E0123 17:10:57.817622 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="342b1b35-f2a3-48a0-9342-23f3d0fa9907" containerName="barbican-api" Jan 23 17:10:57 crc kubenswrapper[4606]: I0123 17:10:57.817631 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="342b1b35-f2a3-48a0-9342-23f3d0fa9907" containerName="barbican-api" Jan 23 17:10:57 crc kubenswrapper[4606]: I0123 17:10:57.817855 4606 
Jan 23 17:10:57 crc kubenswrapper[4606]: I0123 17:10:57.817855 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="342b1b35-f2a3-48a0-9342-23f3d0fa9907" containerName="barbican-api-log"
Jan 23 17:10:57 crc kubenswrapper[4606]: I0123 17:10:57.817880 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="342b1b35-f2a3-48a0-9342-23f3d0fa9907" containerName="barbican-api"
Jan 23 17:10:57 crc kubenswrapper[4606]: I0123 17:10:57.819045 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6786d7bbc5-25lfl"
Jan 23 17:10:57 crc kubenswrapper[4606]: I0123 17:10:57.822962 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Jan 23 17:10:57 crc kubenswrapper[4606]: I0123 17:10:57.823581 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Jan 23 17:10:57 crc kubenswrapper[4606]: I0123 17:10:57.839514 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Jan 23 17:10:57 crc kubenswrapper[4606]: I0123 17:10:57.847435 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6786d7bbc5-25lfl"]
Jan 23 17:10:57 crc kubenswrapper[4606]: I0123 17:10:57.894239 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10c3e688-60f4-483e-962b-9e46bd25cfb7-run-httpd\") pod \"swift-proxy-6786d7bbc5-25lfl\" (UID: \"10c3e688-60f4-483e-962b-9e46bd25cfb7\") " pod="openstack/swift-proxy-6786d7bbc5-25lfl"
Jan 23 17:10:57 crc kubenswrapper[4606]: I0123 17:10:57.894314 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/10c3e688-60f4-483e-962b-9e46bd25cfb7-etc-swift\") pod \"swift-proxy-6786d7bbc5-25lfl\" (UID: \"10c3e688-60f4-483e-962b-9e46bd25cfb7\") " pod="openstack/swift-proxy-6786d7bbc5-25lfl"
Jan 23 17:10:57 crc kubenswrapper[4606]: I0123 17:10:57.894333 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/10c3e688-60f4-483e-962b-9e46bd25cfb7-internal-tls-certs\") pod \"swift-proxy-6786d7bbc5-25lfl\" (UID: \"10c3e688-60f4-483e-962b-9e46bd25cfb7\") " pod="openstack/swift-proxy-6786d7bbc5-25lfl"
Jan 23 17:10:57 crc kubenswrapper[4606]: I0123 17:10:57.894362 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/10c3e688-60f4-483e-962b-9e46bd25cfb7-public-tls-certs\") pod \"swift-proxy-6786d7bbc5-25lfl\" (UID: \"10c3e688-60f4-483e-962b-9e46bd25cfb7\") " pod="openstack/swift-proxy-6786d7bbc5-25lfl"
Jan 23 17:10:57 crc kubenswrapper[4606]: I0123 17:10:57.894393 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10c3e688-60f4-483e-962b-9e46bd25cfb7-log-httpd\") pod \"swift-proxy-6786d7bbc5-25lfl\" (UID: \"10c3e688-60f4-483e-962b-9e46bd25cfb7\") " pod="openstack/swift-proxy-6786d7bbc5-25lfl"
" pod="openstack/swift-proxy-6786d7bbc5-25lfl" Jan 23 17:10:57 crc kubenswrapper[4606]: I0123 17:10:57.894472 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxvc7\" (UniqueName: \"kubernetes.io/projected/10c3e688-60f4-483e-962b-9e46bd25cfb7-kube-api-access-kxvc7\") pod \"swift-proxy-6786d7bbc5-25lfl\" (UID: \"10c3e688-60f4-483e-962b-9e46bd25cfb7\") " pod="openstack/swift-proxy-6786d7bbc5-25lfl" Jan 23 17:10:57 crc kubenswrapper[4606]: I0123 17:10:57.894500 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10c3e688-60f4-483e-962b-9e46bd25cfb7-combined-ca-bundle\") pod \"swift-proxy-6786d7bbc5-25lfl\" (UID: \"10c3e688-60f4-483e-962b-9e46bd25cfb7\") " pod="openstack/swift-proxy-6786d7bbc5-25lfl" Jan 23 17:10:57 crc kubenswrapper[4606]: I0123 17:10:57.998695 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10c3e688-60f4-483e-962b-9e46bd25cfb7-run-httpd\") pod \"swift-proxy-6786d7bbc5-25lfl\" (UID: \"10c3e688-60f4-483e-962b-9e46bd25cfb7\") " pod="openstack/swift-proxy-6786d7bbc5-25lfl" Jan 23 17:10:57 crc kubenswrapper[4606]: I0123 17:10:57.998815 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/10c3e688-60f4-483e-962b-9e46bd25cfb7-etc-swift\") pod \"swift-proxy-6786d7bbc5-25lfl\" (UID: \"10c3e688-60f4-483e-962b-9e46bd25cfb7\") " pod="openstack/swift-proxy-6786d7bbc5-25lfl" Jan 23 17:10:57 crc kubenswrapper[4606]: I0123 17:10:57.998837 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/10c3e688-60f4-483e-962b-9e46bd25cfb7-internal-tls-certs\") pod \"swift-proxy-6786d7bbc5-25lfl\" (UID: \"10c3e688-60f4-483e-962b-9e46bd25cfb7\") " pod="openstack/swift-proxy-6786d7bbc5-25lfl" Jan 23 17:10:57 crc kubenswrapper[4606]: I0123 17:10:57.998867 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/10c3e688-60f4-483e-962b-9e46bd25cfb7-public-tls-certs\") pod \"swift-proxy-6786d7bbc5-25lfl\" (UID: \"10c3e688-60f4-483e-962b-9e46bd25cfb7\") " pod="openstack/swift-proxy-6786d7bbc5-25lfl" Jan 23 17:10:57 crc kubenswrapper[4606]: I0123 17:10:57.998925 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10c3e688-60f4-483e-962b-9e46bd25cfb7-log-httpd\") pod \"swift-proxy-6786d7bbc5-25lfl\" (UID: \"10c3e688-60f4-483e-962b-9e46bd25cfb7\") " pod="openstack/swift-proxy-6786d7bbc5-25lfl" Jan 23 17:10:57 crc kubenswrapper[4606]: I0123 17:10:57.998951 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10c3e688-60f4-483e-962b-9e46bd25cfb7-config-data\") pod \"swift-proxy-6786d7bbc5-25lfl\" (UID: \"10c3e688-60f4-483e-962b-9e46bd25cfb7\") " pod="openstack/swift-proxy-6786d7bbc5-25lfl" Jan 23 17:10:57 crc kubenswrapper[4606]: I0123 17:10:57.999057 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxvc7\" (UniqueName: \"kubernetes.io/projected/10c3e688-60f4-483e-962b-9e46bd25cfb7-kube-api-access-kxvc7\") pod \"swift-proxy-6786d7bbc5-25lfl\" (UID: \"10c3e688-60f4-483e-962b-9e46bd25cfb7\") " 
pod="openstack/swift-proxy-6786d7bbc5-25lfl" Jan 23 17:10:57 crc kubenswrapper[4606]: I0123 17:10:57.999092 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10c3e688-60f4-483e-962b-9e46bd25cfb7-combined-ca-bundle\") pod \"swift-proxy-6786d7bbc5-25lfl\" (UID: \"10c3e688-60f4-483e-962b-9e46bd25cfb7\") " pod="openstack/swift-proxy-6786d7bbc5-25lfl" Jan 23 17:10:57 crc kubenswrapper[4606]: I0123 17:10:57.999170 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10c3e688-60f4-483e-962b-9e46bd25cfb7-run-httpd\") pod \"swift-proxy-6786d7bbc5-25lfl\" (UID: \"10c3e688-60f4-483e-962b-9e46bd25cfb7\") " pod="openstack/swift-proxy-6786d7bbc5-25lfl" Jan 23 17:10:58 crc kubenswrapper[4606]: I0123 17:10:58.000469 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10c3e688-60f4-483e-962b-9e46bd25cfb7-log-httpd\") pod \"swift-proxy-6786d7bbc5-25lfl\" (UID: \"10c3e688-60f4-483e-962b-9e46bd25cfb7\") " pod="openstack/swift-proxy-6786d7bbc5-25lfl" Jan 23 17:10:58 crc kubenswrapper[4606]: I0123 17:10:58.008125 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10c3e688-60f4-483e-962b-9e46bd25cfb7-combined-ca-bundle\") pod \"swift-proxy-6786d7bbc5-25lfl\" (UID: \"10c3e688-60f4-483e-962b-9e46bd25cfb7\") " pod="openstack/swift-proxy-6786d7bbc5-25lfl" Jan 23 17:10:58 crc kubenswrapper[4606]: I0123 17:10:58.016963 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/10c3e688-60f4-483e-962b-9e46bd25cfb7-public-tls-certs\") pod \"swift-proxy-6786d7bbc5-25lfl\" (UID: \"10c3e688-60f4-483e-962b-9e46bd25cfb7\") " pod="openstack/swift-proxy-6786d7bbc5-25lfl" Jan 23 17:10:58 crc kubenswrapper[4606]: I0123 17:10:58.022084 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/10c3e688-60f4-483e-962b-9e46bd25cfb7-internal-tls-certs\") pod \"swift-proxy-6786d7bbc5-25lfl\" (UID: \"10c3e688-60f4-483e-962b-9e46bd25cfb7\") " pod="openstack/swift-proxy-6786d7bbc5-25lfl" Jan 23 17:10:58 crc kubenswrapper[4606]: I0123 17:10:58.022090 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxvc7\" (UniqueName: \"kubernetes.io/projected/10c3e688-60f4-483e-962b-9e46bd25cfb7-kube-api-access-kxvc7\") pod \"swift-proxy-6786d7bbc5-25lfl\" (UID: \"10c3e688-60f4-483e-962b-9e46bd25cfb7\") " pod="openstack/swift-proxy-6786d7bbc5-25lfl" Jan 23 17:10:58 crc kubenswrapper[4606]: I0123 17:10:58.022411 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/10c3e688-60f4-483e-962b-9e46bd25cfb7-etc-swift\") pod \"swift-proxy-6786d7bbc5-25lfl\" (UID: \"10c3e688-60f4-483e-962b-9e46bd25cfb7\") " pod="openstack/swift-proxy-6786d7bbc5-25lfl" Jan 23 17:10:58 crc kubenswrapper[4606]: I0123 17:10:58.023832 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10c3e688-60f4-483e-962b-9e46bd25cfb7-config-data\") pod \"swift-proxy-6786d7bbc5-25lfl\" (UID: \"10c3e688-60f4-483e-962b-9e46bd25cfb7\") " pod="openstack/swift-proxy-6786d7bbc5-25lfl" Jan 23 17:10:58 crc kubenswrapper[4606]: I0123 17:10:58.140097 4606 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6786d7bbc5-25lfl" Jan 23 17:10:58 crc kubenswrapper[4606]: I0123 17:10:58.389880 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="342b1b35-f2a3-48a0-9342-23f3d0fa9907" path="/var/lib/kubelet/pods/342b1b35-f2a3-48a0-9342-23f3d0fa9907/volumes" Jan 23 17:10:58 crc kubenswrapper[4606]: I0123 17:10:58.435941 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5b9775877b-mcc82" Jan 23 17:10:58 crc kubenswrapper[4606]: I0123 17:10:58.437015 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5b9775877b-mcc82" Jan 23 17:10:58 crc kubenswrapper[4606]: I0123 17:10:58.608905 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-57977896f6-qrk2s" Jan 23 17:10:58 crc kubenswrapper[4606]: I0123 17:10:58.608948 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-57977896f6-qrk2s" Jan 23 17:10:58 crc kubenswrapper[4606]: I0123 17:10:58.860539 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6786d7bbc5-25lfl"] Jan 23 17:10:59 crc kubenswrapper[4606]: I0123 17:10:59.462948 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6786d7bbc5-25lfl" event={"ID":"10c3e688-60f4-483e-962b-9e46bd25cfb7","Type":"ContainerStarted","Data":"1072a800a689f3d94dc1bc1ba119c125bc6c2832d2852ea9695179b9621b86cf"} Jan 23 17:10:59 crc kubenswrapper[4606]: I0123 17:10:59.463209 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6786d7bbc5-25lfl" event={"ID":"10c3e688-60f4-483e-962b-9e46bd25cfb7","Type":"ContainerStarted","Data":"024df1704d63a90e204af72658c4836eb3795aa2d4e17b8da886512d3656f256"} Jan 23 17:10:59 crc kubenswrapper[4606]: I0123 17:10:59.487383 4606 generic.go:334] "Generic (PLEG): container finished" podID="239b6b42-d315-47cf-8a72-e12639318fe7" containerID="d8cf6aac1e30e29f9e0e8c64b6019f889faca9b7c04a7205c7c4a8a914e1edcc" exitCode=137 Jan 23 17:10:59 crc kubenswrapper[4606]: I0123 17:10:59.487627 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"239b6b42-d315-47cf-8a72-e12639318fe7","Type":"ContainerDied","Data":"d8cf6aac1e30e29f9e0e8c64b6019f889faca9b7c04a7205c7c4a8a914e1edcc"} Jan 23 17:10:59 crc kubenswrapper[4606]: I0123 17:10:59.683603 4606 util.go:48] "No ready sandbox for pod can be found. 
Jan 23 17:10:59 crc kubenswrapper[4606]: I0123 17:10:59.683603 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 23 17:10:59 crc kubenswrapper[4606]: I0123 17:10:59.748609 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/239b6b42-d315-47cf-8a72-e12639318fe7-scripts\") pod \"239b6b42-d315-47cf-8a72-e12639318fe7\" (UID: \"239b6b42-d315-47cf-8a72-e12639318fe7\") "
Jan 23 17:10:59 crc kubenswrapper[4606]: I0123 17:10:59.748684 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/239b6b42-d315-47cf-8a72-e12639318fe7-logs\") pod \"239b6b42-d315-47cf-8a72-e12639318fe7\" (UID: \"239b6b42-d315-47cf-8a72-e12639318fe7\") "
Jan 23 17:10:59 crc kubenswrapper[4606]: I0123 17:10:59.748744 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm5lp\" (UniqueName: \"kubernetes.io/projected/239b6b42-d315-47cf-8a72-e12639318fe7-kube-api-access-nm5lp\") pod \"239b6b42-d315-47cf-8a72-e12639318fe7\" (UID: \"239b6b42-d315-47cf-8a72-e12639318fe7\") "
Jan 23 17:10:59 crc kubenswrapper[4606]: I0123 17:10:59.748771 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/239b6b42-d315-47cf-8a72-e12639318fe7-config-data-custom\") pod \"239b6b42-d315-47cf-8a72-e12639318fe7\" (UID: \"239b6b42-d315-47cf-8a72-e12639318fe7\") "
Jan 23 17:10:59 crc kubenswrapper[4606]: I0123 17:10:59.748809 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/239b6b42-d315-47cf-8a72-e12639318fe7-config-data\") pod \"239b6b42-d315-47cf-8a72-e12639318fe7\" (UID: \"239b6b42-d315-47cf-8a72-e12639318fe7\") "
Jan 23 17:10:59 crc kubenswrapper[4606]: I0123 17:10:59.748910 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/239b6b42-d315-47cf-8a72-e12639318fe7-etc-machine-id\") pod \"239b6b42-d315-47cf-8a72-e12639318fe7\" (UID: \"239b6b42-d315-47cf-8a72-e12639318fe7\") "
Jan 23 17:10:59 crc kubenswrapper[4606]: I0123 17:10:59.749030 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/239b6b42-d315-47cf-8a72-e12639318fe7-combined-ca-bundle\") pod \"239b6b42-d315-47cf-8a72-e12639318fe7\" (UID: \"239b6b42-d315-47cf-8a72-e12639318fe7\") "
Jan 23 17:10:59 crc kubenswrapper[4606]: I0123 17:10:59.756268 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/239b6b42-d315-47cf-8a72-e12639318fe7-logs" (OuterVolumeSpecName: "logs") pod "239b6b42-d315-47cf-8a72-e12639318fe7" (UID: "239b6b42-d315-47cf-8a72-e12639318fe7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 17:10:59 crc kubenswrapper[4606]: I0123 17:10:59.756427 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/239b6b42-d315-47cf-8a72-e12639318fe7-scripts" (OuterVolumeSpecName: "scripts") pod "239b6b42-d315-47cf-8a72-e12639318fe7" (UID: "239b6b42-d315-47cf-8a72-e12639318fe7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:10:59 crc kubenswrapper[4606]: I0123 17:10:59.768758 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/239b6b42-d315-47cf-8a72-e12639318fe7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "239b6b42-d315-47cf-8a72-e12639318fe7" (UID: "239b6b42-d315-47cf-8a72-e12639318fe7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:10:59 crc kubenswrapper[4606]: I0123 17:10:59.769611 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/239b6b42-d315-47cf-8a72-e12639318fe7-kube-api-access-nm5lp" (OuterVolumeSpecName: "kube-api-access-nm5lp") pod "239b6b42-d315-47cf-8a72-e12639318fe7" (UID: "239b6b42-d315-47cf-8a72-e12639318fe7"). InnerVolumeSpecName "kube-api-access-nm5lp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:10:59 crc kubenswrapper[4606]: I0123 17:10:59.817684 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/239b6b42-d315-47cf-8a72-e12639318fe7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "239b6b42-d315-47cf-8a72-e12639318fe7" (UID: "239b6b42-d315-47cf-8a72-e12639318fe7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:10:59 crc kubenswrapper[4606]: I0123 17:10:59.850044 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nm5lp\" (UniqueName: \"kubernetes.io/projected/239b6b42-d315-47cf-8a72-e12639318fe7-kube-api-access-nm5lp\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:59 crc kubenswrapper[4606]: I0123 17:10:59.850067 4606 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/239b6b42-d315-47cf-8a72-e12639318fe7-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:59 crc kubenswrapper[4606]: I0123 17:10:59.850076 4606 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/239b6b42-d315-47cf-8a72-e12639318fe7-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:59 crc kubenswrapper[4606]: I0123 17:10:59.850084 4606 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/239b6b42-d315-47cf-8a72-e12639318fe7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:59 crc kubenswrapper[4606]: I0123 17:10:59.850092 4606 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/239b6b42-d315-47cf-8a72-e12639318fe7-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:59 crc kubenswrapper[4606]: I0123 17:10:59.850101 4606 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/239b6b42-d315-47cf-8a72-e12639318fe7-logs\") on node \"crc\" DevicePath \"\"" Jan 23 17:10:59 crc kubenswrapper[4606]: I0123 17:10:59.881665 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/secret/239b6b42-d315-47cf-8a72-e12639318fe7-config-data" (OuterVolumeSpecName: "config-data") pod "239b6b42-d315-47cf-8a72-e12639318fe7" (UID: "239b6b42-d315-47cf-8a72-e12639318fe7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:10:59 crc kubenswrapper[4606]: I0123 17:10:59.951544 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/239b6b42-d315-47cf-8a72-e12639318fe7-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.506742 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.507573 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"239b6b42-d315-47cf-8a72-e12639318fe7","Type":"ContainerDied","Data":"39ae4c98dc41f46f8bcfcaa2d6a38036c82213043e79f2423741394b296bf62d"} Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.507637 4606 scope.go:117] "RemoveContainer" containerID="d8cf6aac1e30e29f9e0e8c64b6019f889faca9b7c04a7205c7c4a8a914e1edcc" Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.520649 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6786d7bbc5-25lfl" event={"ID":"10c3e688-60f4-483e-962b-9e46bd25cfb7","Type":"ContainerStarted","Data":"328fb3be087523c6e280776f26fde63fa01d2ec2a1b6410b8107080889f880dd"} Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.520795 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6786d7bbc5-25lfl" Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.542288 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.549732 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.562894 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6786d7bbc5-25lfl" podStartSLOduration=3.5628767679999997 podStartE2EDuration="3.562876768s" podCreationTimestamp="2026-01-23 17:10:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:11:00.559962495 +0000 UTC m=+1094.560154901" watchObservedRunningTime="2026-01-23 17:11:00.562876768 +0000 UTC m=+1094.563069174" Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.590697 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 23 17:11:00 crc kubenswrapper[4606]: E0123 17:11:00.591061 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="239b6b42-d315-47cf-8a72-e12639318fe7" containerName="cinder-api" Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.591077 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="239b6b42-d315-47cf-8a72-e12639318fe7" containerName="cinder-api" Jan 23 17:11:00 crc kubenswrapper[4606]: E0123 17:11:00.591104 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="239b6b42-d315-47cf-8a72-e12639318fe7" containerName="cinder-api-log" Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.591111 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="239b6b42-d315-47cf-8a72-e12639318fe7" containerName="cinder-api-log" Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 
Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.591268 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="239b6b42-d315-47cf-8a72-e12639318fe7" containerName="cinder-api"
Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.591287 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="239b6b42-d315-47cf-8a72-e12639318fe7" containerName="cinder-api-log"
Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.592165 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.597431 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.597732 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.597757 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.617501 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.745885 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.746221 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="81ff9b5e-713c-48e9-901a-23d3f0236653" containerName="ceilometer-central-agent" containerID="cri-o://23d4bb3638a1b150d36a88d8223be919f00a173348795755da2311fb18c91d1e" gracePeriod=30
Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.746973 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="81ff9b5e-713c-48e9-901a-23d3f0236653" containerName="proxy-httpd" containerID="cri-o://98131baa715192c92d8b7c9263f391bda9e367804ba10b260b174b52beacc719" gracePeriod=30
Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.747033 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="81ff9b5e-713c-48e9-901a-23d3f0236653" containerName="sg-core" containerID="cri-o://29630b6b717ac4bf6a388560951242a1276a5c3ecc7632820e6f9fc305e5481f" gracePeriod=30
Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.747064 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="81ff9b5e-713c-48e9-901a-23d3f0236653" containerName="ceilometer-notification-agent" containerID="cri-o://99c675e306fde370fdbff23cbd107294ebda3eb930259189eeeeb7bfac643f8d" gracePeriod=30
Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.764852 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b26a0ac0-2f7e-48a0-aebd-20f7388d0634-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b26a0ac0-2f7e-48a0-aebd-20f7388d0634\") " pod="openstack/cinder-api-0"
Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.764915 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b26a0ac0-2f7e-48a0-aebd-20f7388d0634-logs\") pod \"cinder-api-0\" (UID: \"b26a0ac0-2f7e-48a0-aebd-20f7388d0634\") " pod="openstack/cinder-api-0"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67rcw\" (UniqueName: \"kubernetes.io/projected/b26a0ac0-2f7e-48a0-aebd-20f7388d0634-kube-api-access-67rcw\") pod \"cinder-api-0\" (UID: \"b26a0ac0-2f7e-48a0-aebd-20f7388d0634\") " pod="openstack/cinder-api-0" Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.765017 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b26a0ac0-2f7e-48a0-aebd-20f7388d0634-scripts\") pod \"cinder-api-0\" (UID: \"b26a0ac0-2f7e-48a0-aebd-20f7388d0634\") " pod="openstack/cinder-api-0" Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.765038 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b26a0ac0-2f7e-48a0-aebd-20f7388d0634-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b26a0ac0-2f7e-48a0-aebd-20f7388d0634\") " pod="openstack/cinder-api-0" Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.765072 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b26a0ac0-2f7e-48a0-aebd-20f7388d0634-config-data-custom\") pod \"cinder-api-0\" (UID: \"b26a0ac0-2f7e-48a0-aebd-20f7388d0634\") " pod="openstack/cinder-api-0" Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.765092 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b26a0ac0-2f7e-48a0-aebd-20f7388d0634-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b26a0ac0-2f7e-48a0-aebd-20f7388d0634\") " pod="openstack/cinder-api-0" Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.765151 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b26a0ac0-2f7e-48a0-aebd-20f7388d0634-config-data\") pod \"cinder-api-0\" (UID: \"b26a0ac0-2f7e-48a0-aebd-20f7388d0634\") " pod="openstack/cinder-api-0" Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.765174 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b26a0ac0-2f7e-48a0-aebd-20f7388d0634-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b26a0ac0-2f7e-48a0-aebd-20f7388d0634\") " pod="openstack/cinder-api-0" Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.773518 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="81ff9b5e-713c-48e9-901a-23d3f0236653" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.168:3000/\": EOF" Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.867156 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b26a0ac0-2f7e-48a0-aebd-20f7388d0634-scripts\") pod \"cinder-api-0\" (UID: \"b26a0ac0-2f7e-48a0-aebd-20f7388d0634\") " pod="openstack/cinder-api-0" Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.867197 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b26a0ac0-2f7e-48a0-aebd-20f7388d0634-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b26a0ac0-2f7e-48a0-aebd-20f7388d0634\") " pod="openstack/cinder-api-0" Jan 23 17:11:00 crc 
kubenswrapper[4606]: I0123 17:11:00.867229 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b26a0ac0-2f7e-48a0-aebd-20f7388d0634-config-data-custom\") pod \"cinder-api-0\" (UID: \"b26a0ac0-2f7e-48a0-aebd-20f7388d0634\") " pod="openstack/cinder-api-0" Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.867248 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b26a0ac0-2f7e-48a0-aebd-20f7388d0634-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b26a0ac0-2f7e-48a0-aebd-20f7388d0634\") " pod="openstack/cinder-api-0" Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.867292 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b26a0ac0-2f7e-48a0-aebd-20f7388d0634-config-data\") pod \"cinder-api-0\" (UID: \"b26a0ac0-2f7e-48a0-aebd-20f7388d0634\") " pod="openstack/cinder-api-0" Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.867316 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b26a0ac0-2f7e-48a0-aebd-20f7388d0634-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b26a0ac0-2f7e-48a0-aebd-20f7388d0634\") " pod="openstack/cinder-api-0" Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.867354 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b26a0ac0-2f7e-48a0-aebd-20f7388d0634-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b26a0ac0-2f7e-48a0-aebd-20f7388d0634\") " pod="openstack/cinder-api-0" Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.867376 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b26a0ac0-2f7e-48a0-aebd-20f7388d0634-logs\") pod \"cinder-api-0\" (UID: \"b26a0ac0-2f7e-48a0-aebd-20f7388d0634\") " pod="openstack/cinder-api-0" Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.867411 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67rcw\" (UniqueName: \"kubernetes.io/projected/b26a0ac0-2f7e-48a0-aebd-20f7388d0634-kube-api-access-67rcw\") pod \"cinder-api-0\" (UID: \"b26a0ac0-2f7e-48a0-aebd-20f7388d0634\") " pod="openstack/cinder-api-0" Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.867699 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b26a0ac0-2f7e-48a0-aebd-20f7388d0634-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b26a0ac0-2f7e-48a0-aebd-20f7388d0634\") " pod="openstack/cinder-api-0" Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.869125 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b26a0ac0-2f7e-48a0-aebd-20f7388d0634-logs\") pod \"cinder-api-0\" (UID: \"b26a0ac0-2f7e-48a0-aebd-20f7388d0634\") " pod="openstack/cinder-api-0" Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.874605 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b26a0ac0-2f7e-48a0-aebd-20f7388d0634-scripts\") pod \"cinder-api-0\" (UID: \"b26a0ac0-2f7e-48a0-aebd-20f7388d0634\") " pod="openstack/cinder-api-0" Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.875535 
4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b26a0ac0-2f7e-48a0-aebd-20f7388d0634-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b26a0ac0-2f7e-48a0-aebd-20f7388d0634\") " pod="openstack/cinder-api-0" Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.876051 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b26a0ac0-2f7e-48a0-aebd-20f7388d0634-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b26a0ac0-2f7e-48a0-aebd-20f7388d0634\") " pod="openstack/cinder-api-0" Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.876514 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b26a0ac0-2f7e-48a0-aebd-20f7388d0634-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b26a0ac0-2f7e-48a0-aebd-20f7388d0634\") " pod="openstack/cinder-api-0" Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.882486 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b26a0ac0-2f7e-48a0-aebd-20f7388d0634-config-data\") pod \"cinder-api-0\" (UID: \"b26a0ac0-2f7e-48a0-aebd-20f7388d0634\") " pod="openstack/cinder-api-0" Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.888189 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b26a0ac0-2f7e-48a0-aebd-20f7388d0634-config-data-custom\") pod \"cinder-api-0\" (UID: \"b26a0ac0-2f7e-48a0-aebd-20f7388d0634\") " pod="openstack/cinder-api-0" Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.890707 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67rcw\" (UniqueName: \"kubernetes.io/projected/b26a0ac0-2f7e-48a0-aebd-20f7388d0634-kube-api-access-67rcw\") pod \"cinder-api-0\" (UID: \"b26a0ac0-2f7e-48a0-aebd-20f7388d0634\") " pod="openstack/cinder-api-0" Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.921098 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6ffcd54777-hkxkz" Jan 23 17:11:00 crc kubenswrapper[4606]: I0123 17:11:00.953442 4606 util.go:30] "No sandbox for pod can be found. 
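
The reconciler_common.go sequence above (VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded for each volume) is the kubelet volume manager reconciling its desired state of world against the actual state before the new cinder-api-0 sandbox can start. A toy model of that loop, with illustrative types rather than the kubelet's real structs:

    package main

    import "fmt"

    // Illustrative reconcile loop: desired vs. actual volume state.
    type volume struct{ name, plugin string }

    func reconcile(desired []volume, mounted map[string]bool) {
        for _, v := range desired {
            if mounted[v.name] {
                continue // already in the actual state of world
            }
            fmt.Printf("VerifyControllerAttachedVolume started for %q (%s)\n", v.name, v.plugin)
            fmt.Printf("MountVolume.SetUp succeeded for %q\n", v.name)
            mounted[v.name] = true
        }
    }

    func main() {
        desired := []volume{
            {"config-data", "kubernetes.io/secret"},
            {"logs", "kubernetes.io/empty-dir"},
            {"kube-api-access-67rcw", "kubernetes.io/projected"},
        }
        reconcile(desired, map[string]bool{})
    }
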
Need to start a new one" pod="openstack/cinder-api-0" Jan 23 17:11:01 crc kubenswrapper[4606]: I0123 17:11:01.007443 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6fc6b8db9b-nkqzh"] Jan 23 17:11:01 crc kubenswrapper[4606]: I0123 17:11:01.007974 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6fc6b8db9b-nkqzh" podUID="7e867f98-5330-473c-aa2f-4c5863e70688" containerName="neutron-api" containerID="cri-o://193e5a950c77c71b0baa7b145b272491c50b6361f2795946740d3b995aa28fbc" gracePeriod=30 Jan 23 17:11:01 crc kubenswrapper[4606]: I0123 17:11:01.008107 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6fc6b8db9b-nkqzh" podUID="7e867f98-5330-473c-aa2f-4c5863e70688" containerName="neutron-httpd" containerID="cri-o://bba25af25e90eff5fd8683e770b611e570b0647ea5ba8d01659f310ddfb83cf3" gracePeriod=30 Jan 23 17:11:01 crc kubenswrapper[4606]: I0123 17:11:01.536953 4606 generic.go:334] "Generic (PLEG): container finished" podID="7e867f98-5330-473c-aa2f-4c5863e70688" containerID="bba25af25e90eff5fd8683e770b611e570b0647ea5ba8d01659f310ddfb83cf3" exitCode=0 Jan 23 17:11:01 crc kubenswrapper[4606]: I0123 17:11:01.537010 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fc6b8db9b-nkqzh" event={"ID":"7e867f98-5330-473c-aa2f-4c5863e70688","Type":"ContainerDied","Data":"bba25af25e90eff5fd8683e770b611e570b0647ea5ba8d01659f310ddfb83cf3"} Jan 23 17:11:01 crc kubenswrapper[4606]: I0123 17:11:01.550165 4606 generic.go:334] "Generic (PLEG): container finished" podID="81ff9b5e-713c-48e9-901a-23d3f0236653" containerID="98131baa715192c92d8b7c9263f391bda9e367804ba10b260b174b52beacc719" exitCode=0 Jan 23 17:11:01 crc kubenswrapper[4606]: I0123 17:11:01.550200 4606 generic.go:334] "Generic (PLEG): container finished" podID="81ff9b5e-713c-48e9-901a-23d3f0236653" containerID="29630b6b717ac4bf6a388560951242a1276a5c3ecc7632820e6f9fc305e5481f" exitCode=2 Jan 23 17:11:01 crc kubenswrapper[4606]: I0123 17:11:01.550208 4606 generic.go:334] "Generic (PLEG): container finished" podID="81ff9b5e-713c-48e9-901a-23d3f0236653" containerID="99c675e306fde370fdbff23cbd107294ebda3eb930259189eeeeb7bfac643f8d" exitCode=0 Jan 23 17:11:01 crc kubenswrapper[4606]: I0123 17:11:01.550753 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"81ff9b5e-713c-48e9-901a-23d3f0236653","Type":"ContainerDied","Data":"98131baa715192c92d8b7c9263f391bda9e367804ba10b260b174b52beacc719"} Jan 23 17:11:01 crc kubenswrapper[4606]: I0123 17:11:01.550811 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"81ff9b5e-713c-48e9-901a-23d3f0236653","Type":"ContainerDied","Data":"29630b6b717ac4bf6a388560951242a1276a5c3ecc7632820e6f9fc305e5481f"} Jan 23 17:11:01 crc kubenswrapper[4606]: I0123 17:11:01.550824 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"81ff9b5e-713c-48e9-901a-23d3f0236653","Type":"ContainerDied","Data":"99c675e306fde370fdbff23cbd107294ebda3eb930259189eeeeb7bfac643f8d"} Jan 23 17:11:01 crc kubenswrapper[4606]: I0123 17:11:01.550876 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6786d7bbc5-25lfl" Jan 23 17:11:02 crc kubenswrapper[4606]: I0123 17:11:02.387010 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="239b6b42-d315-47cf-8a72-e12639318fe7" 
path="/var/lib/kubelet/pods/239b6b42-d315-47cf-8a72-e12639318fe7/volumes" Jan 23 17:11:02 crc kubenswrapper[4606]: I0123 17:11:02.565498 4606 generic.go:334] "Generic (PLEG): container finished" podID="81ff9b5e-713c-48e9-901a-23d3f0236653" containerID="23d4bb3638a1b150d36a88d8223be919f00a173348795755da2311fb18c91d1e" exitCode=0 Jan 23 17:11:02 crc kubenswrapper[4606]: I0123 17:11:02.565577 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"81ff9b5e-713c-48e9-901a-23d3f0236653","Type":"ContainerDied","Data":"23d4bb3638a1b150d36a88d8223be919f00a173348795755da2311fb18c91d1e"} Jan 23 17:11:03 crc kubenswrapper[4606]: I0123 17:11:03.591579 4606 generic.go:334] "Generic (PLEG): container finished" podID="7e867f98-5330-473c-aa2f-4c5863e70688" containerID="193e5a950c77c71b0baa7b145b272491c50b6361f2795946740d3b995aa28fbc" exitCode=0 Jan 23 17:11:03 crc kubenswrapper[4606]: I0123 17:11:03.591693 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fc6b8db9b-nkqzh" event={"ID":"7e867f98-5330-473c-aa2f-4c5863e70688","Type":"ContainerDied","Data":"193e5a950c77c71b0baa7b145b272491c50b6361f2795946740d3b995aa28fbc"} Jan 23 17:11:03 crc kubenswrapper[4606]: E0123 17:11:03.634730 4606 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e867f98_5330_473c_aa2f_4c5863e70688.slice/crio-conmon-193e5a950c77c71b0baa7b145b272491c50b6361f2795946740d3b995aa28fbc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e867f98_5330_473c_aa2f_4c5863e70688.slice/crio-193e5a950c77c71b0baa7b145b272491c50b6361f2795946740d3b995aa28fbc.scope\": RecentStats: unable to find data in memory cache]" Jan 23 17:11:05 crc kubenswrapper[4606]: I0123 17:11:05.238743 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 17:11:05 crc kubenswrapper[4606]: I0123 17:11:05.239282 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6c1be0c7-7122-4d22-abfc-8335101f93d7" containerName="glance-log" containerID="cri-o://81202c239191192d6e7505f82860b63b8c62da4215d76afd1856e1514674ec88" gracePeriod=30 Jan 23 17:11:05 crc kubenswrapper[4606]: I0123 17:11:05.239718 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6c1be0c7-7122-4d22-abfc-8335101f93d7" containerName="glance-httpd" containerID="cri-o://cd6b97d70e4add71caeb23954067f81d25774803be51d1e8dba0badb66d7b904" gracePeriod=30 Jan 23 17:11:05 crc kubenswrapper[4606]: I0123 17:11:05.608863 4606 generic.go:334] "Generic (PLEG): container finished" podID="6c1be0c7-7122-4d22-abfc-8335101f93d7" containerID="81202c239191192d6e7505f82860b63b8c62da4215d76afd1856e1514674ec88" exitCode=143 Jan 23 17:11:05 crc kubenswrapper[4606]: I0123 17:11:05.608905 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c1be0c7-7122-4d22-abfc-8335101f93d7","Type":"ContainerDied","Data":"81202c239191192d6e7505f82860b63b8c62da4215d76afd1856e1514674ec88"} Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.173292 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-c6g8c"] Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.174609 4606 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-c6g8c" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.202982 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-c6g8c"] Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.284806 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-cx5vj"] Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.285916 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-cx5vj" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.299306 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8c34151-9d7b-4d74-9dcd-ef62daa52555-operator-scripts\") pod \"nova-api-db-create-c6g8c\" (UID: \"b8c34151-9d7b-4d74-9dcd-ef62daa52555\") " pod="openstack/nova-api-db-create-c6g8c" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.299466 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh22q\" (UniqueName: \"kubernetes.io/projected/b8c34151-9d7b-4d74-9dcd-ef62daa52555-kube-api-access-nh22q\") pod \"nova-api-db-create-c6g8c\" (UID: \"b8c34151-9d7b-4d74-9dcd-ef62daa52555\") " pod="openstack/nova-api-db-create-c6g8c" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.356608 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-cx5vj"] Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.406684 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8c34151-9d7b-4d74-9dcd-ef62daa52555-operator-scripts\") pod \"nova-api-db-create-c6g8c\" (UID: \"b8c34151-9d7b-4d74-9dcd-ef62daa52555\") " pod="openstack/nova-api-db-create-c6g8c" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.406738 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5656c482-3ac8-450a-b302-9e8329674ca5-operator-scripts\") pod \"nova-cell0-db-create-cx5vj\" (UID: \"5656c482-3ac8-450a-b302-9e8329674ca5\") " pod="openstack/nova-cell0-db-create-cx5vj" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.406814 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66g5x\" (UniqueName: \"kubernetes.io/projected/5656c482-3ac8-450a-b302-9e8329674ca5-kube-api-access-66g5x\") pod \"nova-cell0-db-create-cx5vj\" (UID: \"5656c482-3ac8-450a-b302-9e8329674ca5\") " pod="openstack/nova-cell0-db-create-cx5vj" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.406838 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nh22q\" (UniqueName: \"kubernetes.io/projected/b8c34151-9d7b-4d74-9dcd-ef62daa52555-kube-api-access-nh22q\") pod \"nova-api-db-create-c6g8c\" (UID: \"b8c34151-9d7b-4d74-9dcd-ef62daa52555\") " pod="openstack/nova-api-db-create-c6g8c" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.407839 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8c34151-9d7b-4d74-9dcd-ef62daa52555-operator-scripts\") pod \"nova-api-db-create-c6g8c\" (UID: \"b8c34151-9d7b-4d74-9dcd-ef62daa52555\") " 
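
Every record in this log follows the klog header: severity plus month/day (I0123), wall time, PID (4606), source file:line, then a quoted message with key="value" pairs. A small Go parser for that shape; the regular expression approximates klog's format closely enough for these lines:

    package main

    import (
        "fmt"
        "regexp"
    )

    // Matches e.g.: I0123 17:11:06.174609 4606 util.go:30] "No sandbox..." pod="..."
    var klogLine = regexp.MustCompile(
        `^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w./]+:\d+)\] (.*)$`)

    func main() {
        line := `I0123 17:11:06.174609 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-c6g8c"`
        m := klogLine.FindStringSubmatch(line)
        if m == nil {
            return
        }
        fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s msg=%s\n",
            m[1], m[2], m[3], m[4], m[5], m[6])
    }
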
pod="openstack/nova-api-db-create-c6g8c" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.438788 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-5b37-account-create-update-pr4c4"] Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.443913 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-5b37-account-create-update-pr4c4" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.445746 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.449392 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nh22q\" (UniqueName: \"kubernetes.io/projected/b8c34151-9d7b-4d74-9dcd-ef62daa52555-kube-api-access-nh22q\") pod \"nova-api-db-create-c6g8c\" (UID: \"b8c34151-9d7b-4d74-9dcd-ef62daa52555\") " pod="openstack/nova-api-db-create-c6g8c" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.454461 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-5b37-account-create-update-pr4c4"] Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.490446 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-c6g8c" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.510980 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66g5x\" (UniqueName: \"kubernetes.io/projected/5656c482-3ac8-450a-b302-9e8329674ca5-kube-api-access-66g5x\") pod \"nova-cell0-db-create-cx5vj\" (UID: \"5656c482-3ac8-450a-b302-9e8329674ca5\") " pod="openstack/nova-cell0-db-create-cx5vj" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.511375 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd8d7c07-f254-476e-9f2e-7e1afdeddcad-operator-scripts\") pod \"nova-api-5b37-account-create-update-pr4c4\" (UID: \"dd8d7c07-f254-476e-9f2e-7e1afdeddcad\") " pod="openstack/nova-api-5b37-account-create-update-pr4c4" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.511489 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ks8q\" (UniqueName: \"kubernetes.io/projected/dd8d7c07-f254-476e-9f2e-7e1afdeddcad-kube-api-access-2ks8q\") pod \"nova-api-5b37-account-create-update-pr4c4\" (UID: \"dd8d7c07-f254-476e-9f2e-7e1afdeddcad\") " pod="openstack/nova-api-5b37-account-create-update-pr4c4" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.511692 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5656c482-3ac8-450a-b302-9e8329674ca5-operator-scripts\") pod \"nova-cell0-db-create-cx5vj\" (UID: \"5656c482-3ac8-450a-b302-9e8329674ca5\") " pod="openstack/nova-cell0-db-create-cx5vj" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.514892 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5656c482-3ac8-450a-b302-9e8329674ca5-operator-scripts\") pod \"nova-cell0-db-create-cx5vj\" (UID: \"5656c482-3ac8-450a-b302-9e8329674ca5\") " pod="openstack/nova-cell0-db-create-cx5vj" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.530417 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66g5x\" (UniqueName: 
\"kubernetes.io/projected/5656c482-3ac8-450a-b302-9e8329674ca5-kube-api-access-66g5x\") pod \"nova-cell0-db-create-cx5vj\" (UID: \"5656c482-3ac8-450a-b302-9e8329674ca5\") " pod="openstack/nova-cell0-db-create-cx5vj" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.562182 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-6nkhr"] Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.563629 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6nkhr" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.590142 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-6nkhr"] Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.615942 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-cx5vj" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.617347 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd8d7c07-f254-476e-9f2e-7e1afdeddcad-operator-scripts\") pod \"nova-api-5b37-account-create-update-pr4c4\" (UID: \"dd8d7c07-f254-476e-9f2e-7e1afdeddcad\") " pod="openstack/nova-api-5b37-account-create-update-pr4c4" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.617383 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ks8q\" (UniqueName: \"kubernetes.io/projected/dd8d7c07-f254-476e-9f2e-7e1afdeddcad-kube-api-access-2ks8q\") pod \"nova-api-5b37-account-create-update-pr4c4\" (UID: \"dd8d7c07-f254-476e-9f2e-7e1afdeddcad\") " pod="openstack/nova-api-5b37-account-create-update-pr4c4" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.617437 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c2b6537-c2b4-45ed-86ad-6c1018119db1-operator-scripts\") pod \"nova-cell1-db-create-6nkhr\" (UID: \"5c2b6537-c2b4-45ed-86ad-6c1018119db1\") " pod="openstack/nova-cell1-db-create-6nkhr" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.617462 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbthv\" (UniqueName: \"kubernetes.io/projected/5c2b6537-c2b4-45ed-86ad-6c1018119db1-kube-api-access-vbthv\") pod \"nova-cell1-db-create-6nkhr\" (UID: \"5c2b6537-c2b4-45ed-86ad-6c1018119db1\") " pod="openstack/nova-cell1-db-create-6nkhr" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.618241 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd8d7c07-f254-476e-9f2e-7e1afdeddcad-operator-scripts\") pod \"nova-api-5b37-account-create-update-pr4c4\" (UID: \"dd8d7c07-f254-476e-9f2e-7e1afdeddcad\") " pod="openstack/nova-api-5b37-account-create-update-pr4c4" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.633519 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-df02-account-create-update-qqvbw"] Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.635041 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-df02-account-create-update-qqvbw" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.637831 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.663326 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ks8q\" (UniqueName: \"kubernetes.io/projected/dd8d7c07-f254-476e-9f2e-7e1afdeddcad-kube-api-access-2ks8q\") pod \"nova-api-5b37-account-create-update-pr4c4\" (UID: \"dd8d7c07-f254-476e-9f2e-7e1afdeddcad\") " pod="openstack/nova-api-5b37-account-create-update-pr4c4" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.709389 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-df02-account-create-update-qqvbw"] Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.719836 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5j6x\" (UniqueName: \"kubernetes.io/projected/57a24014-72c4-4240-92ac-f0a2cc2956e2-kube-api-access-c5j6x\") pod \"nova-cell0-df02-account-create-update-qqvbw\" (UID: \"57a24014-72c4-4240-92ac-f0a2cc2956e2\") " pod="openstack/nova-cell0-df02-account-create-update-qqvbw" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.719905 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c2b6537-c2b4-45ed-86ad-6c1018119db1-operator-scripts\") pod \"nova-cell1-db-create-6nkhr\" (UID: \"5c2b6537-c2b4-45ed-86ad-6c1018119db1\") " pod="openstack/nova-cell1-db-create-6nkhr" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.719929 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57a24014-72c4-4240-92ac-f0a2cc2956e2-operator-scripts\") pod \"nova-cell0-df02-account-create-update-qqvbw\" (UID: \"57a24014-72c4-4240-92ac-f0a2cc2956e2\") " pod="openstack/nova-cell0-df02-account-create-update-qqvbw" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.719955 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbthv\" (UniqueName: \"kubernetes.io/projected/5c2b6537-c2b4-45ed-86ad-6c1018119db1-kube-api-access-vbthv\") pod \"nova-cell1-db-create-6nkhr\" (UID: \"5c2b6537-c2b4-45ed-86ad-6c1018119db1\") " pod="openstack/nova-cell1-db-create-6nkhr" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.725878 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c2b6537-c2b4-45ed-86ad-6c1018119db1-operator-scripts\") pod \"nova-cell1-db-create-6nkhr\" (UID: \"5c2b6537-c2b4-45ed-86ad-6c1018119db1\") " pod="openstack/nova-cell1-db-create-6nkhr" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.750425 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbthv\" (UniqueName: \"kubernetes.io/projected/5c2b6537-c2b4-45ed-86ad-6c1018119db1-kube-api-access-vbthv\") pod \"nova-cell1-db-create-6nkhr\" (UID: \"5c2b6537-c2b4-45ed-86ad-6c1018119db1\") " pod="openstack/nova-cell1-db-create-6nkhr" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.800042 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-5b37-account-create-update-pr4c4" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.821938 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5j6x\" (UniqueName: \"kubernetes.io/projected/57a24014-72c4-4240-92ac-f0a2cc2956e2-kube-api-access-c5j6x\") pod \"nova-cell0-df02-account-create-update-qqvbw\" (UID: \"57a24014-72c4-4240-92ac-f0a2cc2956e2\") " pod="openstack/nova-cell0-df02-account-create-update-qqvbw" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.821985 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57a24014-72c4-4240-92ac-f0a2cc2956e2-operator-scripts\") pod \"nova-cell0-df02-account-create-update-qqvbw\" (UID: \"57a24014-72c4-4240-92ac-f0a2cc2956e2\") " pod="openstack/nova-cell0-df02-account-create-update-qqvbw" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.822748 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57a24014-72c4-4240-92ac-f0a2cc2956e2-operator-scripts\") pod \"nova-cell0-df02-account-create-update-qqvbw\" (UID: \"57a24014-72c4-4240-92ac-f0a2cc2956e2\") " pod="openstack/nova-cell0-df02-account-create-update-qqvbw" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.870569 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5j6x\" (UniqueName: \"kubernetes.io/projected/57a24014-72c4-4240-92ac-f0a2cc2956e2-kube-api-access-c5j6x\") pod \"nova-cell0-df02-account-create-update-qqvbw\" (UID: \"57a24014-72c4-4240-92ac-f0a2cc2956e2\") " pod="openstack/nova-cell0-df02-account-create-update-qqvbw" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.881439 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-71a8-account-create-update-k7h4q"] Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.882656 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-71a8-account-create-update-k7h4q" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.887719 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.892990 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-71a8-account-create-update-k7h4q"] Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.923223 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmhmb\" (UniqueName: \"kubernetes.io/projected/0722c265-6ff0-4f82-93c9-f803a72040f5-kube-api-access-hmhmb\") pod \"nova-cell1-71a8-account-create-update-k7h4q\" (UID: \"0722c265-6ff0-4f82-93c9-f803a72040f5\") " pod="openstack/nova-cell1-71a8-account-create-update-k7h4q" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.923273 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0722c265-6ff0-4f82-93c9-f803a72040f5-operator-scripts\") pod \"nova-cell1-71a8-account-create-update-k7h4q\" (UID: \"0722c265-6ff0-4f82-93c9-f803a72040f5\") " pod="openstack/nova-cell1-71a8-account-create-update-k7h4q" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.936991 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-6nkhr" Jan 23 17:11:06 crc kubenswrapper[4606]: I0123 17:11:06.963995 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-df02-account-create-update-qqvbw" Jan 23 17:11:07 crc kubenswrapper[4606]: I0123 17:11:07.024366 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmhmb\" (UniqueName: \"kubernetes.io/projected/0722c265-6ff0-4f82-93c9-f803a72040f5-kube-api-access-hmhmb\") pod \"nova-cell1-71a8-account-create-update-k7h4q\" (UID: \"0722c265-6ff0-4f82-93c9-f803a72040f5\") " pod="openstack/nova-cell1-71a8-account-create-update-k7h4q" Jan 23 17:11:07 crc kubenswrapper[4606]: I0123 17:11:07.024615 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0722c265-6ff0-4f82-93c9-f803a72040f5-operator-scripts\") pod \"nova-cell1-71a8-account-create-update-k7h4q\" (UID: \"0722c265-6ff0-4f82-93c9-f803a72040f5\") " pod="openstack/nova-cell1-71a8-account-create-update-k7h4q" Jan 23 17:11:07 crc kubenswrapper[4606]: I0123 17:11:07.025353 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0722c265-6ff0-4f82-93c9-f803a72040f5-operator-scripts\") pod \"nova-cell1-71a8-account-create-update-k7h4q\" (UID: \"0722c265-6ff0-4f82-93c9-f803a72040f5\") " pod="openstack/nova-cell1-71a8-account-create-update-k7h4q" Jan 23 17:11:07 crc kubenswrapper[4606]: I0123 17:11:07.043029 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmhmb\" (UniqueName: \"kubernetes.io/projected/0722c265-6ff0-4f82-93c9-f803a72040f5-kube-api-access-hmhmb\") pod \"nova-cell1-71a8-account-create-update-k7h4q\" (UID: \"0722c265-6ff0-4f82-93c9-f803a72040f5\") " pod="openstack/nova-cell1-71a8-account-create-update-k7h4q" Jan 23 17:11:07 crc kubenswrapper[4606]: I0123 17:11:07.211444 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-71a8-account-create-update-k7h4q" Jan 23 17:11:08 crc kubenswrapper[4606]: I0123 17:11:08.157783 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6786d7bbc5-25lfl" Jan 23 17:11:08 crc kubenswrapper[4606]: I0123 17:11:08.163639 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6786d7bbc5-25lfl" Jan 23 17:11:08 crc kubenswrapper[4606]: I0123 17:11:08.439042 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b9775877b-mcc82" podUID="1dd13584-a1b9-4891-85a7-01afd519f746" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Jan 23 17:11:08 crc kubenswrapper[4606]: I0123 17:11:08.610345 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-57977896f6-qrk2s" podUID="59fc19de-8c69-43e4-bf0d-1cb8ea9a9120" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 23 17:11:08 crc kubenswrapper[4606]: I0123 17:11:08.674066 4606 generic.go:334] "Generic (PLEG): container finished" podID="6c1be0c7-7122-4d22-abfc-8335101f93d7" containerID="cd6b97d70e4add71caeb23954067f81d25774803be51d1e8dba0badb66d7b904" exitCode=0 Jan 23 17:11:08 crc kubenswrapper[4606]: I0123 17:11:08.674101 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c1be0c7-7122-4d22-abfc-8335101f93d7","Type":"ContainerDied","Data":"cd6b97d70e4add71caeb23954067f81d25774803be51d1e8dba0badb66d7b904"} Jan 23 17:11:09 crc kubenswrapper[4606]: I0123 17:11:09.267807 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="6c1be0c7-7122-4d22-abfc-8335101f93d7" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.155:9292/healthcheck\": dial tcp 10.217.0.155:9292: connect: connection refused" Jan 23 17:11:09 crc kubenswrapper[4606]: I0123 17:11:09.268490 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="6c1be0c7-7122-4d22-abfc-8335101f93d7" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.155:9292/healthcheck\": dial tcp 10.217.0.155:9292: connect: connection refused" Jan 23 17:11:09 crc kubenswrapper[4606]: I0123 17:11:09.919733 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 17:11:09 crc kubenswrapper[4606]: I0123 17:11:09.920011 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="beedd593-20ce-48a2-a997-3ab8b04c373f" containerName="glance-log" containerID="cri-o://a41585911dbecd7b4c8e4832aa3755c7c7e4ea568ad413a096d90dab83279c34" gracePeriod=30 Jan 23 17:11:09 crc kubenswrapper[4606]: I0123 17:11:09.921098 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="beedd593-20ce-48a2-a997-3ab8b04c373f" containerName="glance-httpd" containerID="cri-o://ae9b3c9445087d6ceed6c9421d26afc2ec8a39b0cb9e5d8ce0c4a68e3a385ea0" gracePeriod=30 Jan 23 17:11:10 crc kubenswrapper[4606]: I0123 17:11:10.692405 4606 generic.go:334] "Generic (PLEG): container finished" 
podID="beedd593-20ce-48a2-a997-3ab8b04c373f" containerID="a41585911dbecd7b4c8e4832aa3755c7c7e4ea568ad413a096d90dab83279c34" exitCode=143 Jan 23 17:11:10 crc kubenswrapper[4606]: I0123 17:11:10.692501 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"beedd593-20ce-48a2-a997-3ab8b04c373f","Type":"ContainerDied","Data":"a41585911dbecd7b4c8e4832aa3755c7c7e4ea568ad413a096d90dab83279c34"} Jan 23 17:11:14 crc kubenswrapper[4606]: E0123 17:11:14.111662 4606 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Jan 23 17:11:14 crc kubenswrapper[4606]: E0123 17:11:14.112334 4606 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n86h547hfch68bh5dfh6ch659h8dh565h58fh687h7fh68h5f5h554h59h64h8h6dhfbh84h5c9hdbh644h5cbhdbh78h55chcdh5c9h646h575q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tc9qs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(83d4b73c-dbe9-4e0a-96b7-a5c7f5b16c6e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 17:11:14 crc kubenswrapper[4606]: E0123 17:11:14.113524 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/openstackclient" podUID="83d4b73c-dbe9-4e0a-96b7-a5c7f5b16c6e" Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.504028 4606 scope.go:117] "RemoveContainer" containerID="fd89c0a4f7e677bd674d591f6849832be13c9ccf8e4292939beccbd739ec576f" Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.547794 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.677925 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/81ff9b5e-713c-48e9-901a-23d3f0236653-sg-core-conf-yaml\") pod \"81ff9b5e-713c-48e9-901a-23d3f0236653\" (UID: \"81ff9b5e-713c-48e9-901a-23d3f0236653\") " Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.677994 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/81ff9b5e-713c-48e9-901a-23d3f0236653-run-httpd\") pod \"81ff9b5e-713c-48e9-901a-23d3f0236653\" (UID: \"81ff9b5e-713c-48e9-901a-23d3f0236653\") " Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.678039 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ff9b5e-713c-48e9-901a-23d3f0236653-config-data\") pod \"81ff9b5e-713c-48e9-901a-23d3f0236653\" (UID: \"81ff9b5e-713c-48e9-901a-23d3f0236653\") " Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.678173 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81ff9b5e-713c-48e9-901a-23d3f0236653-scripts\") pod \"81ff9b5e-713c-48e9-901a-23d3f0236653\" (UID: \"81ff9b5e-713c-48e9-901a-23d3f0236653\") " Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.678295 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/81ff9b5e-713c-48e9-901a-23d3f0236653-log-httpd\") pod \"81ff9b5e-713c-48e9-901a-23d3f0236653\" (UID: \"81ff9b5e-713c-48e9-901a-23d3f0236653\") " Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.678355 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnwhc\" (UniqueName: \"kubernetes.io/projected/81ff9b5e-713c-48e9-901a-23d3f0236653-kube-api-access-qnwhc\") pod \"81ff9b5e-713c-48e9-901a-23d3f0236653\" (UID: \"81ff9b5e-713c-48e9-901a-23d3f0236653\") " Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.678724 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ff9b5e-713c-48e9-901a-23d3f0236653-combined-ca-bundle\") pod \"81ff9b5e-713c-48e9-901a-23d3f0236653\" (UID: \"81ff9b5e-713c-48e9-901a-23d3f0236653\") " Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.678799 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81ff9b5e-713c-48e9-901a-23d3f0236653-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "81ff9b5e-713c-48e9-901a-23d3f0236653" (UID: "81ff9b5e-713c-48e9-901a-23d3f0236653"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.679359 4606 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/81ff9b5e-713c-48e9-901a-23d3f0236653-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.679375 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81ff9b5e-713c-48e9-901a-23d3f0236653-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "81ff9b5e-713c-48e9-901a-23d3f0236653" (UID: "81ff9b5e-713c-48e9-901a-23d3f0236653"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.693580 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ff9b5e-713c-48e9-901a-23d3f0236653-scripts" (OuterVolumeSpecName: "scripts") pod "81ff9b5e-713c-48e9-901a-23d3f0236653" (UID: "81ff9b5e-713c-48e9-901a-23d3f0236653"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.698817 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81ff9b5e-713c-48e9-901a-23d3f0236653-kube-api-access-qnwhc" (OuterVolumeSpecName: "kube-api-access-qnwhc") pod "81ff9b5e-713c-48e9-901a-23d3f0236653" (UID: "81ff9b5e-713c-48e9-901a-23d3f0236653"). InnerVolumeSpecName "kube-api-access-qnwhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.768382 4606 generic.go:334] "Generic (PLEG): container finished" podID="beedd593-20ce-48a2-a997-3ab8b04c373f" containerID="ae9b3c9445087d6ceed6c9421d26afc2ec8a39b0cb9e5d8ce0c4a68e3a385ea0" exitCode=0 Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.768439 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"beedd593-20ce-48a2-a997-3ab8b04c373f","Type":"ContainerDied","Data":"ae9b3c9445087d6ceed6c9421d26afc2ec8a39b0cb9e5d8ce0c4a68e3a385ea0"} Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.775150 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ff9b5e-713c-48e9-901a-23d3f0236653-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "81ff9b5e-713c-48e9-901a-23d3f0236653" (UID: "81ff9b5e-713c-48e9-901a-23d3f0236653"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.781070 4606 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81ff9b5e-713c-48e9-901a-23d3f0236653-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.781093 4606 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/81ff9b5e-713c-48e9-901a-23d3f0236653-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.781102 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnwhc\" (UniqueName: \"kubernetes.io/projected/81ff9b5e-713c-48e9-901a-23d3f0236653-kube-api-access-qnwhc\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.781114 4606 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/81ff9b5e-713c-48e9-901a-23d3f0236653-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.806790 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.806875 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"81ff9b5e-713c-48e9-901a-23d3f0236653","Type":"ContainerDied","Data":"211cf480863fb2776aeea5a43db2cc20521a7076d1b7744ddf95e364110918d6"} Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.806924 4606 scope.go:117] "RemoveContainer" containerID="98131baa715192c92d8b7c9263f391bda9e367804ba10b260b174b52beacc719" Jan 23 17:11:14 crc kubenswrapper[4606]: E0123 17:11:14.814208 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="83d4b73c-dbe9-4e0a-96b7-a5c7f5b16c6e" Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.873947 4606 scope.go:117] "RemoveContainer" containerID="29630b6b717ac4bf6a388560951242a1276a5c3ecc7632820e6f9fc305e5481f" Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.891041 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ff9b5e-713c-48e9-901a-23d3f0236653-config-data" (OuterVolumeSpecName: "config-data") pod "81ff9b5e-713c-48e9-901a-23d3f0236653" (UID: "81ff9b5e-713c-48e9-901a-23d3f0236653"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.892134 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.910508 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ff9b5e-713c-48e9-901a-23d3f0236653-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81ff9b5e-713c-48e9-901a-23d3f0236653" (UID: "81ff9b5e-713c-48e9-901a-23d3f0236653"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.947027 4606 scope.go:117] "RemoveContainer" containerID="99c675e306fde370fdbff23cbd107294ebda3eb930259189eeeeb7bfac643f8d" Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.984902 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c1be0c7-7122-4d22-abfc-8335101f93d7-combined-ca-bundle\") pod \"6c1be0c7-7122-4d22-abfc-8335101f93d7\" (UID: \"6c1be0c7-7122-4d22-abfc-8335101f93d7\") " Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.985195 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c1be0c7-7122-4d22-abfc-8335101f93d7-config-data\") pod \"6c1be0c7-7122-4d22-abfc-8335101f93d7\" (UID: \"6c1be0c7-7122-4d22-abfc-8335101f93d7\") " Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.985283 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c1be0c7-7122-4d22-abfc-8335101f93d7-logs\") pod \"6c1be0c7-7122-4d22-abfc-8335101f93d7\" (UID: \"6c1be0c7-7122-4d22-abfc-8335101f93d7\") " Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.985392 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6bgp\" (UniqueName: \"kubernetes.io/projected/6c1be0c7-7122-4d22-abfc-8335101f93d7-kube-api-access-n6bgp\") pod \"6c1be0c7-7122-4d22-abfc-8335101f93d7\" (UID: \"6c1be0c7-7122-4d22-abfc-8335101f93d7\") " Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.985564 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c1be0c7-7122-4d22-abfc-8335101f93d7-internal-tls-certs\") pod \"6c1be0c7-7122-4d22-abfc-8335101f93d7\" (UID: \"6c1be0c7-7122-4d22-abfc-8335101f93d7\") " Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.985663 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c1be0c7-7122-4d22-abfc-8335101f93d7-scripts\") pod \"6c1be0c7-7122-4d22-abfc-8335101f93d7\" (UID: \"6c1be0c7-7122-4d22-abfc-8335101f93d7\") " Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.985853 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"6c1be0c7-7122-4d22-abfc-8335101f93d7\" (UID: \"6c1be0c7-7122-4d22-abfc-8335101f93d7\") " Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.986029 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c1be0c7-7122-4d22-abfc-8335101f93d7-httpd-run\") pod \"6c1be0c7-7122-4d22-abfc-8335101f93d7\" (UID: \"6c1be0c7-7122-4d22-abfc-8335101f93d7\") " Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.986817 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ff9b5e-713c-48e9-901a-23d3f0236653-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.986913 4606 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ff9b5e-713c-48e9-901a-23d3f0236653-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:14 crc 
kubenswrapper[4606]: I0123 17:11:14.985086 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.988314 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c1be0c7-7122-4d22-abfc-8335101f93d7-logs" (OuterVolumeSpecName: "logs") pod "6c1be0c7-7122-4d22-abfc-8335101f93d7" (UID: "6c1be0c7-7122-4d22-abfc-8335101f93d7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.989996 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c1be0c7-7122-4d22-abfc-8335101f93d7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6c1be0c7-7122-4d22-abfc-8335101f93d7" (UID: "6c1be0c7-7122-4d22-abfc-8335101f93d7"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:11:14 crc kubenswrapper[4606]: I0123 17:11:14.993341 4606 scope.go:117] "RemoveContainer" containerID="23d4bb3638a1b150d36a88d8223be919f00a173348795755da2311fb18c91d1e" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.008656 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "6c1be0c7-7122-4d22-abfc-8335101f93d7" (UID: "6c1be0c7-7122-4d22-abfc-8335101f93d7"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.008714 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c1be0c7-7122-4d22-abfc-8335101f93d7-scripts" (OuterVolumeSpecName: "scripts") pod "6c1be0c7-7122-4d22-abfc-8335101f93d7" (UID: "6c1be0c7-7122-4d22-abfc-8335101f93d7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.013087 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c1be0c7-7122-4d22-abfc-8335101f93d7-kube-api-access-n6bgp" (OuterVolumeSpecName: "kube-api-access-n6bgp") pod "6c1be0c7-7122-4d22-abfc-8335101f93d7" (UID: "6c1be0c7-7122-4d22-abfc-8335101f93d7"). InnerVolumeSpecName "kube-api-access-n6bgp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.058048 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c1be0c7-7122-4d22-abfc-8335101f93d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c1be0c7-7122-4d22-abfc-8335101f93d7" (UID: "6c1be0c7-7122-4d22-abfc-8335101f93d7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.088468 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beedd593-20ce-48a2-a997-3ab8b04c373f-config-data\") pod \"beedd593-20ce-48a2-a997-3ab8b04c373f\" (UID: \"beedd593-20ce-48a2-a997-3ab8b04c373f\") " Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.089904 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/beedd593-20ce-48a2-a997-3ab8b04c373f-logs\") pod \"beedd593-20ce-48a2-a997-3ab8b04c373f\" (UID: \"beedd593-20ce-48a2-a997-3ab8b04c373f\") " Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.089935 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/beedd593-20ce-48a2-a997-3ab8b04c373f-httpd-run\") pod \"beedd593-20ce-48a2-a997-3ab8b04c373f\" (UID: \"beedd593-20ce-48a2-a997-3ab8b04c373f\") " Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.090056 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/beedd593-20ce-48a2-a997-3ab8b04c373f-public-tls-certs\") pod \"beedd593-20ce-48a2-a997-3ab8b04c373f\" (UID: \"beedd593-20ce-48a2-a997-3ab8b04c373f\") " Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.090123 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beedd593-20ce-48a2-a997-3ab8b04c373f-combined-ca-bundle\") pod \"beedd593-20ce-48a2-a997-3ab8b04c373f\" (UID: \"beedd593-20ce-48a2-a997-3ab8b04c373f\") " Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.090172 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2nv7\" (UniqueName: \"kubernetes.io/projected/beedd593-20ce-48a2-a997-3ab8b04c373f-kube-api-access-m2nv7\") pod \"beedd593-20ce-48a2-a997-3ab8b04c373f\" (UID: \"beedd593-20ce-48a2-a997-3ab8b04c373f\") " Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.090199 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/beedd593-20ce-48a2-a997-3ab8b04c373f-scripts\") pod \"beedd593-20ce-48a2-a997-3ab8b04c373f\" (UID: \"beedd593-20ce-48a2-a997-3ab8b04c373f\") " Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.090213 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"beedd593-20ce-48a2-a997-3ab8b04c373f\" (UID: \"beedd593-20ce-48a2-a997-3ab8b04c373f\") " Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.090823 4606 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.090848 4606 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c1be0c7-7122-4d22-abfc-8335101f93d7-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.090863 4606 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6c1be0c7-7122-4d22-abfc-8335101f93d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.090876 4606 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c1be0c7-7122-4d22-abfc-8335101f93d7-logs\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.090886 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6bgp\" (UniqueName: \"kubernetes.io/projected/6c1be0c7-7122-4d22-abfc-8335101f93d7-kube-api-access-n6bgp\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.090896 4606 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c1be0c7-7122-4d22-abfc-8335101f93d7-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.092921 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/beedd593-20ce-48a2-a997-3ab8b04c373f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "beedd593-20ce-48a2-a997-3ab8b04c373f" (UID: "beedd593-20ce-48a2-a997-3ab8b04c373f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.093172 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/beedd593-20ce-48a2-a997-3ab8b04c373f-logs" (OuterVolumeSpecName: "logs") pod "beedd593-20ce-48a2-a997-3ab8b04c373f" (UID: "beedd593-20ce-48a2-a997-3ab8b04c373f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.107762 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beedd593-20ce-48a2-a997-3ab8b04c373f-kube-api-access-m2nv7" (OuterVolumeSpecName: "kube-api-access-m2nv7") pod "beedd593-20ce-48a2-a997-3ab8b04c373f" (UID: "beedd593-20ce-48a2-a997-3ab8b04c373f"). InnerVolumeSpecName "kube-api-access-m2nv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.125718 4606 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.126912 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/beedd593-20ce-48a2-a997-3ab8b04c373f-scripts" (OuterVolumeSpecName: "scripts") pod "beedd593-20ce-48a2-a997-3ab8b04c373f" (UID: "beedd593-20ce-48a2-a997-3ab8b04c373f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.134870 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "beedd593-20ce-48a2-a997-3ab8b04c373f" (UID: "beedd593-20ce-48a2-a997-3ab8b04c373f"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.169750 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c1be0c7-7122-4d22-abfc-8335101f93d7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6c1be0c7-7122-4d22-abfc-8335101f93d7" (UID: "6c1be0c7-7122-4d22-abfc-8335101f93d7"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.182439 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c1be0c7-7122-4d22-abfc-8335101f93d7-config-data" (OuterVolumeSpecName: "config-data") pod "6c1be0c7-7122-4d22-abfc-8335101f93d7" (UID: "6c1be0c7-7122-4d22-abfc-8335101f93d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.192251 4606 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/beedd593-20ce-48a2-a997-3ab8b04c373f-logs\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.192276 4606 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/beedd593-20ce-48a2-a997-3ab8b04c373f-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.192286 4606 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.192296 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c1be0c7-7122-4d22-abfc-8335101f93d7-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.192306 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2nv7\" (UniqueName: \"kubernetes.io/projected/beedd593-20ce-48a2-a997-3ab8b04c373f-kube-api-access-m2nv7\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.192315 4606 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/beedd593-20ce-48a2-a997-3ab8b04c373f-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.192339 4606 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.192347 4606 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c1be0c7-7122-4d22-abfc-8335101f93d7-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.218443 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/beedd593-20ce-48a2-a997-3ab8b04c373f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "beedd593-20ce-48a2-a997-3ab8b04c373f" (UID: "beedd593-20ce-48a2-a997-3ab8b04c373f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.219306 4606 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.220736 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/beedd593-20ce-48a2-a997-3ab8b04c373f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "beedd593-20ce-48a2-a997-3ab8b04c373f" (UID: "beedd593-20ce-48a2-a997-3ab8b04c373f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.252517 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/beedd593-20ce-48a2-a997-3ab8b04c373f-config-data" (OuterVolumeSpecName: "config-data") pod "beedd593-20ce-48a2-a997-3ab8b04c373f" (UID: "beedd593-20ce-48a2-a997-3ab8b04c373f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.296614 4606 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/beedd593-20ce-48a2-a997-3ab8b04c373f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.300230 4606 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beedd593-20ce-48a2-a997-3ab8b04c373f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.300264 4606 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.300277 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beedd593-20ce-48a2-a997-3ab8b04c373f-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.328670 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.338580 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.374284 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-5b37-account-create-update-pr4c4"] Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.394627 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:11:15 crc kubenswrapper[4606]: E0123 17:11:15.395162 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beedd593-20ce-48a2-a997-3ab8b04c373f" containerName="glance-httpd" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.395176 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="beedd593-20ce-48a2-a997-3ab8b04c373f" containerName="glance-httpd" Jan 23 17:11:15 crc kubenswrapper[4606]: E0123 17:11:15.395196 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ff9b5e-713c-48e9-901a-23d3f0236653" containerName="proxy-httpd" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.395203 4606 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="81ff9b5e-713c-48e9-901a-23d3f0236653" containerName="proxy-httpd" Jan 23 17:11:15 crc kubenswrapper[4606]: E0123 17:11:15.395214 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c1be0c7-7122-4d22-abfc-8335101f93d7" containerName="glance-log" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.395220 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c1be0c7-7122-4d22-abfc-8335101f93d7" containerName="glance-log" Jan 23 17:11:15 crc kubenswrapper[4606]: E0123 17:11:15.395231 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beedd593-20ce-48a2-a997-3ab8b04c373f" containerName="glance-log" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.395236 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="beedd593-20ce-48a2-a997-3ab8b04c373f" containerName="glance-log" Jan 23 17:11:15 crc kubenswrapper[4606]: E0123 17:11:15.395243 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ff9b5e-713c-48e9-901a-23d3f0236653" containerName="ceilometer-notification-agent" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.395248 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ff9b5e-713c-48e9-901a-23d3f0236653" containerName="ceilometer-notification-agent" Jan 23 17:11:15 crc kubenswrapper[4606]: E0123 17:11:15.395254 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ff9b5e-713c-48e9-901a-23d3f0236653" containerName="ceilometer-central-agent" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.395260 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ff9b5e-713c-48e9-901a-23d3f0236653" containerName="ceilometer-central-agent" Jan 23 17:11:15 crc kubenswrapper[4606]: E0123 17:11:15.395275 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c1be0c7-7122-4d22-abfc-8335101f93d7" containerName="glance-httpd" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.395280 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c1be0c7-7122-4d22-abfc-8335101f93d7" containerName="glance-httpd" Jan 23 17:11:15 crc kubenswrapper[4606]: E0123 17:11:15.395288 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ff9b5e-713c-48e9-901a-23d3f0236653" containerName="sg-core" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.395293 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ff9b5e-713c-48e9-901a-23d3f0236653" containerName="sg-core" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.395463 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c1be0c7-7122-4d22-abfc-8335101f93d7" containerName="glance-httpd" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.395480 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="81ff9b5e-713c-48e9-901a-23d3f0236653" containerName="sg-core" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.395494 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c1be0c7-7122-4d22-abfc-8335101f93d7" containerName="glance-log" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.395505 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="beedd593-20ce-48a2-a997-3ab8b04c373f" containerName="glance-httpd" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.395562 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="beedd593-20ce-48a2-a997-3ab8b04c373f" containerName="glance-log" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.395576 4606 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="81ff9b5e-713c-48e9-901a-23d3f0236653" containerName="proxy-httpd" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.395658 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="81ff9b5e-713c-48e9-901a-23d3f0236653" containerName="ceilometer-central-agent" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.395667 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="81ff9b5e-713c-48e9-901a-23d3f0236653" containerName="ceilometer-notification-agent" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.399789 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.404179 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.404517 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.412651 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.504467 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-scripts\") pod \"ceilometer-0\" (UID: \"58ce1e0c-a44a-499b-8fca-aab76e6c56f7\") " pod="openstack/ceilometer-0" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.504662 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrv58\" (UniqueName: \"kubernetes.io/projected/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-kube-api-access-lrv58\") pod \"ceilometer-0\" (UID: \"58ce1e0c-a44a-499b-8fca-aab76e6c56f7\") " pod="openstack/ceilometer-0" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.504738 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-config-data\") pod \"ceilometer-0\" (UID: \"58ce1e0c-a44a-499b-8fca-aab76e6c56f7\") " pod="openstack/ceilometer-0" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.504830 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-log-httpd\") pod \"ceilometer-0\" (UID: \"58ce1e0c-a44a-499b-8fca-aab76e6c56f7\") " pod="openstack/ceilometer-0" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.506756 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"58ce1e0c-a44a-499b-8fca-aab76e6c56f7\") " pod="openstack/ceilometer-0" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.506798 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-run-httpd\") pod \"ceilometer-0\" (UID: \"58ce1e0c-a44a-499b-8fca-aab76e6c56f7\") " pod="openstack/ceilometer-0" Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.506819 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.610578 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"58ce1e0c-a44a-499b-8fca-aab76e6c56f7\") " pod="openstack/ceilometer-0"
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.610860 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-run-httpd\") pod \"ceilometer-0\" (UID: \"58ce1e0c-a44a-499b-8fca-aab76e6c56f7\") " pod="openstack/ceilometer-0"
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.610883 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"58ce1e0c-a44a-499b-8fca-aab76e6c56f7\") " pod="openstack/ceilometer-0"
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.610919 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-scripts\") pod \"ceilometer-0\" (UID: \"58ce1e0c-a44a-499b-8fca-aab76e6c56f7\") " pod="openstack/ceilometer-0"
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.610962 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrv58\" (UniqueName: \"kubernetes.io/projected/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-kube-api-access-lrv58\") pod \"ceilometer-0\" (UID: \"58ce1e0c-a44a-499b-8fca-aab76e6c56f7\") " pod="openstack/ceilometer-0"
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.610990 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-config-data\") pod \"ceilometer-0\" (UID: \"58ce1e0c-a44a-499b-8fca-aab76e6c56f7\") " pod="openstack/ceilometer-0"
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.611033 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-log-httpd\") pod \"ceilometer-0\" (UID: \"58ce1e0c-a44a-499b-8fca-aab76e6c56f7\") " pod="openstack/ceilometer-0"
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.611452 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-log-httpd\") pod \"ceilometer-0\" (UID: \"58ce1e0c-a44a-499b-8fca-aab76e6c56f7\") " pod="openstack/ceilometer-0"
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.618747 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-run-httpd\") pod \"ceilometer-0\" (UID: \"58ce1e0c-a44a-499b-8fca-aab76e6c56f7\") " pod="openstack/ceilometer-0"
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.626556 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"58ce1e0c-a44a-499b-8fca-aab76e6c56f7\") " pod="openstack/ceilometer-0"
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.632622 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-config-data\") pod \"ceilometer-0\" (UID: \"58ce1e0c-a44a-499b-8fca-aab76e6c56f7\") " pod="openstack/ceilometer-0"
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.652345 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrv58\" (UniqueName: \"kubernetes.io/projected/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-kube-api-access-lrv58\") pod \"ceilometer-0\" (UID: \"58ce1e0c-a44a-499b-8fca-aab76e6c56f7\") " pod="openstack/ceilometer-0"
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.678382 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-scripts\") pod \"ceilometer-0\" (UID: \"58ce1e0c-a44a-499b-8fca-aab76e6c56f7\") " pod="openstack/ceilometer-0"
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.678480 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"58ce1e0c-a44a-499b-8fca-aab76e6c56f7\") " pod="openstack/ceilometer-0"
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.709913 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6fc6b8db9b-nkqzh"
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.790196 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.815134 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7e867f98-5330-473c-aa2f-4c5863e70688-config\") pod \"7e867f98-5330-473c-aa2f-4c5863e70688\" (UID: \"7e867f98-5330-473c-aa2f-4c5863e70688\") "
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.815213 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7e867f98-5330-473c-aa2f-4c5863e70688-httpd-config\") pod \"7e867f98-5330-473c-aa2f-4c5863e70688\" (UID: \"7e867f98-5330-473c-aa2f-4c5863e70688\") "
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.815315 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpw6v\" (UniqueName: \"kubernetes.io/projected/7e867f98-5330-473c-aa2f-4c5863e70688-kube-api-access-gpw6v\") pod \"7e867f98-5330-473c-aa2f-4c5863e70688\" (UID: \"7e867f98-5330-473c-aa2f-4c5863e70688\") "
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.815364 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e867f98-5330-473c-aa2f-4c5863e70688-combined-ca-bundle\") pod \"7e867f98-5330-473c-aa2f-4c5863e70688\" (UID: \"7e867f98-5330-473c-aa2f-4c5863e70688\") "
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.815394 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e867f98-5330-473c-aa2f-4c5863e70688-ovndb-tls-certs\") pod \"7e867f98-5330-473c-aa2f-4c5863e70688\" (UID: \"7e867f98-5330-473c-aa2f-4c5863e70688\") "
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.823941 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.823950 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e867f98-5330-473c-aa2f-4c5863e70688-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "7e867f98-5330-473c-aa2f-4c5863e70688" (UID: "7e867f98-5330-473c-aa2f-4c5863e70688"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.833822 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e867f98-5330-473c-aa2f-4c5863e70688-kube-api-access-gpw6v" (OuterVolumeSpecName: "kube-api-access-gpw6v") pod "7e867f98-5330-473c-aa2f-4c5863e70688" (UID: "7e867f98-5330-473c-aa2f-4c5863e70688"). InnerVolumeSpecName "kube-api-access-gpw6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.876656 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-6nkhr"]
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.893988 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"beedd593-20ce-48a2-a997-3ab8b04c373f","Type":"ContainerDied","Data":"dc52a0d797d97d29a47b3122d0c9f0587d44a602fc8eb796614687ab8f29f3f4"}
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.894051 4606 scope.go:117] "RemoveContainer" containerID="ae9b3c9445087d6ceed6c9421d26afc2ec8a39b0cb9e5d8ce0c4a68e3a385ea0"
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.894261 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.923129 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e867f98-5330-473c-aa2f-4c5863e70688-config" (OuterVolumeSpecName: "config") pod "7e867f98-5330-473c-aa2f-4c5863e70688" (UID: "7e867f98-5330-473c-aa2f-4c5863e70688"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.927518 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fc6b8db9b-nkqzh" event={"ID":"7e867f98-5330-473c-aa2f-4c5863e70688","Type":"ContainerDied","Data":"e6341f7ae4ca8d4b7e12e5f51f5a9fd5be7c35d7b6fff1d53ccb934231f65567"}
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.927642 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6fc6b8db9b-nkqzh"
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.928677 4606 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7e867f98-5330-473c-aa2f-4c5863e70688-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.928696 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpw6v\" (UniqueName: \"kubernetes.io/projected/7e867f98-5330-473c-aa2f-4c5863e70688-kube-api-access-gpw6v\") on node \"crc\" DevicePath \"\""
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.928707 4606 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7e867f98-5330-473c-aa2f-4c5863e70688-config\") on node \"crc\" DevicePath \"\""
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.938443 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e867f98-5330-473c-aa2f-4c5863e70688-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e867f98-5330-473c-aa2f-4c5863e70688" (UID: "7e867f98-5330-473c-aa2f-4c5863e70688"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.939557 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c1be0c7-7122-4d22-abfc-8335101f93d7","Type":"ContainerDied","Data":"ff9025e3950a2bc9e106d28752694e8699b93d293f67b576b2bf444ee7f3dab1"}
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.939668 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.947221 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-c6g8c"]
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.953809 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-5b37-account-create-update-pr4c4" event={"ID":"dd8d7c07-f254-476e-9f2e-7e1afdeddcad","Type":"ContainerStarted","Data":"d162427befa56a7e9ebb3908f74f50b83f205a6781a3939b3c2bb587159d710f"}
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.954011 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-5b37-account-create-update-pr4c4" event={"ID":"dd8d7c07-f254-476e-9f2e-7e1afdeddcad","Type":"ContainerStarted","Data":"c8f7509f9f5e55c32ef4f4e29dfd005539ecf872289fb28d2df08b74ed9a57f2"}
Jan 23 17:11:15 crc kubenswrapper[4606]: I0123 17:11:15.960270 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-71a8-account-create-update-k7h4q"]
Jan 23 17:11:15 crc kubenswrapper[4606]: W0123 17:11:15.972304 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57a24014_72c4_4240_92ac_f0a2cc2956e2.slice/crio-07c7911d137a90caaecb65d519fbdfe507fbe6bc73e105474a5e693fd6465b5a WatchSource:0}: Error finding container 07c7911d137a90caaecb65d519fbdfe507fbe6bc73e105474a5e693fd6465b5a: Status 404 returned error can't find the container with id 07c7911d137a90caaecb65d519fbdfe507fbe6bc73e105474a5e693fd6465b5a
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.000880 4606 scope.go:117] "RemoveContainer" containerID="a41585911dbecd7b4c8e4832aa3755c7c7e4ea568ad413a096d90dab83279c34"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.017071 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-df02-account-create-update-qqvbw"]
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.030987 4606 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e867f98-5330-473c-aa2f-4c5863e70688-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.043644 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-cx5vj"]
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.078687 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-5b37-account-create-update-pr4c4" podStartSLOduration=10.078662248 podStartE2EDuration="10.078662248s" podCreationTimestamp="2026-01-23 17:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:11:15.972104069 +0000 UTC m=+1109.972296475" watchObservedRunningTime="2026-01-23 17:11:16.078662248 +0000 UTC m=+1110.078854654"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.117860 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.139144 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e867f98-5330-473c-aa2f-4c5863e70688-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "7e867f98-5330-473c-aa2f-4c5863e70688" (UID: "7e867f98-5330-473c-aa2f-4c5863e70688"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.152027 4606 scope.go:117] "RemoveContainer" containerID="bba25af25e90eff5fd8683e770b611e570b0647ea5ba8d01659f310ddfb83cf3"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.183218 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.234427 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 23 17:11:16 crc kubenswrapper[4606]: E0123 17:11:16.234912 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e867f98-5330-473c-aa2f-4c5863e70688" containerName="neutron-api"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.234928 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e867f98-5330-473c-aa2f-4c5863e70688" containerName="neutron-api"
Jan 23 17:11:16 crc kubenswrapper[4606]: E0123 17:11:16.234944 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e867f98-5330-473c-aa2f-4c5863e70688" containerName="neutron-httpd"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.234950 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e867f98-5330-473c-aa2f-4c5863e70688" containerName="neutron-httpd"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.235164 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e867f98-5330-473c-aa2f-4c5863e70688" containerName="neutron-api"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.235181 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e867f98-5330-473c-aa2f-4c5863e70688" containerName="neutron-httpd"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.236148 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.238225 4606 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e867f98-5330-473c-aa2f-4c5863e70688-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.238955 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.239168 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.239432 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.239821 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wjzzh"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.284625 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.301929 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.339604 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-676mc\" (UniqueName: \"kubernetes.io/projected/f8318015-9694-4f24-a509-141912db7eb9-kube-api-access-676mc\") pod \"glance-default-external-api-0\" (UID: \"f8318015-9694-4f24-a509-141912db7eb9\") " pod="openstack/glance-default-external-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.339677 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8318015-9694-4f24-a509-141912db7eb9-logs\") pod \"glance-default-external-api-0\" (UID: \"f8318015-9694-4f24-a509-141912db7eb9\") " pod="openstack/glance-default-external-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.339733 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8318015-9694-4f24-a509-141912db7eb9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f8318015-9694-4f24-a509-141912db7eb9\") " pod="openstack/glance-default-external-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.339765 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"f8318015-9694-4f24-a509-141912db7eb9\") " pod="openstack/glance-default-external-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.339794 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8318015-9694-4f24-a509-141912db7eb9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f8318015-9694-4f24-a509-141912db7eb9\") " pod="openstack/glance-default-external-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.339827 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8318015-9694-4f24-a509-141912db7eb9-config-data\") pod \"glance-default-external-api-0\" (UID: \"f8318015-9694-4f24-a509-141912db7eb9\") " pod="openstack/glance-default-external-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.339888 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8318015-9694-4f24-a509-141912db7eb9-scripts\") pod \"glance-default-external-api-0\" (UID: \"f8318015-9694-4f24-a509-141912db7eb9\") " pod="openstack/glance-default-external-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.339903 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f8318015-9694-4f24-a509-141912db7eb9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f8318015-9694-4f24-a509-141912db7eb9\") " pod="openstack/glance-default-external-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.353719 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.439745 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c1be0c7-7122-4d22-abfc-8335101f93d7" path="/var/lib/kubelet/pods/6c1be0c7-7122-4d22-abfc-8335101f93d7/volumes"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.440808 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81ff9b5e-713c-48e9-901a-23d3f0236653" path="/var/lib/kubelet/pods/81ff9b5e-713c-48e9-901a-23d3f0236653/volumes"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.443316 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8318015-9694-4f24-a509-141912db7eb9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f8318015-9694-4f24-a509-141912db7eb9\") " pod="openstack/glance-default-external-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.443353 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"f8318015-9694-4f24-a509-141912db7eb9\") " pod="openstack/glance-default-external-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.443405 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8318015-9694-4f24-a509-141912db7eb9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f8318015-9694-4f24-a509-141912db7eb9\") " pod="openstack/glance-default-external-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.443471 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8318015-9694-4f24-a509-141912db7eb9-config-data\") pod \"glance-default-external-api-0\" (UID: \"f8318015-9694-4f24-a509-141912db7eb9\") " pod="openstack/glance-default-external-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.443609 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8318015-9694-4f24-a509-141912db7eb9-scripts\") pod \"glance-default-external-api-0\" (UID: \"f8318015-9694-4f24-a509-141912db7eb9\") " pod="openstack/glance-default-external-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.443636 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f8318015-9694-4f24-a509-141912db7eb9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f8318015-9694-4f24-a509-141912db7eb9\") " pod="openstack/glance-default-external-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.443686 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-676mc\" (UniqueName: \"kubernetes.io/projected/f8318015-9694-4f24-a509-141912db7eb9-kube-api-access-676mc\") pod \"glance-default-external-api-0\" (UID: \"f8318015-9694-4f24-a509-141912db7eb9\") " pod="openstack/glance-default-external-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.443735 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8318015-9694-4f24-a509-141912db7eb9-logs\") pod \"glance-default-external-api-0\" (UID: \"f8318015-9694-4f24-a509-141912db7eb9\") " pod="openstack/glance-default-external-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.444128 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8318015-9694-4f24-a509-141912db7eb9-logs\") pod \"glance-default-external-api-0\" (UID: \"f8318015-9694-4f24-a509-141912db7eb9\") " pod="openstack/glance-default-external-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.446825 4606 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"f8318015-9694-4f24-a509-141912db7eb9\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.450383 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f8318015-9694-4f24-a509-141912db7eb9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f8318015-9694-4f24-a509-141912db7eb9\") " pod="openstack/glance-default-external-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.462395 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="beedd593-20ce-48a2-a997-3ab8b04c373f" path="/var/lib/kubelet/pods/beedd593-20ce-48a2-a997-3ab8b04c373f/volumes"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.463288 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.464623 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.464702 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.475216 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8318015-9694-4f24-a509-141912db7eb9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f8318015-9694-4f24-a509-141912db7eb9\") " pod="openstack/glance-default-external-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.483760 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8318015-9694-4f24-a509-141912db7eb9-scripts\") pod \"glance-default-external-api-0\" (UID: \"f8318015-9694-4f24-a509-141912db7eb9\") " pod="openstack/glance-default-external-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.484675 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8318015-9694-4f24-a509-141912db7eb9-config-data\") pod \"glance-default-external-api-0\" (UID: \"f8318015-9694-4f24-a509-141912db7eb9\") " pod="openstack/glance-default-external-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.485145 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.485327 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.490670 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6fc6b8db9b-nkqzh"]
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.495701 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8318015-9694-4f24-a509-141912db7eb9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f8318015-9694-4f24-a509-141912db7eb9\") " pod="openstack/glance-default-external-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.504798 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-676mc\" (UniqueName: \"kubernetes.io/projected/f8318015-9694-4f24-a509-141912db7eb9-kube-api-access-676mc\") pod \"glance-default-external-api-0\" (UID: \"f8318015-9694-4f24-a509-141912db7eb9\") " pod="openstack/glance-default-external-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.509333 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6fc6b8db9b-nkqzh"]
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.595449 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"f8318015-9694-4f24-a509-141912db7eb9\") " pod="openstack/glance-default-external-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.605814 4606 scope.go:117] "RemoveContainer" containerID="193e5a950c77c71b0baa7b145b272491c50b6361f2795946740d3b995aa28fbc"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.662015 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04e154c1-39df-4c97-b8f7-32591d612abd-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"04e154c1-39df-4c97-b8f7-32591d612abd\") " pod="openstack/glance-default-internal-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.662138 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"04e154c1-39df-4c97-b8f7-32591d612abd\") " pod="openstack/glance-default-internal-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.662524 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04e154c1-39df-4c97-b8f7-32591d612abd-logs\") pod \"glance-default-internal-api-0\" (UID: \"04e154c1-39df-4c97-b8f7-32591d612abd\") " pod="openstack/glance-default-internal-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.662580 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rpgb\" (UniqueName: \"kubernetes.io/projected/04e154c1-39df-4c97-b8f7-32591d612abd-kube-api-access-8rpgb\") pod \"glance-default-internal-api-0\" (UID: \"04e154c1-39df-4c97-b8f7-32591d612abd\") " pod="openstack/glance-default-internal-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.662647 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04e154c1-39df-4c97-b8f7-32591d612abd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"04e154c1-39df-4c97-b8f7-32591d612abd\") " pod="openstack/glance-default-internal-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.662894 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/04e154c1-39df-4c97-b8f7-32591d612abd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"04e154c1-39df-4c97-b8f7-32591d612abd\") " pod="openstack/glance-default-internal-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.662937 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04e154c1-39df-4c97-b8f7-32591d612abd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"04e154c1-39df-4c97-b8f7-32591d612abd\") " pod="openstack/glance-default-internal-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.662957 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04e154c1-39df-4c97-b8f7-32591d612abd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"04e154c1-39df-4c97-b8f7-32591d612abd\") " pod="openstack/glance-default-internal-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.671386 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.739345 4606 scope.go:117] "RemoveContainer" containerID="cd6b97d70e4add71caeb23954067f81d25774803be51d1e8dba0badb66d7b904"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.765260 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04e154c1-39df-4c97-b8f7-32591d612abd-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"04e154c1-39df-4c97-b8f7-32591d612abd\") " pod="openstack/glance-default-internal-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.765366 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"04e154c1-39df-4c97-b8f7-32591d612abd\") " pod="openstack/glance-default-internal-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.765426 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04e154c1-39df-4c97-b8f7-32591d612abd-logs\") pod \"glance-default-internal-api-0\" (UID: \"04e154c1-39df-4c97-b8f7-32591d612abd\") " pod="openstack/glance-default-internal-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.765464 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rpgb\" (UniqueName: \"kubernetes.io/projected/04e154c1-39df-4c97-b8f7-32591d612abd-kube-api-access-8rpgb\") pod \"glance-default-internal-api-0\" (UID: \"04e154c1-39df-4c97-b8f7-32591d612abd\") " pod="openstack/glance-default-internal-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.765490 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04e154c1-39df-4c97-b8f7-32591d612abd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"04e154c1-39df-4c97-b8f7-32591d612abd\") " pod="openstack/glance-default-internal-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.765517 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/04e154c1-39df-4c97-b8f7-32591d612abd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"04e154c1-39df-4c97-b8f7-32591d612abd\") " pod="openstack/glance-default-internal-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.765540 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04e154c1-39df-4c97-b8f7-32591d612abd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"04e154c1-39df-4c97-b8f7-32591d612abd\") " pod="openstack/glance-default-internal-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.765554 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04e154c1-39df-4c97-b8f7-32591d612abd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"04e154c1-39df-4c97-b8f7-32591d612abd\") " pod="openstack/glance-default-internal-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.766311 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/04e154c1-39df-4c97-b8f7-32591d612abd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"04e154c1-39df-4c97-b8f7-32591d612abd\") " pod="openstack/glance-default-internal-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.768399 4606 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"04e154c1-39df-4c97-b8f7-32591d612abd\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.768579 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04e154c1-39df-4c97-b8f7-32591d612abd-logs\") pod \"glance-default-internal-api-0\" (UID: \"04e154c1-39df-4c97-b8f7-32591d612abd\") " pod="openstack/glance-default-internal-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.791175 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04e154c1-39df-4c97-b8f7-32591d612abd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"04e154c1-39df-4c97-b8f7-32591d612abd\") " pod="openstack/glance-default-internal-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.817466 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04e154c1-39df-4c97-b8f7-32591d612abd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"04e154c1-39df-4c97-b8f7-32591d612abd\") " pod="openstack/glance-default-internal-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.821428 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rpgb\" (UniqueName: \"kubernetes.io/projected/04e154c1-39df-4c97-b8f7-32591d612abd-kube-api-access-8rpgb\") pod \"glance-default-internal-api-0\" (UID: \"04e154c1-39df-4c97-b8f7-32591d612abd\") " pod="openstack/glance-default-internal-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.822288 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04e154c1-39df-4c97-b8f7-32591d612abd-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"04e154c1-39df-4c97-b8f7-32591d612abd\") " pod="openstack/glance-default-internal-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.824423 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04e154c1-39df-4c97-b8f7-32591d612abd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"04e154c1-39df-4c97-b8f7-32591d612abd\") " pod="openstack/glance-default-internal-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.865286 4606 scope.go:117] "RemoveContainer" containerID="81202c239191192d6e7505f82860b63b8c62da4215d76afd1856e1514674ec88"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.896767 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.934988 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"04e154c1-39df-4c97-b8f7-32591d612abd\") " pod="openstack/glance-default-internal-api-0"
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.996204 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-71a8-account-create-update-k7h4q" event={"ID":"0722c265-6ff0-4f82-93c9-f803a72040f5","Type":"ContainerStarted","Data":"1c4bd2e85d0150c96766c21a16021460bf3a3a78cb2b27df9d9597c0008ff095"}
Jan 23 17:11:16 crc kubenswrapper[4606]: I0123 17:11:16.996249 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-71a8-account-create-update-k7h4q" event={"ID":"0722c265-6ff0-4f82-93c9-f803a72040f5","Type":"ContainerStarted","Data":"ce250ac009e576f9f3554b2edce835a8ec29564010dfd1b4902e0f69efe2d900"}
Jan 23 17:11:17 crc kubenswrapper[4606]: I0123 17:11:17.023960 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-71a8-account-create-update-k7h4q" podStartSLOduration=11.023934561 podStartE2EDuration="11.023934561s" podCreationTimestamp="2026-01-23 17:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:11:17.016892896 +0000 UTC m=+1111.017085302" watchObservedRunningTime="2026-01-23 17:11:17.023934561 +0000 UTC m=+1111.024126967"
Jan 23 17:11:17 crc kubenswrapper[4606]: I0123 17:11:17.047365 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6nkhr" event={"ID":"5c2b6537-c2b4-45ed-86ad-6c1018119db1","Type":"ContainerStarted","Data":"f47514ed056a009b1edbcc3a45c2bebcf8f0b7620344880260dd24a41dade3fe"}
Jan 23 17:11:17 crc kubenswrapper[4606]: I0123 17:11:17.049556 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58ce1e0c-a44a-499b-8fca-aab76e6c56f7","Type":"ContainerStarted","Data":"26c0742b3fc9a4e1ced7dddc94221e7360af460154232e488d06bd8a4ac33cdf"}
Jan 23 17:11:17 crc kubenswrapper[4606]: I0123 17:11:17.059442 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cx5vj" event={"ID":"5656c482-3ac8-450a-b302-9e8329674ca5","Type":"ContainerStarted","Data":"cc12b638edf5aeaae5f4db39d078f3ed3dd29c8d62d9ff067327458d421fbdd1"}
Jan 23 17:11:17 crc kubenswrapper[4606]: I0123 17:11:17.091174 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-df02-account-create-update-qqvbw" event={"ID":"57a24014-72c4-4240-92ac-f0a2cc2956e2","Type":"ContainerStarted","Data":"7165f4684753a99fed01b05edda90067989d4833b1cfa8538a9b79a3210e2c61"}
Jan 23 17:11:17 crc kubenswrapper[4606]: I0123 17:11:17.091217 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-df02-account-create-update-qqvbw" event={"ID":"57a24014-72c4-4240-92ac-f0a2cc2956e2","Type":"ContainerStarted","Data":"07c7911d137a90caaecb65d519fbdfe507fbe6bc73e105474a5e693fd6465b5a"}
Jan 23 17:11:17 crc kubenswrapper[4606]: I0123 17:11:17.114994 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-cx5vj" podStartSLOduration=11.114976477 podStartE2EDuration="11.114976477s" podCreationTimestamp="2026-01-23 17:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:11:17.102902529 +0000 UTC m=+1111.103094935" watchObservedRunningTime="2026-01-23 17:11:17.114976477 +0000 UTC m=+1111.115168883"
Jan 23 17:11:17 crc kubenswrapper[4606]: I0123 17:11:17.117870 4606 kubelet.go:2453] "SyncLoop (PLEG):
event for pod" pod="openstack/cinder-api-0" event={"ID":"b26a0ac0-2f7e-48a0-aebd-20f7388d0634","Type":"ContainerStarted","Data":"2b16b9c7dca2bb8d066245dc9fb7f4a8991ba9f6907ff06f3fcebec61af721ff"} Jan 23 17:11:17 crc kubenswrapper[4606]: I0123 17:11:17.135300 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-df02-account-create-update-qqvbw" podStartSLOduration=11.135285278 podStartE2EDuration="11.135285278s" podCreationTimestamp="2026-01-23 17:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:11:17.134254332 +0000 UTC m=+1111.134446758" watchObservedRunningTime="2026-01-23 17:11:17.135285278 +0000 UTC m=+1111.135477684" Jan 23 17:11:17 crc kubenswrapper[4606]: I0123 17:11:17.139883 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-c6g8c" event={"ID":"b8c34151-9d7b-4d74-9dcd-ef62daa52555","Type":"ContainerStarted","Data":"c7bc736d751c3608c0b601c82da533c2f97175a4950c9810b106ee0aaf482237"} Jan 23 17:11:17 crc kubenswrapper[4606]: I0123 17:11:17.199170 4606 generic.go:334] "Generic (PLEG): container finished" podID="dd8d7c07-f254-476e-9f2e-7e1afdeddcad" containerID="d162427befa56a7e9ebb3908f74f50b83f205a6781a3939b3c2bb587159d710f" exitCode=0 Jan 23 17:11:17 crc kubenswrapper[4606]: I0123 17:11:17.199258 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-5b37-account-create-update-pr4c4" event={"ID":"dd8d7c07-f254-476e-9f2e-7e1afdeddcad","Type":"ContainerDied","Data":"d162427befa56a7e9ebb3908f74f50b83f205a6781a3939b3c2bb587159d710f"} Jan 23 17:11:17 crc kubenswrapper[4606]: I0123 17:11:17.232087 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 17:11:17 crc kubenswrapper[4606]: I0123 17:11:17.248328 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-c6g8c" podStartSLOduration=11.248309066 podStartE2EDuration="11.248309066s" podCreationTimestamp="2026-01-23 17:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:11:17.158809448 +0000 UTC m=+1111.159001854" watchObservedRunningTime="2026-01-23 17:11:17.248309066 +0000 UTC m=+1111.248501472" Jan 23 17:11:17 crc kubenswrapper[4606]: I0123 17:11:17.561397 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 17:11:18 crc kubenswrapper[4606]: I0123 17:11:18.112190 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 17:11:18 crc kubenswrapper[4606]: W0123 17:11:18.130918 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04e154c1_39df_4c97_b8f7_32591d612abd.slice/crio-28e4496d30407897c5e29034bcca9f2a2dfccb0d12c3bb9dc2b4570df17e4bb1 WatchSource:0}: Error finding container 28e4496d30407897c5e29034bcca9f2a2dfccb0d12c3bb9dc2b4570df17e4bb1: Status 404 returned error can't find the container with id 28e4496d30407897c5e29034bcca9f2a2dfccb0d12c3bb9dc2b4570df17e4bb1 Jan 23 17:11:18 crc kubenswrapper[4606]: I0123 17:11:18.266260 4606 generic.go:334] "Generic (PLEG): container finished" podID="57a24014-72c4-4240-92ac-f0a2cc2956e2" containerID="7165f4684753a99fed01b05edda90067989d4833b1cfa8538a9b79a3210e2c61" exitCode=0 Jan 23 17:11:18 crc kubenswrapper[4606]: I0123 17:11:18.267287 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-df02-account-create-update-qqvbw" event={"ID":"57a24014-72c4-4240-92ac-f0a2cc2956e2","Type":"ContainerDied","Data":"7165f4684753a99fed01b05edda90067989d4833b1cfa8538a9b79a3210e2c61"} Jan 23 17:11:18 crc kubenswrapper[4606]: I0123 17:11:18.305072 4606 generic.go:334] "Generic (PLEG): container finished" podID="0722c265-6ff0-4f82-93c9-f803a72040f5" containerID="1c4bd2e85d0150c96766c21a16021460bf3a3a78cb2b27df9d9597c0008ff095" exitCode=0 Jan 23 17:11:18 crc kubenswrapper[4606]: I0123 17:11:18.305233 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-71a8-account-create-update-k7h4q" event={"ID":"0722c265-6ff0-4f82-93c9-f803a72040f5","Type":"ContainerDied","Data":"1c4bd2e85d0150c96766c21a16021460bf3a3a78cb2b27df9d9597c0008ff095"} Jan 23 17:11:18 crc kubenswrapper[4606]: I0123 17:11:18.362071 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f8318015-9694-4f24-a509-141912db7eb9","Type":"ContainerStarted","Data":"d219a90b38166b7cf9e1d49c2e80bee5663e99659f6d1af45524cfb7afe5da95"} Jan 23 17:11:18 crc kubenswrapper[4606]: I0123 17:11:18.388527 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e867f98-5330-473c-aa2f-4c5863e70688" path="/var/lib/kubelet/pods/7e867f98-5330-473c-aa2f-4c5863e70688/volumes" Jan 23 17:11:18 crc kubenswrapper[4606]: I0123 17:11:18.397287 4606 generic.go:334] "Generic (PLEG): container finished" podID="5656c482-3ac8-450a-b302-9e8329674ca5" containerID="af1076c67c05a1a67549c3549f507c0d50f2b54d14beded7100cd22fe468c7b1" exitCode=0 Jan 23 17:11:18 crc kubenswrapper[4606]: I0123 
17:11:18.397351 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cx5vj" event={"ID":"5656c482-3ac8-450a-b302-9e8329674ca5","Type":"ContainerDied","Data":"af1076c67c05a1a67549c3549f507c0d50f2b54d14beded7100cd22fe468c7b1"} Jan 23 17:11:18 crc kubenswrapper[4606]: I0123 17:11:18.426854 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58ce1e0c-a44a-499b-8fca-aab76e6c56f7","Type":"ContainerStarted","Data":"653eb5c08e35f1bf83b13e0c163c51753c90561dd42ee368184f74452df16a50"} Jan 23 17:11:18 crc kubenswrapper[4606]: I0123 17:11:18.448893 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b26a0ac0-2f7e-48a0-aebd-20f7388d0634","Type":"ContainerStarted","Data":"d1c8f7a1dd302b58cd3d4bdfdda57366346d92a93c5fbe3adb8cf0525f89a5ff"} Jan 23 17:11:18 crc kubenswrapper[4606]: I0123 17:11:18.449164 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5b9775877b-mcc82" podUID="1dd13584-a1b9-4891-85a7-01afd519f746" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Jan 23 17:11:18 crc kubenswrapper[4606]: I0123 17:11:18.452185 4606 generic.go:334] "Generic (PLEG): container finished" podID="b8c34151-9d7b-4d74-9dcd-ef62daa52555" containerID="094209db94003e883032f1317862a9da80988def9d4251696d2c3a96ca43ecb9" exitCode=0 Jan 23 17:11:18 crc kubenswrapper[4606]: I0123 17:11:18.452287 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-c6g8c" event={"ID":"b8c34151-9d7b-4d74-9dcd-ef62daa52555","Type":"ContainerDied","Data":"094209db94003e883032f1317862a9da80988def9d4251696d2c3a96ca43ecb9"} Jan 23 17:11:18 crc kubenswrapper[4606]: I0123 17:11:18.456875 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"04e154c1-39df-4c97-b8f7-32591d612abd","Type":"ContainerStarted","Data":"28e4496d30407897c5e29034bcca9f2a2dfccb0d12c3bb9dc2b4570df17e4bb1"} Jan 23 17:11:18 crc kubenswrapper[4606]: I0123 17:11:18.470826 4606 generic.go:334] "Generic (PLEG): container finished" podID="5c2b6537-c2b4-45ed-86ad-6c1018119db1" containerID="c009532333b43de0b4b6ddfc01b84a144096b5a31804ae6f35b4eee5a16344ee" exitCode=0 Jan 23 17:11:18 crc kubenswrapper[4606]: I0123 17:11:18.471080 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6nkhr" event={"ID":"5c2b6537-c2b4-45ed-86ad-6c1018119db1","Type":"ContainerDied","Data":"c009532333b43de0b4b6ddfc01b84a144096b5a31804ae6f35b4eee5a16344ee"} Jan 23 17:11:18 crc kubenswrapper[4606]: I0123 17:11:18.616896 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-57977896f6-qrk2s" podUID="59fc19de-8c69-43e4-bf0d-1cb8ea9a9120" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 23 17:11:18 crc kubenswrapper[4606]: I0123 17:11:18.972322 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-5b37-account-create-update-pr4c4" Jan 23 17:11:19 crc kubenswrapper[4606]: I0123 17:11:19.117466 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ks8q\" (UniqueName: \"kubernetes.io/projected/dd8d7c07-f254-476e-9f2e-7e1afdeddcad-kube-api-access-2ks8q\") pod \"dd8d7c07-f254-476e-9f2e-7e1afdeddcad\" (UID: \"dd8d7c07-f254-476e-9f2e-7e1afdeddcad\") " Jan 23 17:11:19 crc kubenswrapper[4606]: I0123 17:11:19.117511 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd8d7c07-f254-476e-9f2e-7e1afdeddcad-operator-scripts\") pod \"dd8d7c07-f254-476e-9f2e-7e1afdeddcad\" (UID: \"dd8d7c07-f254-476e-9f2e-7e1afdeddcad\") " Jan 23 17:11:19 crc kubenswrapper[4606]: I0123 17:11:19.118815 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd8d7c07-f254-476e-9f2e-7e1afdeddcad-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dd8d7c07-f254-476e-9f2e-7e1afdeddcad" (UID: "dd8d7c07-f254-476e-9f2e-7e1afdeddcad"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:11:19 crc kubenswrapper[4606]: I0123 17:11:19.126879 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd8d7c07-f254-476e-9f2e-7e1afdeddcad-kube-api-access-2ks8q" (OuterVolumeSpecName: "kube-api-access-2ks8q") pod "dd8d7c07-f254-476e-9f2e-7e1afdeddcad" (UID: "dd8d7c07-f254-476e-9f2e-7e1afdeddcad"). InnerVolumeSpecName "kube-api-access-2ks8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:11:19 crc kubenswrapper[4606]: I0123 17:11:19.231832 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ks8q\" (UniqueName: \"kubernetes.io/projected/dd8d7c07-f254-476e-9f2e-7e1afdeddcad-kube-api-access-2ks8q\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:19 crc kubenswrapper[4606]: I0123 17:11:19.231868 4606 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd8d7c07-f254-476e-9f2e-7e1afdeddcad-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:19 crc kubenswrapper[4606]: I0123 17:11:19.492221 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58ce1e0c-a44a-499b-8fca-aab76e6c56f7","Type":"ContainerStarted","Data":"5ae921688cd8381bd15a4a29aef3705edfc5045e097204887da43f1b06f96db5"} Jan 23 17:11:19 crc kubenswrapper[4606]: I0123 17:11:19.498237 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f8318015-9694-4f24-a509-141912db7eb9","Type":"ContainerStarted","Data":"83a68979b0c1975910e364728aaddd245e31d811dc3b2d56d0f2dd96d51afeb1"} Jan 23 17:11:19 crc kubenswrapper[4606]: I0123 17:11:19.503266 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-5b37-account-create-update-pr4c4" event={"ID":"dd8d7c07-f254-476e-9f2e-7e1afdeddcad","Type":"ContainerDied","Data":"c8f7509f9f5e55c32ef4f4e29dfd005539ecf872289fb28d2df08b74ed9a57f2"} Jan 23 17:11:19 crc kubenswrapper[4606]: I0123 17:11:19.503312 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8f7509f9f5e55c32ef4f4e29dfd005539ecf872289fb28d2df08b74ed9a57f2" Jan 23 17:11:19 crc kubenswrapper[4606]: I0123 17:11:19.503469 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-5b37-account-create-update-pr4c4" Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.256808 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6nkhr" Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.365773 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbthv\" (UniqueName: \"kubernetes.io/projected/5c2b6537-c2b4-45ed-86ad-6c1018119db1-kube-api-access-vbthv\") pod \"5c2b6537-c2b4-45ed-86ad-6c1018119db1\" (UID: \"5c2b6537-c2b4-45ed-86ad-6c1018119db1\") " Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.365864 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c2b6537-c2b4-45ed-86ad-6c1018119db1-operator-scripts\") pod \"5c2b6537-c2b4-45ed-86ad-6c1018119db1\" (UID: \"5c2b6537-c2b4-45ed-86ad-6c1018119db1\") " Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.366881 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c2b6537-c2b4-45ed-86ad-6c1018119db1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5c2b6537-c2b4-45ed-86ad-6c1018119db1" (UID: "5c2b6537-c2b4-45ed-86ad-6c1018119db1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.391187 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c2b6537-c2b4-45ed-86ad-6c1018119db1-kube-api-access-vbthv" (OuterVolumeSpecName: "kube-api-access-vbthv") pod "5c2b6537-c2b4-45ed-86ad-6c1018119db1" (UID: "5c2b6537-c2b4-45ed-86ad-6c1018119db1"). InnerVolumeSpecName "kube-api-access-vbthv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.467756 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbthv\" (UniqueName: \"kubernetes.io/projected/5c2b6537-c2b4-45ed-86ad-6c1018119db1-kube-api-access-vbthv\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.467790 4606 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c2b6537-c2b4-45ed-86ad-6c1018119db1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.526451 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-6nkhr" Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.600896 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=20.600878201 podStartE2EDuration="20.600878201s" podCreationTimestamp="2026-01-23 17:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:11:20.597272563 +0000 UTC m=+1114.597464969" watchObservedRunningTime="2026-01-23 17:11:20.600878201 +0000 UTC m=+1114.601070597" Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.606368 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.613040 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"04e154c1-39df-4c97-b8f7-32591d612abd","Type":"ContainerStarted","Data":"6f02b4cf753692c5f1aee3632ecc9d08c1cb6b3b712954fa1bbf1b5c99e46174"} Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.613070 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6nkhr" event={"ID":"5c2b6537-c2b4-45ed-86ad-6c1018119db1","Type":"ContainerDied","Data":"f47514ed056a009b1edbcc3a45c2bebcf8f0b7620344880260dd24a41dade3fe"} Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.613088 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f47514ed056a009b1edbcc3a45c2bebcf8f0b7620344880260dd24a41dade3fe" Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.613101 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cx5vj" event={"ID":"5656c482-3ac8-450a-b302-9e8329674ca5","Type":"ContainerDied","Data":"cc12b638edf5aeaae5f4db39d078f3ed3dd29c8d62d9ff067327458d421fbdd1"} Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.613114 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc12b638edf5aeaae5f4db39d078f3ed3dd29c8d62d9ff067327458d421fbdd1" Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.613123 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-df02-account-create-update-qqvbw" event={"ID":"57a24014-72c4-4240-92ac-f0a2cc2956e2","Type":"ContainerDied","Data":"07c7911d137a90caaecb65d519fbdfe507fbe6bc73e105474a5e693fd6465b5a"} Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.613135 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07c7911d137a90caaecb65d519fbdfe507fbe6bc73e105474a5e693fd6465b5a" Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.613144 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-71a8-account-create-update-k7h4q" event={"ID":"0722c265-6ff0-4f82-93c9-f803a72040f5","Type":"ContainerDied","Data":"ce250ac009e576f9f3554b2edce835a8ec29564010dfd1b4902e0f69efe2d900"} Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.613155 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce250ac009e576f9f3554b2edce835a8ec29564010dfd1b4902e0f69efe2d900" Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.613174 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b26a0ac0-2f7e-48a0-aebd-20f7388d0634","Type":"ContainerStarted","Data":"894907cdf836e5524dfe3346b263f936aa9f2a8a084b4ff7ff679d2bea3537d5"} Jan 23 
17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.613190 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-c6g8c" event={"ID":"b8c34151-9d7b-4d74-9dcd-ef62daa52555","Type":"ContainerDied","Data":"c7bc736d751c3608c0b601c82da533c2f97175a4950c9810b106ee0aaf482237"} Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.613202 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7bc736d751c3608c0b601c82da533c2f97175a4950c9810b106ee0aaf482237" Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.622477 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-c6g8c" Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.683604 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-cx5vj" Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.683958 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-71a8-account-create-update-k7h4q" Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.684136 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-df02-account-create-update-qqvbw" Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.775194 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8c34151-9d7b-4d74-9dcd-ef62daa52555-operator-scripts\") pod \"b8c34151-9d7b-4d74-9dcd-ef62daa52555\" (UID: \"b8c34151-9d7b-4d74-9dcd-ef62daa52555\") " Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.775229 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmhmb\" (UniqueName: \"kubernetes.io/projected/0722c265-6ff0-4f82-93c9-f803a72040f5-kube-api-access-hmhmb\") pod \"0722c265-6ff0-4f82-93c9-f803a72040f5\" (UID: \"0722c265-6ff0-4f82-93c9-f803a72040f5\") " Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.775296 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5j6x\" (UniqueName: \"kubernetes.io/projected/57a24014-72c4-4240-92ac-f0a2cc2956e2-kube-api-access-c5j6x\") pod \"57a24014-72c4-4240-92ac-f0a2cc2956e2\" (UID: \"57a24014-72c4-4240-92ac-f0a2cc2956e2\") " Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.775352 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57a24014-72c4-4240-92ac-f0a2cc2956e2-operator-scripts\") pod \"57a24014-72c4-4240-92ac-f0a2cc2956e2\" (UID: \"57a24014-72c4-4240-92ac-f0a2cc2956e2\") " Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.775385 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66g5x\" (UniqueName: \"kubernetes.io/projected/5656c482-3ac8-450a-b302-9e8329674ca5-kube-api-access-66g5x\") pod \"5656c482-3ac8-450a-b302-9e8329674ca5\" (UID: \"5656c482-3ac8-450a-b302-9e8329674ca5\") " Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.775406 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0722c265-6ff0-4f82-93c9-f803a72040f5-operator-scripts\") pod \"0722c265-6ff0-4f82-93c9-f803a72040f5\" (UID: \"0722c265-6ff0-4f82-93c9-f803a72040f5\") " Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.775456 4606 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nh22q\" (UniqueName: \"kubernetes.io/projected/b8c34151-9d7b-4d74-9dcd-ef62daa52555-kube-api-access-nh22q\") pod \"b8c34151-9d7b-4d74-9dcd-ef62daa52555\" (UID: \"b8c34151-9d7b-4d74-9dcd-ef62daa52555\") " Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.775515 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5656c482-3ac8-450a-b302-9e8329674ca5-operator-scripts\") pod \"5656c482-3ac8-450a-b302-9e8329674ca5\" (UID: \"5656c482-3ac8-450a-b302-9e8329674ca5\") " Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.776058 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5656c482-3ac8-450a-b302-9e8329674ca5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5656c482-3ac8-450a-b302-9e8329674ca5" (UID: "5656c482-3ac8-450a-b302-9e8329674ca5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.777157 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8c34151-9d7b-4d74-9dcd-ef62daa52555-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b8c34151-9d7b-4d74-9dcd-ef62daa52555" (UID: "b8c34151-9d7b-4d74-9dcd-ef62daa52555"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.777186 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57a24014-72c4-4240-92ac-f0a2cc2956e2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "57a24014-72c4-4240-92ac-f0a2cc2956e2" (UID: "57a24014-72c4-4240-92ac-f0a2cc2956e2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.777975 4606 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5656c482-3ac8-450a-b302-9e8329674ca5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.778023 4606 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8c34151-9d7b-4d74-9dcd-ef62daa52555-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.778036 4606 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57a24014-72c4-4240-92ac-f0a2cc2956e2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.781141 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0722c265-6ff0-4f82-93c9-f803a72040f5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0722c265-6ff0-4f82-93c9-f803a72040f5" (UID: "0722c265-6ff0-4f82-93c9-f803a72040f5"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.782889 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0722c265-6ff0-4f82-93c9-f803a72040f5-kube-api-access-hmhmb" (OuterVolumeSpecName: "kube-api-access-hmhmb") pod "0722c265-6ff0-4f82-93c9-f803a72040f5" (UID: "0722c265-6ff0-4f82-93c9-f803a72040f5"). InnerVolumeSpecName "kube-api-access-hmhmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.786362 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a24014-72c4-4240-92ac-f0a2cc2956e2-kube-api-access-c5j6x" (OuterVolumeSpecName: "kube-api-access-c5j6x") pod "57a24014-72c4-4240-92ac-f0a2cc2956e2" (UID: "57a24014-72c4-4240-92ac-f0a2cc2956e2"). InnerVolumeSpecName "kube-api-access-c5j6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.792920 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5656c482-3ac8-450a-b302-9e8329674ca5-kube-api-access-66g5x" (OuterVolumeSpecName: "kube-api-access-66g5x") pod "5656c482-3ac8-450a-b302-9e8329674ca5" (UID: "5656c482-3ac8-450a-b302-9e8329674ca5"). InnerVolumeSpecName "kube-api-access-66g5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.797673 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8c34151-9d7b-4d74-9dcd-ef62daa52555-kube-api-access-nh22q" (OuterVolumeSpecName: "kube-api-access-nh22q") pod "b8c34151-9d7b-4d74-9dcd-ef62daa52555" (UID: "b8c34151-9d7b-4d74-9dcd-ef62daa52555"). InnerVolumeSpecName "kube-api-access-nh22q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.879554 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmhmb\" (UniqueName: \"kubernetes.io/projected/0722c265-6ff0-4f82-93c9-f803a72040f5-kube-api-access-hmhmb\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.879605 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5j6x\" (UniqueName: \"kubernetes.io/projected/57a24014-72c4-4240-92ac-f0a2cc2956e2-kube-api-access-c5j6x\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.879614 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66g5x\" (UniqueName: \"kubernetes.io/projected/5656c482-3ac8-450a-b302-9e8329674ca5-kube-api-access-66g5x\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.879626 4606 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0722c265-6ff0-4f82-93c9-f803a72040f5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:20 crc kubenswrapper[4606]: I0123 17:11:20.879642 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nh22q\" (UniqueName: \"kubernetes.io/projected/b8c34151-9d7b-4d74-9dcd-ef62daa52555-kube-api-access-nh22q\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:21 crc kubenswrapper[4606]: I0123 17:11:21.590856 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f8318015-9694-4f24-a509-141912db7eb9","Type":"ContainerStarted","Data":"86c23fd5605cb1f3812315ee247f7399e925361dd0a3e8c4f693b2ca0565316e"} Jan 23 17:11:21 crc kubenswrapper[4606]: I0123 17:11:21.593064 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58ce1e0c-a44a-499b-8fca-aab76e6c56f7","Type":"ContainerStarted","Data":"bc43b49af6e5b9b64fb3f3c9ef37444ef489951e640f2421be17bf40fdff862b"} Jan 23 17:11:21 crc kubenswrapper[4606]: I0123 17:11:21.595685 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-c6g8c" Jan 23 17:11:21 crc kubenswrapper[4606]: I0123 17:11:21.596072 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-cx5vj" Jan 23 17:11:21 crc kubenswrapper[4606]: I0123 17:11:21.598364 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"04e154c1-39df-4c97-b8f7-32591d612abd","Type":"ContainerStarted","Data":"7b9a3d7d264d17ce055d46d6ccd88ca51bc5c83cb5a2bc720c94d0640b1eb302"} Jan 23 17:11:21 crc kubenswrapper[4606]: I0123 17:11:21.598538 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-71a8-account-create-update-k7h4q" Jan 23 17:11:21 crc kubenswrapper[4606]: I0123 17:11:21.599503 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-df02-account-create-update-qqvbw" Jan 23 17:11:21 crc kubenswrapper[4606]: I0123 17:11:21.638409 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.638388859 podStartE2EDuration="6.638388859s" podCreationTimestamp="2026-01-23 17:11:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:11:21.630369992 +0000 UTC m=+1115.630562418" watchObservedRunningTime="2026-01-23 17:11:21.638388859 +0000 UTC m=+1115.638581265" Jan 23 17:11:21 crc kubenswrapper[4606]: I0123 17:11:21.719856 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.719839058 podStartE2EDuration="5.719839058s" podCreationTimestamp="2026-01-23 17:11:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:11:21.689795237 +0000 UTC m=+1115.689987653" watchObservedRunningTime="2026-01-23 17:11:21.719839058 +0000 UTC m=+1115.720031464" Jan 23 17:11:23 crc kubenswrapper[4606]: I0123 17:11:23.616133 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58ce1e0c-a44a-499b-8fca-aab76e6c56f7","Type":"ContainerStarted","Data":"580bca5e5247addab1720789633278b76b04120c1df2b7a918e9eca71c0c120a"} Jan 23 17:11:23 crc kubenswrapper[4606]: I0123 17:11:23.618708 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 17:11:23 crc kubenswrapper[4606]: I0123 17:11:23.649895 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.109715269 podStartE2EDuration="8.649878957s" podCreationTimestamp="2026-01-23 17:11:15 +0000 UTC" firstStartedPulling="2026-01-23 17:11:16.810877003 +0000 UTC m=+1110.811069409" lastFinishedPulling="2026-01-23 17:11:22.351040681 +0000 UTC m=+1116.351233097" observedRunningTime="2026-01-23 17:11:23.642977897 +0000 UTC m=+1117.643170303" watchObservedRunningTime="2026-01-23 17:11:23.649878957 +0000 UTC m=+1117.650071363" Jan 23 17:11:25 crc kubenswrapper[4606]: I0123 17:11:25.240195 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:11:26 crc kubenswrapper[4606]: I0123 17:11:26.642866 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="58ce1e0c-a44a-499b-8fca-aab76e6c56f7" containerName="ceilometer-central-agent" containerID="cri-o://653eb5c08e35f1bf83b13e0c163c51753c90561dd42ee368184f74452df16a50" gracePeriod=30 Jan 23 17:11:26 crc kubenswrapper[4606]: I0123 17:11:26.642935 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="58ce1e0c-a44a-499b-8fca-aab76e6c56f7" containerName="sg-core" containerID="cri-o://bc43b49af6e5b9b64fb3f3c9ef37444ef489951e640f2421be17bf40fdff862b" gracePeriod=30 Jan 23 17:11:26 crc kubenswrapper[4606]: I0123 17:11:26.642957 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="58ce1e0c-a44a-499b-8fca-aab76e6c56f7" containerName="ceilometer-notification-agent" containerID="cri-o://5ae921688cd8381bd15a4a29aef3705edfc5045e097204887da43f1b06f96db5" gracePeriod=30 Jan 23 17:11:26 crc kubenswrapper[4606]: I0123 
17:11:26.642972 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="58ce1e0c-a44a-499b-8fca-aab76e6c56f7" containerName="proxy-httpd" containerID="cri-o://580bca5e5247addab1720789633278b76b04120c1df2b7a918e9eca71c0c120a" gracePeriod=30 Jan 23 17:11:26 crc kubenswrapper[4606]: I0123 17:11:26.664347 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zzltv"] Jan 23 17:11:26 crc kubenswrapper[4606]: E0123 17:11:26.664742 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5656c482-3ac8-450a-b302-9e8329674ca5" containerName="mariadb-database-create" Jan 23 17:11:26 crc kubenswrapper[4606]: I0123 17:11:26.664759 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="5656c482-3ac8-450a-b302-9e8329674ca5" containerName="mariadb-database-create" Jan 23 17:11:26 crc kubenswrapper[4606]: E0123 17:11:26.664768 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8c34151-9d7b-4d74-9dcd-ef62daa52555" containerName="mariadb-database-create" Jan 23 17:11:26 crc kubenswrapper[4606]: I0123 17:11:26.664775 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8c34151-9d7b-4d74-9dcd-ef62daa52555" containerName="mariadb-database-create" Jan 23 17:11:26 crc kubenswrapper[4606]: E0123 17:11:26.664792 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0722c265-6ff0-4f82-93c9-f803a72040f5" containerName="mariadb-account-create-update" Jan 23 17:11:26 crc kubenswrapper[4606]: I0123 17:11:26.664798 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="0722c265-6ff0-4f82-93c9-f803a72040f5" containerName="mariadb-account-create-update" Jan 23 17:11:26 crc kubenswrapper[4606]: E0123 17:11:26.664809 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd8d7c07-f254-476e-9f2e-7e1afdeddcad" containerName="mariadb-account-create-update" Jan 23 17:11:26 crc kubenswrapper[4606]: I0123 17:11:26.664815 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd8d7c07-f254-476e-9f2e-7e1afdeddcad" containerName="mariadb-account-create-update" Jan 23 17:11:26 crc kubenswrapper[4606]: E0123 17:11:26.664833 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c2b6537-c2b4-45ed-86ad-6c1018119db1" containerName="mariadb-database-create" Jan 23 17:11:26 crc kubenswrapper[4606]: I0123 17:11:26.664840 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c2b6537-c2b4-45ed-86ad-6c1018119db1" containerName="mariadb-database-create" Jan 23 17:11:26 crc kubenswrapper[4606]: E0123 17:11:26.664852 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57a24014-72c4-4240-92ac-f0a2cc2956e2" containerName="mariadb-account-create-update" Jan 23 17:11:26 crc kubenswrapper[4606]: I0123 17:11:26.664859 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="57a24014-72c4-4240-92ac-f0a2cc2956e2" containerName="mariadb-account-create-update" Jan 23 17:11:26 crc kubenswrapper[4606]: I0123 17:11:26.665017 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="57a24014-72c4-4240-92ac-f0a2cc2956e2" containerName="mariadb-account-create-update" Jan 23 17:11:26 crc kubenswrapper[4606]: I0123 17:11:26.665030 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="0722c265-6ff0-4f82-93c9-f803a72040f5" containerName="mariadb-account-create-update" Jan 23 17:11:26 crc kubenswrapper[4606]: I0123 17:11:26.665040 4606 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5656c482-3ac8-450a-b302-9e8329674ca5" containerName="mariadb-database-create" Jan 23 17:11:26 crc kubenswrapper[4606]: I0123 17:11:26.665052 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8c34151-9d7b-4d74-9dcd-ef62daa52555" containerName="mariadb-database-create" Jan 23 17:11:26 crc kubenswrapper[4606]: I0123 17:11:26.665062 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c2b6537-c2b4-45ed-86ad-6c1018119db1" containerName="mariadb-database-create" Jan 23 17:11:26 crc kubenswrapper[4606]: I0123 17:11:26.665077 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd8d7c07-f254-476e-9f2e-7e1afdeddcad" containerName="mariadb-account-create-update" Jan 23 17:11:26 crc kubenswrapper[4606]: I0123 17:11:26.666379 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zzltv" Jan 23 17:11:26 crc kubenswrapper[4606]: I0123 17:11:26.672380 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 23 17:11:26 crc kubenswrapper[4606]: I0123 17:11:26.672424 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 23 17:11:26 crc kubenswrapper[4606]: I0123 17:11:26.710962 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 23 17:11:26 crc kubenswrapper[4606]: I0123 17:11:26.711284 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-v5b5x" Jan 23 17:11:26 crc kubenswrapper[4606]: I0123 17:11:26.711440 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 23 17:11:26 crc kubenswrapper[4606]: I0123 17:11:26.753893 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 23 17:11:26 crc kubenswrapper[4606]: I0123 17:11:26.757873 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 23 17:11:26 crc kubenswrapper[4606]: I0123 17:11:26.786355 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e-scripts\") pod \"nova-cell0-conductor-db-sync-zzltv\" (UID: \"3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e\") " pod="openstack/nova-cell0-conductor-db-sync-zzltv" Jan 23 17:11:26 crc kubenswrapper[4606]: I0123 17:11:26.786472 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gbqh\" (UniqueName: \"kubernetes.io/projected/3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e-kube-api-access-8gbqh\") pod \"nova-cell0-conductor-db-sync-zzltv\" (UID: \"3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e\") " pod="openstack/nova-cell0-conductor-db-sync-zzltv" Jan 23 17:11:26 crc kubenswrapper[4606]: I0123 17:11:26.786546 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e-config-data\") pod \"nova-cell0-conductor-db-sync-zzltv\" (UID: \"3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e\") " pod="openstack/nova-cell0-conductor-db-sync-zzltv" Jan 23 17:11:26 crc kubenswrapper[4606]: I0123 17:11:26.786614 4606 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zzltv\" (UID: \"3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e\") " pod="openstack/nova-cell0-conductor-db-sync-zzltv" Jan 23 17:11:26 crc kubenswrapper[4606]: I0123 17:11:26.807565 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zzltv"] Jan 23 17:11:26 crc kubenswrapper[4606]: I0123 17:11:26.889749 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e-scripts\") pod \"nova-cell0-conductor-db-sync-zzltv\" (UID: \"3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e\") " pod="openstack/nova-cell0-conductor-db-sync-zzltv" Jan 23 17:11:26 crc kubenswrapper[4606]: I0123 17:11:26.889829 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gbqh\" (UniqueName: \"kubernetes.io/projected/3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e-kube-api-access-8gbqh\") pod \"nova-cell0-conductor-db-sync-zzltv\" (UID: \"3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e\") " pod="openstack/nova-cell0-conductor-db-sync-zzltv" Jan 23 17:11:26 crc kubenswrapper[4606]: I0123 17:11:26.889865 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e-config-data\") pod \"nova-cell0-conductor-db-sync-zzltv\" (UID: \"3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e\") " pod="openstack/nova-cell0-conductor-db-sync-zzltv" Jan 23 17:11:26 crc kubenswrapper[4606]: I0123 17:11:26.889892 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zzltv\" (UID: \"3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e\") " pod="openstack/nova-cell0-conductor-db-sync-zzltv" Jan 23 17:11:26 crc kubenswrapper[4606]: I0123 17:11:26.900994 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zzltv\" (UID: \"3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e\") " pod="openstack/nova-cell0-conductor-db-sync-zzltv" Jan 23 17:11:26 crc kubenswrapper[4606]: I0123 17:11:26.905362 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e-config-data\") pod \"nova-cell0-conductor-db-sync-zzltv\" (UID: \"3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e\") " pod="openstack/nova-cell0-conductor-db-sync-zzltv" Jan 23 17:11:26 crc kubenswrapper[4606]: I0123 17:11:26.913166 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e-scripts\") pod \"nova-cell0-conductor-db-sync-zzltv\" (UID: \"3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e\") " pod="openstack/nova-cell0-conductor-db-sync-zzltv" Jan 23 17:11:26 crc kubenswrapper[4606]: I0123 17:11:26.920168 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gbqh\" (UniqueName: \"kubernetes.io/projected/3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e-kube-api-access-8gbqh\") pod 
\"nova-cell0-conductor-db-sync-zzltv\" (UID: \"3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e\") " pod="openstack/nova-cell0-conductor-db-sync-zzltv" Jan 23 17:11:26 crc kubenswrapper[4606]: I0123 17:11:26.997124 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zzltv" Jan 23 17:11:27 crc kubenswrapper[4606]: I0123 17:11:27.234550 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 23 17:11:27 crc kubenswrapper[4606]: I0123 17:11:27.234873 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 23 17:11:27 crc kubenswrapper[4606]: I0123 17:11:27.320207 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 23 17:11:27 crc kubenswrapper[4606]: I0123 17:11:27.321040 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 23 17:11:27 crc kubenswrapper[4606]: I0123 17:11:27.560938 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zzltv"] Jan 23 17:11:27 crc kubenswrapper[4606]: I0123 17:11:27.651293 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zzltv" event={"ID":"3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e","Type":"ContainerStarted","Data":"8e00f841e64a8c495274a15ea90dfa584e3097c0d3b4ca0d04eb78181b7c1628"} Jan 23 17:11:27 crc kubenswrapper[4606]: I0123 17:11:27.654545 4606 generic.go:334] "Generic (PLEG): container finished" podID="58ce1e0c-a44a-499b-8fca-aab76e6c56f7" containerID="bc43b49af6e5b9b64fb3f3c9ef37444ef489951e640f2421be17bf40fdff862b" exitCode=2 Jan 23 17:11:27 crc kubenswrapper[4606]: I0123 17:11:27.654577 4606 generic.go:334] "Generic (PLEG): container finished" podID="58ce1e0c-a44a-499b-8fca-aab76e6c56f7" containerID="5ae921688cd8381bd15a4a29aef3705edfc5045e097204887da43f1b06f96db5" exitCode=0 Jan 23 17:11:27 crc kubenswrapper[4606]: I0123 17:11:27.654637 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58ce1e0c-a44a-499b-8fca-aab76e6c56f7","Type":"ContainerDied","Data":"bc43b49af6e5b9b64fb3f3c9ef37444ef489951e640f2421be17bf40fdff862b"} Jan 23 17:11:27 crc kubenswrapper[4606]: I0123 17:11:27.654713 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58ce1e0c-a44a-499b-8fca-aab76e6c56f7","Type":"ContainerDied","Data":"5ae921688cd8381bd15a4a29aef3705edfc5045e097204887da43f1b06f96db5"} Jan 23 17:11:27 crc kubenswrapper[4606]: I0123 17:11:27.654904 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 23 17:11:27 crc kubenswrapper[4606]: I0123 17:11:27.654958 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 23 17:11:27 crc kubenswrapper[4606]: I0123 17:11:27.655122 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 23 17:11:27 crc kubenswrapper[4606]: I0123 17:11:27.655259 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 23 17:11:28 crc kubenswrapper[4606]: I0123 17:11:28.668792 4606 generic.go:334] "Generic (PLEG): container finished" podID="58ce1e0c-a44a-499b-8fca-aab76e6c56f7" 
containerID="580bca5e5247addab1720789633278b76b04120c1df2b7a918e9eca71c0c120a" exitCode=0 Jan 23 17:11:28 crc kubenswrapper[4606]: I0123 17:11:28.669459 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58ce1e0c-a44a-499b-8fca-aab76e6c56f7","Type":"ContainerDied","Data":"580bca5e5247addab1720789633278b76b04120c1df2b7a918e9eca71c0c120a"} Jan 23 17:11:29 crc kubenswrapper[4606]: I0123 17:11:29.162700 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 23 17:11:30 crc kubenswrapper[4606]: I0123 17:11:30.513737 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 23 17:11:30 crc kubenswrapper[4606]: I0123 17:11:30.514349 4606 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 17:11:30 crc kubenswrapper[4606]: I0123 17:11:30.522310 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 23 17:11:30 crc kubenswrapper[4606]: I0123 17:11:30.722494 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"83d4b73c-dbe9-4e0a-96b7-a5c7f5b16c6e","Type":"ContainerStarted","Data":"a5af3c59aa607905647b2f568ee5e38e7e2ca681d8ed9173d4b1b0609af3b179"} Jan 23 17:11:30 crc kubenswrapper[4606]: I0123 17:11:30.754708 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.515105958 podStartE2EDuration="39.754692928s" podCreationTimestamp="2026-01-23 17:10:51 +0000 UTC" firstStartedPulling="2026-01-23 17:10:52.637829464 +0000 UTC m=+1086.638021880" lastFinishedPulling="2026-01-23 17:11:29.877416444 +0000 UTC m=+1123.877608850" observedRunningTime="2026-01-23 17:11:30.751951 +0000 UTC m=+1124.752143406" watchObservedRunningTime="2026-01-23 17:11:30.754692928 +0000 UTC m=+1124.754885334" Jan 23 17:11:31 crc kubenswrapper[4606]: I0123 17:11:31.717558 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 23 17:11:31 crc kubenswrapper[4606]: I0123 17:11:31.717951 4606 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 17:11:32 crc kubenswrapper[4606]: I0123 17:11:32.035401 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 23 17:11:32 crc kubenswrapper[4606]: I0123 17:11:32.246696 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-57977896f6-qrk2s" Jan 23 17:11:32 crc kubenswrapper[4606]: I0123 17:11:32.528501 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5b9775877b-mcc82" Jan 23 17:11:35 crc kubenswrapper[4606]: I0123 17:11:35.251490 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5b9775877b-mcc82" Jan 23 17:11:35 crc kubenswrapper[4606]: I0123 17:11:35.309031 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-57977896f6-qrk2s" Jan 23 17:11:35 crc kubenswrapper[4606]: I0123 17:11:35.379990 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b9775877b-mcc82"] Jan 23 17:11:35 crc kubenswrapper[4606]: I0123 17:11:35.773563 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5b9775877b-mcc82" 
podUID="1dd13584-a1b9-4891-85a7-01afd519f746" containerName="horizon-log" containerID="cri-o://a7070e36597fcdf8aa3834affbe73a3753b346f31476a5736d178e186a804eb1" gracePeriod=30 Jan 23 17:11:35 crc kubenswrapper[4606]: I0123 17:11:35.773653 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5b9775877b-mcc82" podUID="1dd13584-a1b9-4891-85a7-01afd519f746" containerName="horizon" containerID="cri-o://e90a8d5e00adaf8d666cda264fef91b8943b952d0cc9298acd9c7b164f5b85c2" gracePeriod=30 Jan 23 17:11:38 crc kubenswrapper[4606]: I0123 17:11:38.810204 4606 generic.go:334] "Generic (PLEG): container finished" podID="58ce1e0c-a44a-499b-8fca-aab76e6c56f7" containerID="653eb5c08e35f1bf83b13e0c163c51753c90561dd42ee368184f74452df16a50" exitCode=0 Jan 23 17:11:38 crc kubenswrapper[4606]: I0123 17:11:38.810417 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58ce1e0c-a44a-499b-8fca-aab76e6c56f7","Type":"ContainerDied","Data":"653eb5c08e35f1bf83b13e0c163c51753c90561dd42ee368184f74452df16a50"} Jan 23 17:11:38 crc kubenswrapper[4606]: I0123 17:11:38.941891 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5b9775877b-mcc82" podUID="1dd13584-a1b9-4891-85a7-01afd519f746" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:47540->10.217.0.145:8443: read: connection reset by peer" Jan 23 17:11:39 crc kubenswrapper[4606]: I0123 17:11:39.828535 4606 generic.go:334] "Generic (PLEG): container finished" podID="1dd13584-a1b9-4891-85a7-01afd519f746" containerID="e90a8d5e00adaf8d666cda264fef91b8943b952d0cc9298acd9c7b164f5b85c2" exitCode=0 Jan 23 17:11:39 crc kubenswrapper[4606]: I0123 17:11:39.828623 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b9775877b-mcc82" event={"ID":"1dd13584-a1b9-4891-85a7-01afd519f746","Type":"ContainerDied","Data":"e90a8d5e00adaf8d666cda264fef91b8943b952d0cc9298acd9c7b164f5b85c2"} Jan 23 17:11:39 crc kubenswrapper[4606]: I0123 17:11:39.828926 4606 scope.go:117] "RemoveContainer" containerID="3acaf29972f41a7b4f105c72ea9d73e660deeda3e9c45ee06e8af4bfd1e970f7" Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.416203 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.477862 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-combined-ca-bundle\") pod \"58ce1e0c-a44a-499b-8fca-aab76e6c56f7\" (UID: \"58ce1e0c-a44a-499b-8fca-aab76e6c56f7\") " Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.478531 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-run-httpd\") pod \"58ce1e0c-a44a-499b-8fca-aab76e6c56f7\" (UID: \"58ce1e0c-a44a-499b-8fca-aab76e6c56f7\") " Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.478678 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-scripts\") pod \"58ce1e0c-a44a-499b-8fca-aab76e6c56f7\" (UID: \"58ce1e0c-a44a-499b-8fca-aab76e6c56f7\") " Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.478778 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-config-data\") pod \"58ce1e0c-a44a-499b-8fca-aab76e6c56f7\" (UID: \"58ce1e0c-a44a-499b-8fca-aab76e6c56f7\") " Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.478895 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-log-httpd\") pod \"58ce1e0c-a44a-499b-8fca-aab76e6c56f7\" (UID: \"58ce1e0c-a44a-499b-8fca-aab76e6c56f7\") " Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.478992 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-sg-core-conf-yaml\") pod \"58ce1e0c-a44a-499b-8fca-aab76e6c56f7\" (UID: \"58ce1e0c-a44a-499b-8fca-aab76e6c56f7\") " Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.479108 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrv58\" (UniqueName: \"kubernetes.io/projected/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-kube-api-access-lrv58\") pod \"58ce1e0c-a44a-499b-8fca-aab76e6c56f7\" (UID: \"58ce1e0c-a44a-499b-8fca-aab76e6c56f7\") " Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.481100 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "58ce1e0c-a44a-499b-8fca-aab76e6c56f7" (UID: "58ce1e0c-a44a-499b-8fca-aab76e6c56f7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.482255 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "58ce1e0c-a44a-499b-8fca-aab76e6c56f7" (UID: "58ce1e0c-a44a-499b-8fca-aab76e6c56f7"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.492050 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-kube-api-access-lrv58" (OuterVolumeSpecName: "kube-api-access-lrv58") pod "58ce1e0c-a44a-499b-8fca-aab76e6c56f7" (UID: "58ce1e0c-a44a-499b-8fca-aab76e6c56f7"). InnerVolumeSpecName "kube-api-access-lrv58". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.538184 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-scripts" (OuterVolumeSpecName: "scripts") pod "58ce1e0c-a44a-499b-8fca-aab76e6c56f7" (UID: "58ce1e0c-a44a-499b-8fca-aab76e6c56f7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.549466 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "58ce1e0c-a44a-499b-8fca-aab76e6c56f7" (UID: "58ce1e0c-a44a-499b-8fca-aab76e6c56f7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.583056 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrv58\" (UniqueName: \"kubernetes.io/projected/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-kube-api-access-lrv58\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.583117 4606 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.583126 4606 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.583136 4606 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.583145 4606 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.694165 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58ce1e0c-a44a-499b-8fca-aab76e6c56f7" (UID: "58ce1e0c-a44a-499b-8fca-aab76e6c56f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.700538 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-config-data" (OuterVolumeSpecName: "config-data") pod "58ce1e0c-a44a-499b-8fca-aab76e6c56f7" (UID: "58ce1e0c-a44a-499b-8fca-aab76e6c56f7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.786258 4606 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.786306 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58ce1e0c-a44a-499b-8fca-aab76e6c56f7-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.841183 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zzltv" event={"ID":"3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e","Type":"ContainerStarted","Data":"8274c717de743f8180f89766f81094c92f073a8828d2289c0d20b05bc4016d40"} Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.843400 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58ce1e0c-a44a-499b-8fca-aab76e6c56f7","Type":"ContainerDied","Data":"26c0742b3fc9a4e1ced7dddc94221e7360af460154232e488d06bd8a4ac33cdf"} Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.843435 4606 scope.go:117] "RemoveContainer" containerID="580bca5e5247addab1720789633278b76b04120c1df2b7a918e9eca71c0c120a" Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.843567 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.878159 4606 scope.go:117] "RemoveContainer" containerID="bc43b49af6e5b9b64fb3f3c9ef37444ef489951e640f2421be17bf40fdff862b" Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.895425 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-zzltv" podStartSLOduration=2.227562569 podStartE2EDuration="14.895408451s" podCreationTimestamp="2026-01-23 17:11:26 +0000 UTC" firstStartedPulling="2026-01-23 17:11:27.582461393 +0000 UTC m=+1121.582653799" lastFinishedPulling="2026-01-23 17:11:40.250307275 +0000 UTC m=+1134.250499681" observedRunningTime="2026-01-23 17:11:40.870074327 +0000 UTC m=+1134.870266753" watchObservedRunningTime="2026-01-23 17:11:40.895408451 +0000 UTC m=+1134.895600857" Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.899178 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.909637 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.911320 4606 scope.go:117] "RemoveContainer" containerID="5ae921688cd8381bd15a4a29aef3705edfc5045e097204887da43f1b06f96db5" Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.942544 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:11:40 crc kubenswrapper[4606]: E0123 17:11:40.943122 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58ce1e0c-a44a-499b-8fca-aab76e6c56f7" containerName="ceilometer-notification-agent" Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.943145 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="58ce1e0c-a44a-499b-8fca-aab76e6c56f7" containerName="ceilometer-notification-agent" Jan 23 17:11:40 crc kubenswrapper[4606]: E0123 17:11:40.943178 4606 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="58ce1e0c-a44a-499b-8fca-aab76e6c56f7" containerName="ceilometer-central-agent" Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.943188 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="58ce1e0c-a44a-499b-8fca-aab76e6c56f7" containerName="ceilometer-central-agent" Jan 23 17:11:40 crc kubenswrapper[4606]: E0123 17:11:40.943211 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58ce1e0c-a44a-499b-8fca-aab76e6c56f7" containerName="proxy-httpd" Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.943219 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="58ce1e0c-a44a-499b-8fca-aab76e6c56f7" containerName="proxy-httpd" Jan 23 17:11:40 crc kubenswrapper[4606]: E0123 17:11:40.943236 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58ce1e0c-a44a-499b-8fca-aab76e6c56f7" containerName="sg-core" Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.943245 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="58ce1e0c-a44a-499b-8fca-aab76e6c56f7" containerName="sg-core" Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.943458 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="58ce1e0c-a44a-499b-8fca-aab76e6c56f7" containerName="ceilometer-central-agent" Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.943487 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="58ce1e0c-a44a-499b-8fca-aab76e6c56f7" containerName="proxy-httpd" Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.943503 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="58ce1e0c-a44a-499b-8fca-aab76e6c56f7" containerName="ceilometer-notification-agent" Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.943520 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="58ce1e0c-a44a-499b-8fca-aab76e6c56f7" containerName="sg-core" Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.945553 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.950170 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.951725 4606 scope.go:117] "RemoveContainer" containerID="653eb5c08e35f1bf83b13e0c163c51753c90561dd42ee368184f74452df16a50" Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.952246 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.952541 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.997555 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-scripts\") pod \"ceilometer-0\" (UID: \"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc\") " pod="openstack/ceilometer-0" Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.997644 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc\") " pod="openstack/ceilometer-0" Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.997673 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc\") " pod="openstack/ceilometer-0" Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.997696 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-log-httpd\") pod \"ceilometer-0\" (UID: \"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc\") " pod="openstack/ceilometer-0" Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.997746 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-run-httpd\") pod \"ceilometer-0\" (UID: \"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc\") " pod="openstack/ceilometer-0" Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.997826 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-742cw\" (UniqueName: \"kubernetes.io/projected/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-kube-api-access-742cw\") pod \"ceilometer-0\" (UID: \"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc\") " pod="openstack/ceilometer-0" Jan 23 17:11:40 crc kubenswrapper[4606]: I0123 17:11:40.997876 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-config-data\") pod \"ceilometer-0\" (UID: \"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc\") " pod="openstack/ceilometer-0" Jan 23 17:11:41 crc kubenswrapper[4606]: I0123 17:11:41.099888 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-742cw\" (UniqueName: 
\"kubernetes.io/projected/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-kube-api-access-742cw\") pod \"ceilometer-0\" (UID: \"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc\") " pod="openstack/ceilometer-0" Jan 23 17:11:41 crc kubenswrapper[4606]: I0123 17:11:41.099956 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-config-data\") pod \"ceilometer-0\" (UID: \"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc\") " pod="openstack/ceilometer-0" Jan 23 17:11:41 crc kubenswrapper[4606]: I0123 17:11:41.099989 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-scripts\") pod \"ceilometer-0\" (UID: \"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc\") " pod="openstack/ceilometer-0" Jan 23 17:11:41 crc kubenswrapper[4606]: I0123 17:11:41.100026 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc\") " pod="openstack/ceilometer-0" Jan 23 17:11:41 crc kubenswrapper[4606]: I0123 17:11:41.100042 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc\") " pod="openstack/ceilometer-0" Jan 23 17:11:41 crc kubenswrapper[4606]: I0123 17:11:41.100059 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-log-httpd\") pod \"ceilometer-0\" (UID: \"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc\") " pod="openstack/ceilometer-0" Jan 23 17:11:41 crc kubenswrapper[4606]: I0123 17:11:41.100120 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-run-httpd\") pod \"ceilometer-0\" (UID: \"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc\") " pod="openstack/ceilometer-0" Jan 23 17:11:41 crc kubenswrapper[4606]: I0123 17:11:41.100783 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-log-httpd\") pod \"ceilometer-0\" (UID: \"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc\") " pod="openstack/ceilometer-0" Jan 23 17:11:41 crc kubenswrapper[4606]: I0123 17:11:41.100836 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-run-httpd\") pod \"ceilometer-0\" (UID: \"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc\") " pod="openstack/ceilometer-0" Jan 23 17:11:41 crc kubenswrapper[4606]: I0123 17:11:41.103885 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc\") " pod="openstack/ceilometer-0" Jan 23 17:11:41 crc kubenswrapper[4606]: I0123 17:11:41.104095 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc\") " pod="openstack/ceilometer-0" Jan 23 17:11:41 crc kubenswrapper[4606]: I0123 17:11:41.104286 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-config-data\") pod \"ceilometer-0\" (UID: \"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc\") " pod="openstack/ceilometer-0" Jan 23 17:11:41 crc kubenswrapper[4606]: I0123 17:11:41.120830 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-scripts\") pod \"ceilometer-0\" (UID: \"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc\") " pod="openstack/ceilometer-0" Jan 23 17:11:41 crc kubenswrapper[4606]: I0123 17:11:41.123372 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-742cw\" (UniqueName: \"kubernetes.io/projected/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-kube-api-access-742cw\") pod \"ceilometer-0\" (UID: \"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc\") " pod="openstack/ceilometer-0" Jan 23 17:11:41 crc kubenswrapper[4606]: I0123 17:11:41.325997 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 17:11:41 crc kubenswrapper[4606]: I0123 17:11:41.881514 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:11:41 crc kubenswrapper[4606]: W0123 17:11:41.892632 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb66ddbd_3b53_4f4b_942f_87f16d0bc3fc.slice/crio-39f7c1d25c1bf4b1f0322e924e88cc18584eb0d4e1ce13def2dce023620cd588 WatchSource:0}: Error finding container 39f7c1d25c1bf4b1f0322e924e88cc18584eb0d4e1ce13def2dce023620cd588: Status 404 returned error can't find the container with id 39f7c1d25c1bf4b1f0322e924e88cc18584eb0d4e1ce13def2dce023620cd588 Jan 23 17:11:42 crc kubenswrapper[4606]: I0123 17:11:42.378077 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58ce1e0c-a44a-499b-8fca-aab76e6c56f7" path="/var/lib/kubelet/pods/58ce1e0c-a44a-499b-8fca-aab76e6c56f7/volumes" Jan 23 17:11:42 crc kubenswrapper[4606]: I0123 17:11:42.864412 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc","Type":"ContainerStarted","Data":"dae658961ff65c2635fb30faa41dcc516715fb49a2f146e3f8ce62db078f1029"} Jan 23 17:11:42 crc kubenswrapper[4606]: I0123 17:11:42.864672 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc","Type":"ContainerStarted","Data":"39f7c1d25c1bf4b1f0322e924e88cc18584eb0d4e1ce13def2dce023620cd588"} Jan 23 17:11:43 crc kubenswrapper[4606]: I0123 17:11:43.877537 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc","Type":"ContainerStarted","Data":"d402fe43fb220bea04bca787abf0f1c46fb76f4654aa6ed4338a18ad7fe79694"} Jan 23 17:11:44 crc kubenswrapper[4606]: I0123 17:11:44.896370 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc","Type":"ContainerStarted","Data":"3f3473d021521aad97482aed63f4bf7ede9cf5539869e39ad71a8441bf73e929"} Jan 23 17:11:45 crc 
Jan 23 17:11:45 crc kubenswrapper[4606]: I0123 17:11:45.908446 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc","Type":"ContainerStarted","Data":"d2b2a8df4393a2e002decbc6e1eb53d9eb24730dac655abcd16ba0b979d6fb22"}
Jan 23 17:11:45 crc kubenswrapper[4606]: I0123 17:11:45.908831 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 23 17:11:45 crc kubenswrapper[4606]: I0123 17:11:45.936757 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.350897901 podStartE2EDuration="5.936741752s" podCreationTimestamp="2026-01-23 17:11:40 +0000 UTC" firstStartedPulling="2026-01-23 17:11:41.90664409 +0000 UTC m=+1135.906836496" lastFinishedPulling="2026-01-23 17:11:45.492487941 +0000 UTC m=+1139.492680347" observedRunningTime="2026-01-23 17:11:45.926619562 +0000 UTC m=+1139.926811978" watchObservedRunningTime="2026-01-23 17:11:45.936741752 +0000 UTC m=+1139.936934158"
Jan 23 17:11:48 crc kubenswrapper[4606]: I0123 17:11:48.404605 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 17:11:48 crc kubenswrapper[4606]: I0123 17:11:48.405251 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="db66ddbd-3b53-4f4b-942f-87f16d0bc3fc" containerName="ceilometer-central-agent" containerID="cri-o://dae658961ff65c2635fb30faa41dcc516715fb49a2f146e3f8ce62db078f1029" gracePeriod=30
Jan 23 17:11:48 crc kubenswrapper[4606]: I0123 17:11:48.405382 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="db66ddbd-3b53-4f4b-942f-87f16d0bc3fc" containerName="proxy-httpd" containerID="cri-o://d2b2a8df4393a2e002decbc6e1eb53d9eb24730dac655abcd16ba0b979d6fb22" gracePeriod=30
Jan 23 17:11:48 crc kubenswrapper[4606]: I0123 17:11:48.405418 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="db66ddbd-3b53-4f4b-942f-87f16d0bc3fc" containerName="sg-core" containerID="cri-o://3f3473d021521aad97482aed63f4bf7ede9cf5539869e39ad71a8441bf73e929" gracePeriod=30
Jan 23 17:11:48 crc kubenswrapper[4606]: I0123 17:11:48.405448 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="db66ddbd-3b53-4f4b-942f-87f16d0bc3fc" containerName="ceilometer-notification-agent" containerID="cri-o://d402fe43fb220bea04bca787abf0f1c46fb76f4654aa6ed4338a18ad7fe79694" gracePeriod=30
Jan 23 17:11:48 crc kubenswrapper[4606]: I0123 17:11:48.436986 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5b9775877b-mcc82" podUID="1dd13584-a1b9-4891-85a7-01afd519f746" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused"
Jan 23 17:11:48 crc kubenswrapper[4606]: I0123 17:11:48.946316 4606 generic.go:334] "Generic (PLEG): container finished" podID="db66ddbd-3b53-4f4b-942f-87f16d0bc3fc" containerID="d2b2a8df4393a2e002decbc6e1eb53d9eb24730dac655abcd16ba0b979d6fb22" exitCode=0
Jan 23 17:11:48 crc kubenswrapper[4606]: I0123 17:11:48.946789 4606 generic.go:334] "Generic (PLEG): container finished" podID="db66ddbd-3b53-4f4b-942f-87f16d0bc3fc" containerID="3f3473d021521aad97482aed63f4bf7ede9cf5539869e39ad71a8441bf73e929" exitCode=2
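The two horizon probe failures bracket its shutdown: first "read: connection reset by peer" (the server died mid-request, at 17:11:38), then "connect: connection refused" above (nothing listening anymore). A sketch of an HTTP readiness check that surfaces exactly those error strings; the probe URL is taken from the log, the helper name is mine, and TLS verification is skipped as kubelet does for HTTPS probes:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeReadiness performs one HTTPS GET the way a readiness probe does and
// returns the failure, which becomes the probeResult="failure" output.
func probeReadiness(url string) error {
	client := &http.Client{
		Timeout: 1 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused" or "read: connection reset by peer"
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probeReadiness("https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/"); err != nil {
		fmt.Println(`probeResult="failure" output:`, err)
	}
}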
Jan 23 17:11:48 crc kubenswrapper[4606]: I0123 17:11:48.946849 4606 generic.go:334] "Generic (PLEG): container finished" podID="db66ddbd-3b53-4f4b-942f-87f16d0bc3fc" containerID="d402fe43fb220bea04bca787abf0f1c46fb76f4654aa6ed4338a18ad7fe79694" exitCode=0
Jan 23 17:11:48 crc kubenswrapper[4606]: I0123 17:11:48.946891 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc","Type":"ContainerDied","Data":"d2b2a8df4393a2e002decbc6e1eb53d9eb24730dac655abcd16ba0b979d6fb22"}
Jan 23 17:11:48 crc kubenswrapper[4606]: I0123 17:11:48.947125 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc","Type":"ContainerDied","Data":"3f3473d021521aad97482aed63f4bf7ede9cf5539869e39ad71a8441bf73e929"}
Jan 23 17:11:48 crc kubenswrapper[4606]: I0123 17:11:48.947145 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc","Type":"ContainerDied","Data":"d402fe43fb220bea04bca787abf0f1c46fb76f4654aa6ed4338a18ad7fe79694"}
Jan 23 17:11:53 crc kubenswrapper[4606]: I0123 17:11:53.985842 4606 generic.go:334] "Generic (PLEG): container finished" podID="3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e" containerID="8274c717de743f8180f89766f81094c92f073a8828d2289c0d20b05bc4016d40" exitCode=0
Jan 23 17:11:53 crc kubenswrapper[4606]: I0123 17:11:53.985927 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zzltv" event={"ID":"3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e","Type":"ContainerDied","Data":"8274c717de743f8180f89766f81094c92f073a8828d2289c0d20b05bc4016d40"}
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zzltv" Jan 23 17:11:55 crc kubenswrapper[4606]: I0123 17:11:55.424928 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gbqh\" (UniqueName: \"kubernetes.io/projected/3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e-kube-api-access-8gbqh\") pod \"3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e\" (UID: \"3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e\") " Jan 23 17:11:55 crc kubenswrapper[4606]: I0123 17:11:55.425474 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e-scripts\") pod \"3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e\" (UID: \"3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e\") " Jan 23 17:11:55 crc kubenswrapper[4606]: I0123 17:11:55.425546 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e-combined-ca-bundle\") pod \"3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e\" (UID: \"3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e\") " Jan 23 17:11:55 crc kubenswrapper[4606]: I0123 17:11:55.426132 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e-config-data\") pod \"3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e\" (UID: \"3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e\") " Jan 23 17:11:55 crc kubenswrapper[4606]: I0123 17:11:55.432480 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e-kube-api-access-8gbqh" (OuterVolumeSpecName: "kube-api-access-8gbqh") pod "3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e" (UID: "3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e"). InnerVolumeSpecName "kube-api-access-8gbqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:11:55 crc kubenswrapper[4606]: I0123 17:11:55.435448 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e-scripts" (OuterVolumeSpecName: "scripts") pod "3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e" (UID: "3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:11:55 crc kubenswrapper[4606]: I0123 17:11:55.457878 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e" (UID: "3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:11:55 crc kubenswrapper[4606]: I0123 17:11:55.462485 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e-config-data" (OuterVolumeSpecName: "config-data") pod "3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e" (UID: "3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:11:55 crc kubenswrapper[4606]: I0123 17:11:55.527556 4606 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:55 crc kubenswrapper[4606]: I0123 17:11:55.527618 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:55 crc kubenswrapper[4606]: I0123 17:11:55.527632 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gbqh\" (UniqueName: \"kubernetes.io/projected/3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e-kube-api-access-8gbqh\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:55 crc kubenswrapper[4606]: I0123 17:11:55.527646 4606 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:55 crc kubenswrapper[4606]: I0123 17:11:55.961259 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.002869 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zzltv" event={"ID":"3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e","Type":"ContainerDied","Data":"8e00f841e64a8c495274a15ea90dfa584e3097c0d3b4ca0d04eb78181b7c1628"} Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.002909 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e00f841e64a8c495274a15ea90dfa584e3097c0d3b4ca0d04eb78181b7c1628" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.002908 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zzltv" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.007736 4606 generic.go:334] "Generic (PLEG): container finished" podID="db66ddbd-3b53-4f4b-942f-87f16d0bc3fc" containerID="dae658961ff65c2635fb30faa41dcc516715fb49a2f146e3f8ce62db078f1029" exitCode=0 Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.007836 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc","Type":"ContainerDied","Data":"dae658961ff65c2635fb30faa41dcc516715fb49a2f146e3f8ce62db078f1029"} Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.007910 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc","Type":"ContainerDied","Data":"39f7c1d25c1bf4b1f0322e924e88cc18584eb0d4e1ce13def2dce023620cd588"} Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.007971 4606 scope.go:117] "RemoveContainer" containerID="d2b2a8df4393a2e002decbc6e1eb53d9eb24730dac655abcd16ba0b979d6fb22" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.008144 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.034453 4606 scope.go:117] "RemoveContainer" containerID="3f3473d021521aad97482aed63f4bf7ede9cf5539869e39ad71a8441bf73e929" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.060575 4606 scope.go:117] "RemoveContainer" containerID="d402fe43fb220bea04bca787abf0f1c46fb76f4654aa6ed4338a18ad7fe79694" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.089937 4606 scope.go:117] "RemoveContainer" containerID="dae658961ff65c2635fb30faa41dcc516715fb49a2f146e3f8ce62db078f1029" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.097958 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 23 17:11:56 crc kubenswrapper[4606]: E0123 17:11:56.098351 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db66ddbd-3b53-4f4b-942f-87f16d0bc3fc" containerName="proxy-httpd" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.098418 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="db66ddbd-3b53-4f4b-942f-87f16d0bc3fc" containerName="proxy-httpd" Jan 23 17:11:56 crc kubenswrapper[4606]: E0123 17:11:56.098481 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db66ddbd-3b53-4f4b-942f-87f16d0bc3fc" containerName="ceilometer-central-agent" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.098539 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="db66ddbd-3b53-4f4b-942f-87f16d0bc3fc" containerName="ceilometer-central-agent" Jan 23 17:11:56 crc kubenswrapper[4606]: E0123 17:11:56.098612 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e" containerName="nova-cell0-conductor-db-sync" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.098916 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e" containerName="nova-cell0-conductor-db-sync" Jan 23 17:11:56 crc kubenswrapper[4606]: E0123 17:11:56.099004 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db66ddbd-3b53-4f4b-942f-87f16d0bc3fc" containerName="sg-core" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.099064 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="db66ddbd-3b53-4f4b-942f-87f16d0bc3fc" containerName="sg-core" Jan 23 17:11:56 crc kubenswrapper[4606]: E0123 17:11:56.099121 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db66ddbd-3b53-4f4b-942f-87f16d0bc3fc" containerName="ceilometer-notification-agent" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.099177 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="db66ddbd-3b53-4f4b-942f-87f16d0bc3fc" containerName="ceilometer-notification-agent" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.099375 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e" containerName="nova-cell0-conductor-db-sync" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.099440 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="db66ddbd-3b53-4f4b-942f-87f16d0bc3fc" containerName="proxy-httpd" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.099507 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="db66ddbd-3b53-4f4b-942f-87f16d0bc3fc" containerName="ceilometer-notification-agent" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.099558 4606 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="db66ddbd-3b53-4f4b-942f-87f16d0bc3fc" containerName="sg-core" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.099640 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="db66ddbd-3b53-4f4b-942f-87f16d0bc3fc" containerName="ceilometer-central-agent" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.100250 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.102695 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-v5b5x" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.103143 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.114484 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.135526 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-scripts\") pod \"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc\" (UID: \"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc\") " Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.135672 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-742cw\" (UniqueName: \"kubernetes.io/projected/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-kube-api-access-742cw\") pod \"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc\" (UID: \"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc\") " Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.135753 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-combined-ca-bundle\") pod \"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc\" (UID: \"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc\") " Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.135845 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-sg-core-conf-yaml\") pod \"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc\" (UID: \"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc\") " Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.135974 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-config-data\") pod \"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc\" (UID: \"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc\") " Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.136108 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-run-httpd\") pod \"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc\" (UID: \"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc\") " Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.136242 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-log-httpd\") pod \"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc\" (UID: \"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc\") " Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.137236 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "db66ddbd-3b53-4f4b-942f-87f16d0bc3fc" (UID: "db66ddbd-3b53-4f4b-942f-87f16d0bc3fc"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.149256 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "db66ddbd-3b53-4f4b-942f-87f16d0bc3fc" (UID: "db66ddbd-3b53-4f4b-942f-87f16d0bc3fc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.159760 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-scripts" (OuterVolumeSpecName: "scripts") pod "db66ddbd-3b53-4f4b-942f-87f16d0bc3fc" (UID: "db66ddbd-3b53-4f4b-942f-87f16d0bc3fc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.160544 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-kube-api-access-742cw" (OuterVolumeSpecName: "kube-api-access-742cw") pod "db66ddbd-3b53-4f4b-942f-87f16d0bc3fc" (UID: "db66ddbd-3b53-4f4b-942f-87f16d0bc3fc"). InnerVolumeSpecName "kube-api-access-742cw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.161100 4606 scope.go:117] "RemoveContainer" containerID="d2b2a8df4393a2e002decbc6e1eb53d9eb24730dac655abcd16ba0b979d6fb22" Jan 23 17:11:56 crc kubenswrapper[4606]: E0123 17:11:56.161848 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2b2a8df4393a2e002decbc6e1eb53d9eb24730dac655abcd16ba0b979d6fb22\": container with ID starting with d2b2a8df4393a2e002decbc6e1eb53d9eb24730dac655abcd16ba0b979d6fb22 not found: ID does not exist" containerID="d2b2a8df4393a2e002decbc6e1eb53d9eb24730dac655abcd16ba0b979d6fb22" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.161948 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2b2a8df4393a2e002decbc6e1eb53d9eb24730dac655abcd16ba0b979d6fb22"} err="failed to get container status \"d2b2a8df4393a2e002decbc6e1eb53d9eb24730dac655abcd16ba0b979d6fb22\": rpc error: code = NotFound desc = could not find container \"d2b2a8df4393a2e002decbc6e1eb53d9eb24730dac655abcd16ba0b979d6fb22\": container with ID starting with d2b2a8df4393a2e002decbc6e1eb53d9eb24730dac655abcd16ba0b979d6fb22 not found: ID does not exist" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.162049 4606 scope.go:117] "RemoveContainer" containerID="3f3473d021521aad97482aed63f4bf7ede9cf5539869e39ad71a8441bf73e929" Jan 23 17:11:56 crc kubenswrapper[4606]: E0123 17:11:56.163133 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f3473d021521aad97482aed63f4bf7ede9cf5539869e39ad71a8441bf73e929\": container with ID starting with 3f3473d021521aad97482aed63f4bf7ede9cf5539869e39ad71a8441bf73e929 not found: ID does not exist" containerID="3f3473d021521aad97482aed63f4bf7ede9cf5539869e39ad71a8441bf73e929" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.163244 4606 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f3473d021521aad97482aed63f4bf7ede9cf5539869e39ad71a8441bf73e929"} err="failed to get container status \"3f3473d021521aad97482aed63f4bf7ede9cf5539869e39ad71a8441bf73e929\": rpc error: code = NotFound desc = could not find container \"3f3473d021521aad97482aed63f4bf7ede9cf5539869e39ad71a8441bf73e929\": container with ID starting with 3f3473d021521aad97482aed63f4bf7ede9cf5539869e39ad71a8441bf73e929 not found: ID does not exist" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.163345 4606 scope.go:117] "RemoveContainer" containerID="d402fe43fb220bea04bca787abf0f1c46fb76f4654aa6ed4338a18ad7fe79694" Jan 23 17:11:56 crc kubenswrapper[4606]: E0123 17:11:56.163817 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d402fe43fb220bea04bca787abf0f1c46fb76f4654aa6ed4338a18ad7fe79694\": container with ID starting with d402fe43fb220bea04bca787abf0f1c46fb76f4654aa6ed4338a18ad7fe79694 not found: ID does not exist" containerID="d402fe43fb220bea04bca787abf0f1c46fb76f4654aa6ed4338a18ad7fe79694" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.163857 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d402fe43fb220bea04bca787abf0f1c46fb76f4654aa6ed4338a18ad7fe79694"} err="failed to get container status \"d402fe43fb220bea04bca787abf0f1c46fb76f4654aa6ed4338a18ad7fe79694\": rpc error: code = NotFound desc = could not find container \"d402fe43fb220bea04bca787abf0f1c46fb76f4654aa6ed4338a18ad7fe79694\": container with ID starting with d402fe43fb220bea04bca787abf0f1c46fb76f4654aa6ed4338a18ad7fe79694 not found: ID does not exist" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.163880 4606 scope.go:117] "RemoveContainer" containerID="dae658961ff65c2635fb30faa41dcc516715fb49a2f146e3f8ce62db078f1029" Jan 23 17:11:56 crc kubenswrapper[4606]: E0123 17:11:56.164149 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dae658961ff65c2635fb30faa41dcc516715fb49a2f146e3f8ce62db078f1029\": container with ID starting with dae658961ff65c2635fb30faa41dcc516715fb49a2f146e3f8ce62db078f1029 not found: ID does not exist" containerID="dae658961ff65c2635fb30faa41dcc516715fb49a2f146e3f8ce62db078f1029" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.164171 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dae658961ff65c2635fb30faa41dcc516715fb49a2f146e3f8ce62db078f1029"} err="failed to get container status \"dae658961ff65c2635fb30faa41dcc516715fb49a2f146e3f8ce62db078f1029\": rpc error: code = NotFound desc = could not find container \"dae658961ff65c2635fb30faa41dcc516715fb49a2f146e3f8ce62db078f1029\": container with ID starting with dae658961ff65c2635fb30faa41dcc516715fb49a2f146e3f8ce62db078f1029 not found: ID does not exist" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.199059 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "db66ddbd-3b53-4f4b-942f-87f16d0bc3fc" (UID: "db66ddbd-3b53-4f4b-942f-87f16d0bc3fc"). InnerVolumeSpecName "sg-core-conf-yaml". 
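The RemoveContainer / "ContainerStatus from runtime service failed ... NotFound" / "DeleteContainer returned error" triplets above are benign: the containers were already removed by the time cleanup re-ran, and a gRPC NotFound during deletion means the desired end state is already reached. A sketch of that tolerate-NotFound idiom using gRPC status codes (the wrapper name is mine):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// ignoreNotFound converts a gRPC NotFound from the runtime into success,
// the way "could not find container ..." is tolerated during cleanup.
func ignoreNotFound(err error) error {
	if status.Code(err) == codes.NotFound {
		return nil
	}
	return err
}

func main() {
	// Simulate the runtime's response for an already-deleted container.
	err := status.Error(codes.NotFound, `could not find container "d2b2a8df...": ID does not exist`)
	if ignoreNotFound(err) == nil {
		fmt.Println("DeleteContainer: container already removed; ignoring NotFound")
	}
}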
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.229818 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db66ddbd-3b53-4f4b-942f-87f16d0bc3fc" (UID: "db66ddbd-3b53-4f4b-942f-87f16d0bc3fc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.237939 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17c593cf-ada1-4111-9b42-4e645428aeff-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"17c593cf-ada1-4111-9b42-4e645428aeff\") " pod="openstack/nova-cell0-conductor-0" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.238099 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17c593cf-ada1-4111-9b42-4e645428aeff-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"17c593cf-ada1-4111-9b42-4e645428aeff\") " pod="openstack/nova-cell0-conductor-0" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.238230 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jpp7\" (UniqueName: \"kubernetes.io/projected/17c593cf-ada1-4111-9b42-4e645428aeff-kube-api-access-9jpp7\") pod \"nova-cell0-conductor-0\" (UID: \"17c593cf-ada1-4111-9b42-4e645428aeff\") " pod="openstack/nova-cell0-conductor-0" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.238456 4606 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.238468 4606 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.238477 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-742cw\" (UniqueName: \"kubernetes.io/projected/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-kube-api-access-742cw\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.238487 4606 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.238496 4606 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.238876 4606 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.281820 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-config-data" (OuterVolumeSpecName: "config-data") pod 
"db66ddbd-3b53-4f4b-942f-87f16d0bc3fc" (UID: "db66ddbd-3b53-4f4b-942f-87f16d0bc3fc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.340604 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17c593cf-ada1-4111-9b42-4e645428aeff-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"17c593cf-ada1-4111-9b42-4e645428aeff\") " pod="openstack/nova-cell0-conductor-0" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.340665 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17c593cf-ada1-4111-9b42-4e645428aeff-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"17c593cf-ada1-4111-9b42-4e645428aeff\") " pod="openstack/nova-cell0-conductor-0" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.340728 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jpp7\" (UniqueName: \"kubernetes.io/projected/17c593cf-ada1-4111-9b42-4e645428aeff-kube-api-access-9jpp7\") pod \"nova-cell0-conductor-0\" (UID: \"17c593cf-ada1-4111-9b42-4e645428aeff\") " pod="openstack/nova-cell0-conductor-0" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.340807 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.344824 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17c593cf-ada1-4111-9b42-4e645428aeff-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"17c593cf-ada1-4111-9b42-4e645428aeff\") " pod="openstack/nova-cell0-conductor-0" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.345372 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17c593cf-ada1-4111-9b42-4e645428aeff-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"17c593cf-ada1-4111-9b42-4e645428aeff\") " pod="openstack/nova-cell0-conductor-0" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.356801 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jpp7\" (UniqueName: \"kubernetes.io/projected/17c593cf-ada1-4111-9b42-4e645428aeff-kube-api-access-9jpp7\") pod \"nova-cell0-conductor-0\" (UID: \"17c593cf-ada1-4111-9b42-4e645428aeff\") " pod="openstack/nova-cell0-conductor-0" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.446366 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.463775 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.479058 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.486690 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.488826 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.491992 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.492134 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.495746 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.646681 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40043888-693b-4833-9dad-f05ece6be64e-scripts\") pod \"ceilometer-0\" (UID: \"40043888-693b-4833-9dad-f05ece6be64e\") " pod="openstack/ceilometer-0" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.647126 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40043888-693b-4833-9dad-f05ece6be64e-config-data\") pod \"ceilometer-0\" (UID: \"40043888-693b-4833-9dad-f05ece6be64e\") " pod="openstack/ceilometer-0" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.647175 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40043888-693b-4833-9dad-f05ece6be64e-log-httpd\") pod \"ceilometer-0\" (UID: \"40043888-693b-4833-9dad-f05ece6be64e\") " pod="openstack/ceilometer-0" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.647288 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40043888-693b-4833-9dad-f05ece6be64e-run-httpd\") pod \"ceilometer-0\" (UID: \"40043888-693b-4833-9dad-f05ece6be64e\") " pod="openstack/ceilometer-0" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.647316 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/40043888-693b-4833-9dad-f05ece6be64e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"40043888-693b-4833-9dad-f05ece6be64e\") " pod="openstack/ceilometer-0" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.647379 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40043888-693b-4833-9dad-f05ece6be64e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"40043888-693b-4833-9dad-f05ece6be64e\") " pod="openstack/ceilometer-0" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.647411 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stkqx\" (UniqueName: \"kubernetes.io/projected/40043888-693b-4833-9dad-f05ece6be64e-kube-api-access-stkqx\") pod \"ceilometer-0\" (UID: \"40043888-693b-4833-9dad-f05ece6be64e\") " pod="openstack/ceilometer-0" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.749727 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40043888-693b-4833-9dad-f05ece6be64e-run-httpd\") pod \"ceilometer-0\" (UID: \"40043888-693b-4833-9dad-f05ece6be64e\") " pod="openstack/ceilometer-0" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.749840 4606 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/40043888-693b-4833-9dad-f05ece6be64e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"40043888-693b-4833-9dad-f05ece6be64e\") " pod="openstack/ceilometer-0" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.749925 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40043888-693b-4833-9dad-f05ece6be64e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"40043888-693b-4833-9dad-f05ece6be64e\") " pod="openstack/ceilometer-0" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.750002 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stkqx\" (UniqueName: \"kubernetes.io/projected/40043888-693b-4833-9dad-f05ece6be64e-kube-api-access-stkqx\") pod \"ceilometer-0\" (UID: \"40043888-693b-4833-9dad-f05ece6be64e\") " pod="openstack/ceilometer-0" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.750182 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40043888-693b-4833-9dad-f05ece6be64e-run-httpd\") pod \"ceilometer-0\" (UID: \"40043888-693b-4833-9dad-f05ece6be64e\") " pod="openstack/ceilometer-0" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.750192 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40043888-693b-4833-9dad-f05ece6be64e-scripts\") pod \"ceilometer-0\" (UID: \"40043888-693b-4833-9dad-f05ece6be64e\") " pod="openstack/ceilometer-0" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.750241 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40043888-693b-4833-9dad-f05ece6be64e-config-data\") pod \"ceilometer-0\" (UID: \"40043888-693b-4833-9dad-f05ece6be64e\") " pod="openstack/ceilometer-0" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.750408 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40043888-693b-4833-9dad-f05ece6be64e-log-httpd\") pod \"ceilometer-0\" (UID: \"40043888-693b-4833-9dad-f05ece6be64e\") " pod="openstack/ceilometer-0" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.751479 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40043888-693b-4833-9dad-f05ece6be64e-log-httpd\") pod \"ceilometer-0\" (UID: \"40043888-693b-4833-9dad-f05ece6be64e\") " pod="openstack/ceilometer-0" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.756438 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/40043888-693b-4833-9dad-f05ece6be64e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"40043888-693b-4833-9dad-f05ece6be64e\") " pod="openstack/ceilometer-0" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.757609 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40043888-693b-4833-9dad-f05ece6be64e-config-data\") pod \"ceilometer-0\" (UID: \"40043888-693b-4833-9dad-f05ece6be64e\") " pod="openstack/ceilometer-0" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.758697 4606 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40043888-693b-4833-9dad-f05ece6be64e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"40043888-693b-4833-9dad-f05ece6be64e\") " pod="openstack/ceilometer-0" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.761571 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40043888-693b-4833-9dad-f05ece6be64e-scripts\") pod \"ceilometer-0\" (UID: \"40043888-693b-4833-9dad-f05ece6be64e\") " pod="openstack/ceilometer-0" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.772921 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stkqx\" (UniqueName: \"kubernetes.io/projected/40043888-693b-4833-9dad-f05ece6be64e-kube-api-access-stkqx\") pod \"ceilometer-0\" (UID: \"40043888-693b-4833-9dad-f05ece6be64e\") " pod="openstack/ceilometer-0" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.808299 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 17:11:56 crc kubenswrapper[4606]: I0123 17:11:56.950154 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 23 17:11:57 crc kubenswrapper[4606]: I0123 17:11:57.030811 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"17c593cf-ada1-4111-9b42-4e645428aeff","Type":"ContainerStarted","Data":"d5a32f99d7ec61cf269da808b42b41888dec0cab54f80c91fb70f221f93a5413"} Jan 23 17:11:57 crc kubenswrapper[4606]: I0123 17:11:57.352818 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:11:58 crc kubenswrapper[4606]: I0123 17:11:58.043152 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"17c593cf-ada1-4111-9b42-4e645428aeff","Type":"ContainerStarted","Data":"a4ade32e5c689af6a600a53ae49aa8580b5c1aa97ae9a678bde2e872daf49015"} Jan 23 17:11:58 crc kubenswrapper[4606]: I0123 17:11:58.045799 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 23 17:11:58 crc kubenswrapper[4606]: I0123 17:11:58.048232 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40043888-693b-4833-9dad-f05ece6be64e","Type":"ContainerStarted","Data":"728ac3c9b9e0f316af0c98392ce77a53d0d9da6f57ee21a0a0a0ec7a17dc4888"} Jan 23 17:11:58 crc kubenswrapper[4606]: I0123 17:11:58.048352 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40043888-693b-4833-9dad-f05ece6be64e","Type":"ContainerStarted","Data":"ab343f77667faf0827559801643c7037e1e2c3214ab4fb577a24a9cb0fd13632"} Jan 23 17:11:58 crc kubenswrapper[4606]: I0123 17:11:58.072091 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.072076447 podStartE2EDuration="2.072076447s" podCreationTimestamp="2026-01-23 17:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:11:58.066856838 +0000 UTC m=+1152.067049264" watchObservedRunningTime="2026-01-23 17:11:58.072076447 +0000 UTC m=+1152.072268853" Jan 23 17:11:58 crc kubenswrapper[4606]: I0123 17:11:58.392410 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db66ddbd-3b53-4f4b-942f-87f16d0bc3fc" 
path="/var/lib/kubelet/pods/db66ddbd-3b53-4f4b-942f-87f16d0bc3fc/volumes" Jan 23 17:11:58 crc kubenswrapper[4606]: I0123 17:11:58.436729 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5b9775877b-mcc82" podUID="1dd13584-a1b9-4891-85a7-01afd519f746" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Jan 23 17:11:58 crc kubenswrapper[4606]: I0123 17:11:58.436911 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5b9775877b-mcc82" Jan 23 17:11:59 crc kubenswrapper[4606]: I0123 17:11:59.058166 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40043888-693b-4833-9dad-f05ece6be64e","Type":"ContainerStarted","Data":"da43865ba3228a2447c1b80fb4e3fa6ec17472a2b9391290eb643f8b89157d70"} Jan 23 17:12:00 crc kubenswrapper[4606]: I0123 17:12:00.068905 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40043888-693b-4833-9dad-f05ece6be64e","Type":"ContainerStarted","Data":"63e93828671727c5daa0d97e2bc90426185b4304b5e919661054b05fa5e9766f"} Jan 23 17:12:01 crc kubenswrapper[4606]: I0123 17:12:01.081619 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40043888-693b-4833-9dad-f05ece6be64e","Type":"ContainerStarted","Data":"c0771ea616e41aea8c3b76e26845bb3cdff925d1ab3febbd42ed4b7087eb583b"} Jan 23 17:12:01 crc kubenswrapper[4606]: I0123 17:12:01.082117 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 17:12:06 crc kubenswrapper[4606]: I0123 17:12:06.143338 4606 generic.go:334] "Generic (PLEG): container finished" podID="1dd13584-a1b9-4891-85a7-01afd519f746" containerID="a7070e36597fcdf8aa3834affbe73a3753b346f31476a5736d178e186a804eb1" exitCode=137 Jan 23 17:12:06 crc kubenswrapper[4606]: I0123 17:12:06.143842 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b9775877b-mcc82" event={"ID":"1dd13584-a1b9-4891-85a7-01afd519f746","Type":"ContainerDied","Data":"a7070e36597fcdf8aa3834affbe73a3753b346f31476a5736d178e186a804eb1"} Jan 23 17:12:06 crc kubenswrapper[4606]: I0123 17:12:06.143869 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b9775877b-mcc82" event={"ID":"1dd13584-a1b9-4891-85a7-01afd519f746","Type":"ContainerDied","Data":"b71fc2149df3ad2cbe56ff9d1d4f28f2fd41ef9e70a02a90abf73df172177bba"} Jan 23 17:12:06 crc kubenswrapper[4606]: I0123 17:12:06.143879 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b71fc2149df3ad2cbe56ff9d1d4f28f2fd41ef9e70a02a90abf73df172177bba" Jan 23 17:12:06 crc kubenswrapper[4606]: I0123 17:12:06.201963 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5b9775877b-mcc82" Jan 23 17:12:06 crc kubenswrapper[4606]: I0123 17:12:06.234161 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=6.833401579 podStartE2EDuration="10.234130552s" podCreationTimestamp="2026-01-23 17:11:56 +0000 UTC" firstStartedPulling="2026-01-23 17:11:57.340232811 +0000 UTC m=+1151.340425207" lastFinishedPulling="2026-01-23 17:12:00.740961754 +0000 UTC m=+1154.741154180" observedRunningTime="2026-01-23 17:12:01.117082713 +0000 UTC m=+1155.117275129" watchObservedRunningTime="2026-01-23 17:12:06.234130552 +0000 UTC m=+1160.234322948" Jan 23 17:12:06 crc kubenswrapper[4606]: I0123 17:12:06.351467 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdnw2\" (UniqueName: \"kubernetes.io/projected/1dd13584-a1b9-4891-85a7-01afd519f746-kube-api-access-vdnw2\") pod \"1dd13584-a1b9-4891-85a7-01afd519f746\" (UID: \"1dd13584-a1b9-4891-85a7-01afd519f746\") " Jan 23 17:12:06 crc kubenswrapper[4606]: I0123 17:12:06.351843 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1dd13584-a1b9-4891-85a7-01afd519f746-logs\") pod \"1dd13584-a1b9-4891-85a7-01afd519f746\" (UID: \"1dd13584-a1b9-4891-85a7-01afd519f746\") " Jan 23 17:12:06 crc kubenswrapper[4606]: I0123 17:12:06.351884 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1dd13584-a1b9-4891-85a7-01afd519f746-scripts\") pod \"1dd13584-a1b9-4891-85a7-01afd519f746\" (UID: \"1dd13584-a1b9-4891-85a7-01afd519f746\") " Jan 23 17:12:06 crc kubenswrapper[4606]: I0123 17:12:06.351992 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dd13584-a1b9-4891-85a7-01afd519f746-combined-ca-bundle\") pod \"1dd13584-a1b9-4891-85a7-01afd519f746\" (UID: \"1dd13584-a1b9-4891-85a7-01afd519f746\") " Jan 23 17:12:06 crc kubenswrapper[4606]: I0123 17:12:06.352014 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1dd13584-a1b9-4891-85a7-01afd519f746-horizon-tls-certs\") pod \"1dd13584-a1b9-4891-85a7-01afd519f746\" (UID: \"1dd13584-a1b9-4891-85a7-01afd519f746\") " Jan 23 17:12:06 crc kubenswrapper[4606]: I0123 17:12:06.352041 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1dd13584-a1b9-4891-85a7-01afd519f746-config-data\") pod \"1dd13584-a1b9-4891-85a7-01afd519f746\" (UID: \"1dd13584-a1b9-4891-85a7-01afd519f746\") " Jan 23 17:12:06 crc kubenswrapper[4606]: I0123 17:12:06.352079 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1dd13584-a1b9-4891-85a7-01afd519f746-horizon-secret-key\") pod \"1dd13584-a1b9-4891-85a7-01afd519f746\" (UID: \"1dd13584-a1b9-4891-85a7-01afd519f746\") " Jan 23 17:12:06 crc kubenswrapper[4606]: I0123 17:12:06.354918 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1dd13584-a1b9-4891-85a7-01afd519f746-logs" (OuterVolumeSpecName: "logs") pod "1dd13584-a1b9-4891-85a7-01afd519f746" (UID: "1dd13584-a1b9-4891-85a7-01afd519f746"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:12:06 crc kubenswrapper[4606]: I0123 17:12:06.367009 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dd13584-a1b9-4891-85a7-01afd519f746-kube-api-access-vdnw2" (OuterVolumeSpecName: "kube-api-access-vdnw2") pod "1dd13584-a1b9-4891-85a7-01afd519f746" (UID: "1dd13584-a1b9-4891-85a7-01afd519f746"). InnerVolumeSpecName "kube-api-access-vdnw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:12:06 crc kubenswrapper[4606]: I0123 17:12:06.397325 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dd13584-a1b9-4891-85a7-01afd519f746-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1dd13584-a1b9-4891-85a7-01afd519f746" (UID: "1dd13584-a1b9-4891-85a7-01afd519f746"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:12:06 crc kubenswrapper[4606]: I0123 17:12:06.417621 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dd13584-a1b9-4891-85a7-01afd519f746-config-data" (OuterVolumeSpecName: "config-data") pod "1dd13584-a1b9-4891-85a7-01afd519f746" (UID: "1dd13584-a1b9-4891-85a7-01afd519f746"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:12:06 crc kubenswrapper[4606]: I0123 17:12:06.417706 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dd13584-a1b9-4891-85a7-01afd519f746-scripts" (OuterVolumeSpecName: "scripts") pod "1dd13584-a1b9-4891-85a7-01afd519f746" (UID: "1dd13584-a1b9-4891-85a7-01afd519f746"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:12:06 crc kubenswrapper[4606]: I0123 17:12:06.428855 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dd13584-a1b9-4891-85a7-01afd519f746-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1dd13584-a1b9-4891-85a7-01afd519f746" (UID: "1dd13584-a1b9-4891-85a7-01afd519f746"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:12:06 crc kubenswrapper[4606]: I0123 17:12:06.453577 4606 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dd13584-a1b9-4891-85a7-01afd519f746-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:06 crc kubenswrapper[4606]: I0123 17:12:06.453621 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1dd13584-a1b9-4891-85a7-01afd519f746-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:06 crc kubenswrapper[4606]: I0123 17:12:06.453631 4606 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1dd13584-a1b9-4891-85a7-01afd519f746-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:06 crc kubenswrapper[4606]: I0123 17:12:06.453639 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdnw2\" (UniqueName: \"kubernetes.io/projected/1dd13584-a1b9-4891-85a7-01afd519f746-kube-api-access-vdnw2\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:06 crc kubenswrapper[4606]: I0123 17:12:06.453649 4606 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1dd13584-a1b9-4891-85a7-01afd519f746-logs\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:06 crc kubenswrapper[4606]: I0123 17:12:06.453657 4606 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1dd13584-a1b9-4891-85a7-01afd519f746-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:06 crc kubenswrapper[4606]: I0123 17:12:06.469700 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dd13584-a1b9-4891-85a7-01afd519f746-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "1dd13584-a1b9-4891-85a7-01afd519f746" (UID: "1dd13584-a1b9-4891-85a7-01afd519f746"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:12:06 crc kubenswrapper[4606]: I0123 17:12:06.511210 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 23 17:12:06 crc kubenswrapper[4606]: I0123 17:12:06.555429 4606 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1dd13584-a1b9-4891-85a7-01afd519f746-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:06 crc kubenswrapper[4606]: I0123 17:12:06.998087 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-lqzlc"] Jan 23 17:12:06 crc kubenswrapper[4606]: E0123 17:12:06.998725 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dd13584-a1b9-4891-85a7-01afd519f746" containerName="horizon" Jan 23 17:12:06 crc kubenswrapper[4606]: I0123 17:12:06.998749 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dd13584-a1b9-4891-85a7-01afd519f746" containerName="horizon" Jan 23 17:12:06 crc kubenswrapper[4606]: E0123 17:12:06.998765 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dd13584-a1b9-4891-85a7-01afd519f746" containerName="horizon-log" Jan 23 17:12:06 crc kubenswrapper[4606]: I0123 17:12:06.998772 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dd13584-a1b9-4891-85a7-01afd519f746" containerName="horizon-log" Jan 23 17:12:06 crc kubenswrapper[4606]: E0123 17:12:06.998785 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dd13584-a1b9-4891-85a7-01afd519f746" containerName="horizon" Jan 23 17:12:06 crc kubenswrapper[4606]: I0123 17:12:06.998793 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dd13584-a1b9-4891-85a7-01afd519f746" containerName="horizon" Jan 23 17:12:06 crc kubenswrapper[4606]: I0123 17:12:06.999020 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dd13584-a1b9-4891-85a7-01afd519f746" containerName="horizon" Jan 23 17:12:06 crc kubenswrapper[4606]: I0123 17:12:06.999056 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dd13584-a1b9-4891-85a7-01afd519f746" containerName="horizon" Jan 23 17:12:06 crc kubenswrapper[4606]: I0123 17:12:06.999067 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dd13584-a1b9-4891-85a7-01afd519f746" containerName="horizon-log" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:06.999974 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-lqzlc" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.005508 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.005777 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.024234 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-lqzlc"] Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.066725 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79zd7\" (UniqueName: \"kubernetes.io/projected/4cc7ebc9-1a3e-41e8-a0c3-269d425ec721-kube-api-access-79zd7\") pod \"nova-cell0-cell-mapping-lqzlc\" (UID: \"4cc7ebc9-1a3e-41e8-a0c3-269d425ec721\") " pod="openstack/nova-cell0-cell-mapping-lqzlc" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.066783 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cc7ebc9-1a3e-41e8-a0c3-269d425ec721-config-data\") pod \"nova-cell0-cell-mapping-lqzlc\" (UID: \"4cc7ebc9-1a3e-41e8-a0c3-269d425ec721\") " pod="openstack/nova-cell0-cell-mapping-lqzlc" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.066843 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cc7ebc9-1a3e-41e8-a0c3-269d425ec721-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-lqzlc\" (UID: \"4cc7ebc9-1a3e-41e8-a0c3-269d425ec721\") " pod="openstack/nova-cell0-cell-mapping-lqzlc" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.066918 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cc7ebc9-1a3e-41e8-a0c3-269d425ec721-scripts\") pod \"nova-cell0-cell-mapping-lqzlc\" (UID: \"4cc7ebc9-1a3e-41e8-a0c3-269d425ec721\") " pod="openstack/nova-cell0-cell-mapping-lqzlc" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.152020 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5b9775877b-mcc82" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.168471 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cc7ebc9-1a3e-41e8-a0c3-269d425ec721-config-data\") pod \"nova-cell0-cell-mapping-lqzlc\" (UID: \"4cc7ebc9-1a3e-41e8-a0c3-269d425ec721\") " pod="openstack/nova-cell0-cell-mapping-lqzlc" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.168552 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cc7ebc9-1a3e-41e8-a0c3-269d425ec721-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-lqzlc\" (UID: \"4cc7ebc9-1a3e-41e8-a0c3-269d425ec721\") " pod="openstack/nova-cell0-cell-mapping-lqzlc" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.168699 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cc7ebc9-1a3e-41e8-a0c3-269d425ec721-scripts\") pod \"nova-cell0-cell-mapping-lqzlc\" (UID: \"4cc7ebc9-1a3e-41e8-a0c3-269d425ec721\") " pod="openstack/nova-cell0-cell-mapping-lqzlc" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.168765 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79zd7\" (UniqueName: \"kubernetes.io/projected/4cc7ebc9-1a3e-41e8-a0c3-269d425ec721-kube-api-access-79zd7\") pod \"nova-cell0-cell-mapping-lqzlc\" (UID: \"4cc7ebc9-1a3e-41e8-a0c3-269d425ec721\") " pod="openstack/nova-cell0-cell-mapping-lqzlc" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.174678 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cc7ebc9-1a3e-41e8-a0c3-269d425ec721-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-lqzlc\" (UID: \"4cc7ebc9-1a3e-41e8-a0c3-269d425ec721\") " pod="openstack/nova-cell0-cell-mapping-lqzlc" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.175112 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cc7ebc9-1a3e-41e8-a0c3-269d425ec721-config-data\") pod \"nova-cell0-cell-mapping-lqzlc\" (UID: \"4cc7ebc9-1a3e-41e8-a0c3-269d425ec721\") " pod="openstack/nova-cell0-cell-mapping-lqzlc" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.182563 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cc7ebc9-1a3e-41e8-a0c3-269d425ec721-scripts\") pod \"nova-cell0-cell-mapping-lqzlc\" (UID: \"4cc7ebc9-1a3e-41e8-a0c3-269d425ec721\") " pod="openstack/nova-cell0-cell-mapping-lqzlc" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.194497 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79zd7\" (UniqueName: \"kubernetes.io/projected/4cc7ebc9-1a3e-41e8-a0c3-269d425ec721-kube-api-access-79zd7\") pod \"nova-cell0-cell-mapping-lqzlc\" (UID: \"4cc7ebc9-1a3e-41e8-a0c3-269d425ec721\") " pod="openstack/nova-cell0-cell-mapping-lqzlc" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.212671 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b9775877b-mcc82"] Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.226158 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5b9775877b-mcc82"] Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.282288 4606 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.284177 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.289352 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.330150 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-lqzlc" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.330487 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.373383 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bab3c2a-ca1c-4288-9854-787bab9b6eeb-config-data\") pod \"nova-scheduler-0\" (UID: \"6bab3c2a-ca1c-4288-9854-787bab9b6eeb\") " pod="openstack/nova-scheduler-0" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.373519 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bab3c2a-ca1c-4288-9854-787bab9b6eeb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6bab3c2a-ca1c-4288-9854-787bab9b6eeb\") " pod="openstack/nova-scheduler-0" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.373548 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fhjw\" (UniqueName: \"kubernetes.io/projected/6bab3c2a-ca1c-4288-9854-787bab9b6eeb-kube-api-access-7fhjw\") pod \"nova-scheduler-0\" (UID: \"6bab3c2a-ca1c-4288-9854-787bab9b6eeb\") " pod="openstack/nova-scheduler-0" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.433276 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.434850 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.442821 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.474276 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.476281 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bab3c2a-ca1c-4288-9854-787bab9b6eeb-config-data\") pod \"nova-scheduler-0\" (UID: \"6bab3c2a-ca1c-4288-9854-787bab9b6eeb\") " pod="openstack/nova-scheduler-0" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.476388 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab482f4b-c0ec-4324-b7cf-1ad866ab83e9-config-data\") pod \"nova-metadata-0\" (UID: \"ab482f4b-c0ec-4324-b7cf-1ad866ab83e9\") " pod="openstack/nova-metadata-0" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.476491 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsm9b\" (UniqueName: \"kubernetes.io/projected/ab482f4b-c0ec-4324-b7cf-1ad866ab83e9-kube-api-access-rsm9b\") pod \"nova-metadata-0\" (UID: \"ab482f4b-c0ec-4324-b7cf-1ad866ab83e9\") " pod="openstack/nova-metadata-0" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.476608 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bab3c2a-ca1c-4288-9854-787bab9b6eeb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6bab3c2a-ca1c-4288-9854-787bab9b6eeb\") " pod="openstack/nova-scheduler-0" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.476644 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fhjw\" (UniqueName: \"kubernetes.io/projected/6bab3c2a-ca1c-4288-9854-787bab9b6eeb-kube-api-access-7fhjw\") pod \"nova-scheduler-0\" (UID: \"6bab3c2a-ca1c-4288-9854-787bab9b6eeb\") " pod="openstack/nova-scheduler-0" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.476678 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab482f4b-c0ec-4324-b7cf-1ad866ab83e9-logs\") pod \"nova-metadata-0\" (UID: \"ab482f4b-c0ec-4324-b7cf-1ad866ab83e9\") " pod="openstack/nova-metadata-0" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.476717 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab482f4b-c0ec-4324-b7cf-1ad866ab83e9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ab482f4b-c0ec-4324-b7cf-1ad866ab83e9\") " pod="openstack/nova-metadata-0" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.482492 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bab3c2a-ca1c-4288-9854-787bab9b6eeb-config-data\") pod \"nova-scheduler-0\" (UID: \"6bab3c2a-ca1c-4288-9854-787bab9b6eeb\") " pod="openstack/nova-scheduler-0" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.483433 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6bab3c2a-ca1c-4288-9854-787bab9b6eeb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6bab3c2a-ca1c-4288-9854-787bab9b6eeb\") " pod="openstack/nova-scheduler-0" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.525348 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fhjw\" (UniqueName: \"kubernetes.io/projected/6bab3c2a-ca1c-4288-9854-787bab9b6eeb-kube-api-access-7fhjw\") pod \"nova-scheduler-0\" (UID: \"6bab3c2a-ca1c-4288-9854-787bab9b6eeb\") " pod="openstack/nova-scheduler-0" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.577705 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsm9b\" (UniqueName: \"kubernetes.io/projected/ab482f4b-c0ec-4324-b7cf-1ad866ab83e9-kube-api-access-rsm9b\") pod \"nova-metadata-0\" (UID: \"ab482f4b-c0ec-4324-b7cf-1ad866ab83e9\") " pod="openstack/nova-metadata-0" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.577769 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab482f4b-c0ec-4324-b7cf-1ad866ab83e9-logs\") pod \"nova-metadata-0\" (UID: \"ab482f4b-c0ec-4324-b7cf-1ad866ab83e9\") " pod="openstack/nova-metadata-0" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.577795 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab482f4b-c0ec-4324-b7cf-1ad866ab83e9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ab482f4b-c0ec-4324-b7cf-1ad866ab83e9\") " pod="openstack/nova-metadata-0" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.577878 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab482f4b-c0ec-4324-b7cf-1ad866ab83e9-config-data\") pod \"nova-metadata-0\" (UID: \"ab482f4b-c0ec-4324-b7cf-1ad866ab83e9\") " pod="openstack/nova-metadata-0" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.578771 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab482f4b-c0ec-4324-b7cf-1ad866ab83e9-logs\") pod \"nova-metadata-0\" (UID: \"ab482f4b-c0ec-4324-b7cf-1ad866ab83e9\") " pod="openstack/nova-metadata-0" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.588231 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab482f4b-c0ec-4324-b7cf-1ad866ab83e9-config-data\") pod \"nova-metadata-0\" (UID: \"ab482f4b-c0ec-4324-b7cf-1ad866ab83e9\") " pod="openstack/nova-metadata-0" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.594818 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab482f4b-c0ec-4324-b7cf-1ad866ab83e9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ab482f4b-c0ec-4324-b7cf-1ad866ab83e9\") " pod="openstack/nova-metadata-0" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.624021 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.635055 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsm9b\" (UniqueName: \"kubernetes.io/projected/ab482f4b-c0ec-4324-b7cf-1ad866ab83e9-kube-api-access-rsm9b\") pod \"nova-metadata-0\" (UID: \"ab482f4b-c0ec-4324-b7cf-1ad866ab83e9\") " pod="openstack/nova-metadata-0" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.677207 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.682467 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.691272 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.781158 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.787104 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba10cfdb-879f-426d-bda9-60cbd46e449c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ba10cfdb-879f-426d-bda9-60cbd46e449c\") " pod="openstack/nova-api-0" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.787164 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba10cfdb-879f-426d-bda9-60cbd46e449c-config-data\") pod \"nova-api-0\" (UID: \"ba10cfdb-879f-426d-bda9-60cbd46e449c\") " pod="openstack/nova-api-0" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.787314 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4tg7\" (UniqueName: \"kubernetes.io/projected/ba10cfdb-879f-426d-bda9-60cbd46e449c-kube-api-access-d4tg7\") pod \"nova-api-0\" (UID: \"ba10cfdb-879f-426d-bda9-60cbd46e449c\") " pod="openstack/nova-api-0" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.787410 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba10cfdb-879f-426d-bda9-60cbd46e449c-logs\") pod \"nova-api-0\" (UID: \"ba10cfdb-879f-426d-bda9-60cbd46e449c\") " pod="openstack/nova-api-0" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.796947 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-l26v9"] Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.856782 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.856929 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-l26v9" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.897722 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba10cfdb-879f-426d-bda9-60cbd46e449c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ba10cfdb-879f-426d-bda9-60cbd46e449c\") " pod="openstack/nova-api-0" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.897974 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba10cfdb-879f-426d-bda9-60cbd46e449c-config-data\") pod \"nova-api-0\" (UID: \"ba10cfdb-879f-426d-bda9-60cbd46e449c\") " pod="openstack/nova-api-0" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.898146 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4tg7\" (UniqueName: \"kubernetes.io/projected/ba10cfdb-879f-426d-bda9-60cbd46e449c-kube-api-access-d4tg7\") pod \"nova-api-0\" (UID: \"ba10cfdb-879f-426d-bda9-60cbd46e449c\") " pod="openstack/nova-api-0" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.898293 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba10cfdb-879f-426d-bda9-60cbd46e449c-logs\") pod \"nova-api-0\" (UID: \"ba10cfdb-879f-426d-bda9-60cbd46e449c\") " pod="openstack/nova-api-0" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.898794 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba10cfdb-879f-426d-bda9-60cbd46e449c-logs\") pod \"nova-api-0\" (UID: \"ba10cfdb-879f-426d-bda9-60cbd46e449c\") " pod="openstack/nova-api-0" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.910441 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba10cfdb-879f-426d-bda9-60cbd46e449c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ba10cfdb-879f-426d-bda9-60cbd46e449c\") " pod="openstack/nova-api-0" Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.910508 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-l26v9"] Jan 23 17:12:07 crc kubenswrapper[4606]: I0123 17:12:07.944436 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba10cfdb-879f-426d-bda9-60cbd46e449c-config-data\") pod \"nova-api-0\" (UID: \"ba10cfdb-879f-426d-bda9-60cbd46e449c\") " pod="openstack/nova-api-0" Jan 23 17:12:08 crc kubenswrapper[4606]: I0123 17:12:07.999988 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2813a42-8848-431d-aee7-f6b572becd0f-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-l26v9\" (UID: \"f2813a42-8848-431d-aee7-f6b572becd0f\") " pod="openstack/dnsmasq-dns-757b4f8459-l26v9" Jan 23 17:12:08 crc kubenswrapper[4606]: I0123 17:12:08.001214 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2813a42-8848-431d-aee7-f6b572becd0f-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-l26v9\" (UID: \"f2813a42-8848-431d-aee7-f6b572becd0f\") " pod="openstack/dnsmasq-dns-757b4f8459-l26v9" Jan 23 17:12:08 crc kubenswrapper[4606]: I0123 17:12:08.001347 4606 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2813a42-8848-431d-aee7-f6b572becd0f-config\") pod \"dnsmasq-dns-757b4f8459-l26v9\" (UID: \"f2813a42-8848-431d-aee7-f6b572becd0f\") " pod="openstack/dnsmasq-dns-757b4f8459-l26v9" Jan 23 17:12:08 crc kubenswrapper[4606]: I0123 17:12:08.001479 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2813a42-8848-431d-aee7-f6b572becd0f-dns-svc\") pod \"dnsmasq-dns-757b4f8459-l26v9\" (UID: \"f2813a42-8848-431d-aee7-f6b572becd0f\") " pod="openstack/dnsmasq-dns-757b4f8459-l26v9" Jan 23 17:12:08 crc kubenswrapper[4606]: I0123 17:12:08.001561 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2813a42-8848-431d-aee7-f6b572becd0f-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-l26v9\" (UID: \"f2813a42-8848-431d-aee7-f6b572becd0f\") " pod="openstack/dnsmasq-dns-757b4f8459-l26v9" Jan 23 17:12:08 crc kubenswrapper[4606]: I0123 17:12:08.001660 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nf97\" (UniqueName: \"kubernetes.io/projected/f2813a42-8848-431d-aee7-f6b572becd0f-kube-api-access-6nf97\") pod \"dnsmasq-dns-757b4f8459-l26v9\" (UID: \"f2813a42-8848-431d-aee7-f6b572becd0f\") " pod="openstack/dnsmasq-dns-757b4f8459-l26v9" Jan 23 17:12:08 crc kubenswrapper[4606]: I0123 17:12:08.071696 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 17:12:08 crc kubenswrapper[4606]: I0123 17:12:08.073310 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 17:12:08 crc kubenswrapper[4606]: I0123 17:12:08.106123 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 23 17:12:08 crc kubenswrapper[4606]: I0123 17:12:08.107664 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2813a42-8848-431d-aee7-f6b572becd0f-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-l26v9\" (UID: \"f2813a42-8848-431d-aee7-f6b572becd0f\") " pod="openstack/dnsmasq-dns-757b4f8459-l26v9" Jan 23 17:12:08 crc kubenswrapper[4606]: I0123 17:12:08.130749 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nf97\" (UniqueName: \"kubernetes.io/projected/f2813a42-8848-431d-aee7-f6b572becd0f-kube-api-access-6nf97\") pod \"dnsmasq-dns-757b4f8459-l26v9\" (UID: \"f2813a42-8848-431d-aee7-f6b572becd0f\") " pod="openstack/dnsmasq-dns-757b4f8459-l26v9" Jan 23 17:12:08 crc kubenswrapper[4606]: I0123 17:12:08.108475 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2813a42-8848-431d-aee7-f6b572becd0f-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-l26v9\" (UID: \"f2813a42-8848-431d-aee7-f6b572becd0f\") " pod="openstack/dnsmasq-dns-757b4f8459-l26v9" Jan 23 17:12:08 crc kubenswrapper[4606]: I0123 17:12:08.128976 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 17:12:08 crc kubenswrapper[4606]: I0123 17:12:08.131350 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2813a42-8848-431d-aee7-f6b572becd0f-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-l26v9\" (UID: \"f2813a42-8848-431d-aee7-f6b572becd0f\") " pod="openstack/dnsmasq-dns-757b4f8459-l26v9" Jan 23 17:12:08 crc kubenswrapper[4606]: I0123 17:12:08.132280 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2813a42-8848-431d-aee7-f6b572becd0f-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-l26v9\" (UID: \"f2813a42-8848-431d-aee7-f6b572becd0f\") " pod="openstack/dnsmasq-dns-757b4f8459-l26v9" Jan 23 17:12:08 crc kubenswrapper[4606]: I0123 17:12:08.122953 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4tg7\" (UniqueName: \"kubernetes.io/projected/ba10cfdb-879f-426d-bda9-60cbd46e449c-kube-api-access-d4tg7\") pod \"nova-api-0\" (UID: \"ba10cfdb-879f-426d-bda9-60cbd46e449c\") " pod="openstack/nova-api-0" Jan 23 17:12:08 crc kubenswrapper[4606]: I0123 17:12:08.132115 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2813a42-8848-431d-aee7-f6b572becd0f-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-l26v9\" (UID: \"f2813a42-8848-431d-aee7-f6b572becd0f\") " pod="openstack/dnsmasq-dns-757b4f8459-l26v9" Jan 23 17:12:08 crc kubenswrapper[4606]: I0123 17:12:08.132607 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2813a42-8848-431d-aee7-f6b572becd0f-config\") pod \"dnsmasq-dns-757b4f8459-l26v9\" (UID: \"f2813a42-8848-431d-aee7-f6b572becd0f\") " pod="openstack/dnsmasq-dns-757b4f8459-l26v9" Jan 23 17:12:08 crc kubenswrapper[4606]: I0123 17:12:08.132920 4606 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2813a42-8848-431d-aee7-f6b572becd0f-dns-svc\") pod \"dnsmasq-dns-757b4f8459-l26v9\" (UID: \"f2813a42-8848-431d-aee7-f6b572becd0f\") " pod="openstack/dnsmasq-dns-757b4f8459-l26v9" Jan 23 17:12:08 crc kubenswrapper[4606]: I0123 17:12:08.133332 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2813a42-8848-431d-aee7-f6b572becd0f-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-l26v9\" (UID: \"f2813a42-8848-431d-aee7-f6b572becd0f\") " pod="openstack/dnsmasq-dns-757b4f8459-l26v9" Jan 23 17:12:08 crc kubenswrapper[4606]: I0123 17:12:08.133877 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2813a42-8848-431d-aee7-f6b572becd0f-dns-svc\") pod \"dnsmasq-dns-757b4f8459-l26v9\" (UID: \"f2813a42-8848-431d-aee7-f6b572becd0f\") " pod="openstack/dnsmasq-dns-757b4f8459-l26v9" Jan 23 17:12:08 crc kubenswrapper[4606]: I0123 17:12:08.134455 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2813a42-8848-431d-aee7-f6b572becd0f-config\") pod \"dnsmasq-dns-757b4f8459-l26v9\" (UID: \"f2813a42-8848-431d-aee7-f6b572becd0f\") " pod="openstack/dnsmasq-dns-757b4f8459-l26v9" Jan 23 17:12:08 crc kubenswrapper[4606]: I0123 17:12:08.198516 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nf97\" (UniqueName: \"kubernetes.io/projected/f2813a42-8848-431d-aee7-f6b572becd0f-kube-api-access-6nf97\") pod \"dnsmasq-dns-757b4f8459-l26v9\" (UID: \"f2813a42-8848-431d-aee7-f6b572becd0f\") " pod="openstack/dnsmasq-dns-757b4f8459-l26v9" Jan 23 17:12:08 crc kubenswrapper[4606]: I0123 17:12:08.234723 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa1b3621-0a0c-4e84-8364-abe06b2bc4c8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa1b3621-0a0c-4e84-8364-abe06b2bc4c8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 17:12:08 crc kubenswrapper[4606]: I0123 17:12:08.235191 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz6lk\" (UniqueName: \"kubernetes.io/projected/fa1b3621-0a0c-4e84-8364-abe06b2bc4c8-kube-api-access-pz6lk\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa1b3621-0a0c-4e84-8364-abe06b2bc4c8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 17:12:08 crc kubenswrapper[4606]: I0123 17:12:08.235338 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa1b3621-0a0c-4e84-8364-abe06b2bc4c8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa1b3621-0a0c-4e84-8364-abe06b2bc4c8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 17:12:08 crc kubenswrapper[4606]: I0123 17:12:08.279897 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-lqzlc"] Jan 23 17:12:08 crc kubenswrapper[4606]: I0123 17:12:08.338327 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa1b3621-0a0c-4e84-8364-abe06b2bc4c8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa1b3621-0a0c-4e84-8364-abe06b2bc4c8\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 23 17:12:08 crc kubenswrapper[4606]: I0123 17:12:08.338498 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz6lk\" (UniqueName: \"kubernetes.io/projected/fa1b3621-0a0c-4e84-8364-abe06b2bc4c8-kube-api-access-pz6lk\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa1b3621-0a0c-4e84-8364-abe06b2bc4c8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 17:12:08 crc kubenswrapper[4606]: I0123 17:12:08.338705 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa1b3621-0a0c-4e84-8364-abe06b2bc4c8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa1b3621-0a0c-4e84-8364-abe06b2bc4c8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 17:12:08 crc kubenswrapper[4606]: I0123 17:12:08.351272 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa1b3621-0a0c-4e84-8364-abe06b2bc4c8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa1b3621-0a0c-4e84-8364-abe06b2bc4c8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 17:12:08 crc kubenswrapper[4606]: I0123 17:12:08.367238 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 17:12:08 crc kubenswrapper[4606]: I0123 17:12:08.367515 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa1b3621-0a0c-4e84-8364-abe06b2bc4c8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa1b3621-0a0c-4e84-8364-abe06b2bc4c8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 17:12:08 crc kubenswrapper[4606]: I0123 17:12:08.367975 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-l26v9" Jan 23 17:12:08 crc kubenswrapper[4606]: I0123 17:12:08.385779 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dd13584-a1b9-4891-85a7-01afd519f746" path="/var/lib/kubelet/pods/1dd13584-a1b9-4891-85a7-01afd519f746/volumes" Jan 23 17:12:08 crc kubenswrapper[4606]: I0123 17:12:08.387466 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz6lk\" (UniqueName: \"kubernetes.io/projected/fa1b3621-0a0c-4e84-8364-abe06b2bc4c8-kube-api-access-pz6lk\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa1b3621-0a0c-4e84-8364-abe06b2bc4c8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 17:12:08 crc kubenswrapper[4606]: I0123 17:12:08.466026 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 17:12:08 crc kubenswrapper[4606]: I0123 17:12:08.743209 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 17:12:08 crc kubenswrapper[4606]: W0123 17:12:08.768693 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6bab3c2a_ca1c_4288_9854_787bab9b6eeb.slice/crio-f5bfff6958041a53a38685e651ff2bb8c505b5c62ce6eaf9b28f11a94cbb9f5f WatchSource:0}: Error finding container f5bfff6958041a53a38685e651ff2bb8c505b5c62ce6eaf9b28f11a94cbb9f5f: Status 404 returned error can't find the container with id f5bfff6958041a53a38685e651ff2bb8c505b5c62ce6eaf9b28f11a94cbb9f5f Jan 23 17:12:08 crc kubenswrapper[4606]: I0123 17:12:08.795323 4606 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 17:12:08 crc kubenswrapper[4606]: I0123 17:12:08.954906 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 17:12:09 crc kubenswrapper[4606]: W0123 17:12:09.134833 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba10cfdb_879f_426d_bda9_60cbd46e449c.slice/crio-214aafd5701c4ef30d29f03c1c15dddd245ee323710346d85900ccbf855ca97e WatchSource:0}: Error finding container 214aafd5701c4ef30d29f03c1c15dddd245ee323710346d85900ccbf855ca97e: Status 404 returned error can't find the container with id 214aafd5701c4ef30d29f03c1c15dddd245ee323710346d85900ccbf855ca97e Jan 23 17:12:09 crc kubenswrapper[4606]: I0123 17:12:09.138705 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 17:12:09 crc kubenswrapper[4606]: I0123 17:12:09.200108 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-l26v9"] Jan 23 17:12:09 crc kubenswrapper[4606]: I0123 17:12:09.201032 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-lqzlc" event={"ID":"4cc7ebc9-1a3e-41e8-a0c3-269d425ec721","Type":"ContainerStarted","Data":"75c38c5318faa4f1d0090ee9d7aae13fce60b6e071df831d6f7ecd75532637a3"} Jan 23 17:12:09 crc kubenswrapper[4606]: I0123 17:12:09.201092 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-lqzlc" event={"ID":"4cc7ebc9-1a3e-41e8-a0c3-269d425ec721","Type":"ContainerStarted","Data":"ca7ff647b03aea6c75f67ffbd4fbef60671f14e1184134f9397de7923ffec1bd"} Jan 23 17:12:09 crc kubenswrapper[4606]: I0123 17:12:09.205747 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ba10cfdb-879f-426d-bda9-60cbd46e449c","Type":"ContainerStarted","Data":"214aafd5701c4ef30d29f03c1c15dddd245ee323710346d85900ccbf855ca97e"} Jan 23 17:12:09 crc kubenswrapper[4606]: I0123 17:12:09.212553 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab482f4b-c0ec-4324-b7cf-1ad866ab83e9","Type":"ContainerStarted","Data":"b4d2b1cf0d4e3560c67ac4f3cb24656d20bd871a9f7768b0cd2478aa7fb357cb"} Jan 23 17:12:09 crc kubenswrapper[4606]: I0123 17:12:09.214929 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6bab3c2a-ca1c-4288-9854-787bab9b6eeb","Type":"ContainerStarted","Data":"f5bfff6958041a53a38685e651ff2bb8c505b5c62ce6eaf9b28f11a94cbb9f5f"} Jan 23 17:12:09 crc kubenswrapper[4606]: I0123 17:12:09.229262 4606 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-lqzlc" podStartSLOduration=3.229243208 podStartE2EDuration="3.229243208s" podCreationTimestamp="2026-01-23 17:12:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:12:09.219333664 +0000 UTC m=+1163.219526060" watchObservedRunningTime="2026-01-23 17:12:09.229243208 +0000 UTC m=+1163.229435604" Jan 23 17:12:09 crc kubenswrapper[4606]: I0123 17:12:09.240526 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-l26v9" event={"ID":"f2813a42-8848-431d-aee7-f6b572becd0f","Type":"ContainerStarted","Data":"0833eaaf39332a56562ef1de5cbe092de3e5049ad14b5829973c2af80ecaea45"} Jan 23 17:12:09 crc kubenswrapper[4606]: I0123 17:12:09.305455 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 17:12:09 crc kubenswrapper[4606]: I0123 17:12:09.330428 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-m76hq"] Jan 23 17:12:09 crc kubenswrapper[4606]: I0123 17:12:09.331897 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-m76hq" Jan 23 17:12:09 crc kubenswrapper[4606]: I0123 17:12:09.336140 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 23 17:12:09 crc kubenswrapper[4606]: I0123 17:12:09.336349 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 23 17:12:09 crc kubenswrapper[4606]: I0123 17:12:09.344697 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-m76hq"] Jan 23 17:12:09 crc kubenswrapper[4606]: I0123 17:12:09.397927 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4b4e288-0565-452e-b141-fa028539d66e-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-m76hq\" (UID: \"d4b4e288-0565-452e-b141-fa028539d66e\") " pod="openstack/nova-cell1-conductor-db-sync-m76hq" Jan 23 17:12:09 crc kubenswrapper[4606]: I0123 17:12:09.397982 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfqgw\" (UniqueName: \"kubernetes.io/projected/d4b4e288-0565-452e-b141-fa028539d66e-kube-api-access-mfqgw\") pod \"nova-cell1-conductor-db-sync-m76hq\" (UID: \"d4b4e288-0565-452e-b141-fa028539d66e\") " pod="openstack/nova-cell1-conductor-db-sync-m76hq" Jan 23 17:12:09 crc kubenswrapper[4606]: I0123 17:12:09.398086 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4b4e288-0565-452e-b141-fa028539d66e-config-data\") pod \"nova-cell1-conductor-db-sync-m76hq\" (UID: \"d4b4e288-0565-452e-b141-fa028539d66e\") " pod="openstack/nova-cell1-conductor-db-sync-m76hq" Jan 23 17:12:09 crc kubenswrapper[4606]: I0123 17:12:09.398218 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4b4e288-0565-452e-b141-fa028539d66e-scripts\") pod \"nova-cell1-conductor-db-sync-m76hq\" (UID: \"d4b4e288-0565-452e-b141-fa028539d66e\") " pod="openstack/nova-cell1-conductor-db-sync-m76hq" Jan 23 17:12:09 crc 
kubenswrapper[4606]: I0123 17:12:09.502619 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4b4e288-0565-452e-b141-fa028539d66e-scripts\") pod \"nova-cell1-conductor-db-sync-m76hq\" (UID: \"d4b4e288-0565-452e-b141-fa028539d66e\") " pod="openstack/nova-cell1-conductor-db-sync-m76hq" Jan 23 17:12:09 crc kubenswrapper[4606]: I0123 17:12:09.502803 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4b4e288-0565-452e-b141-fa028539d66e-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-m76hq\" (UID: \"d4b4e288-0565-452e-b141-fa028539d66e\") " pod="openstack/nova-cell1-conductor-db-sync-m76hq" Jan 23 17:12:09 crc kubenswrapper[4606]: I0123 17:12:09.502860 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfqgw\" (UniqueName: \"kubernetes.io/projected/d4b4e288-0565-452e-b141-fa028539d66e-kube-api-access-mfqgw\") pod \"nova-cell1-conductor-db-sync-m76hq\" (UID: \"d4b4e288-0565-452e-b141-fa028539d66e\") " pod="openstack/nova-cell1-conductor-db-sync-m76hq" Jan 23 17:12:09 crc kubenswrapper[4606]: I0123 17:12:09.502947 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4b4e288-0565-452e-b141-fa028539d66e-config-data\") pod \"nova-cell1-conductor-db-sync-m76hq\" (UID: \"d4b4e288-0565-452e-b141-fa028539d66e\") " pod="openstack/nova-cell1-conductor-db-sync-m76hq" Jan 23 17:12:09 crc kubenswrapper[4606]: I0123 17:12:09.508240 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4b4e288-0565-452e-b141-fa028539d66e-scripts\") pod \"nova-cell1-conductor-db-sync-m76hq\" (UID: \"d4b4e288-0565-452e-b141-fa028539d66e\") " pod="openstack/nova-cell1-conductor-db-sync-m76hq" Jan 23 17:12:09 crc kubenswrapper[4606]: I0123 17:12:09.508394 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4b4e288-0565-452e-b141-fa028539d66e-config-data\") pod \"nova-cell1-conductor-db-sync-m76hq\" (UID: \"d4b4e288-0565-452e-b141-fa028539d66e\") " pod="openstack/nova-cell1-conductor-db-sync-m76hq" Jan 23 17:12:09 crc kubenswrapper[4606]: I0123 17:12:09.508524 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4b4e288-0565-452e-b141-fa028539d66e-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-m76hq\" (UID: \"d4b4e288-0565-452e-b141-fa028539d66e\") " pod="openstack/nova-cell1-conductor-db-sync-m76hq" Jan 23 17:12:09 crc kubenswrapper[4606]: I0123 17:12:09.519959 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfqgw\" (UniqueName: \"kubernetes.io/projected/d4b4e288-0565-452e-b141-fa028539d66e-kube-api-access-mfqgw\") pod \"nova-cell1-conductor-db-sync-m76hq\" (UID: \"d4b4e288-0565-452e-b141-fa028539d66e\") " pod="openstack/nova-cell1-conductor-db-sync-m76hq" Jan 23 17:12:09 crc kubenswrapper[4606]: I0123 17:12:09.710160 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-m76hq" Jan 23 17:12:10 crc kubenswrapper[4606]: I0123 17:12:10.257066 4606 generic.go:334] "Generic (PLEG): container finished" podID="f2813a42-8848-431d-aee7-f6b572becd0f" containerID="68fa7d2582e013a1276b5b38f9165a0fa974b2a8287bcdc1cb3fb94f0ed5d22b" exitCode=0 Jan 23 17:12:10 crc kubenswrapper[4606]: I0123 17:12:10.257371 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-l26v9" event={"ID":"f2813a42-8848-431d-aee7-f6b572becd0f","Type":"ContainerDied","Data":"68fa7d2582e013a1276b5b38f9165a0fa974b2a8287bcdc1cb3fb94f0ed5d22b"} Jan 23 17:12:10 crc kubenswrapper[4606]: I0123 17:12:10.268321 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fa1b3621-0a0c-4e84-8364-abe06b2bc4c8","Type":"ContainerStarted","Data":"18969011dd8115f1d2090d9cf4db81f4de425c4989c1c9690cc29512cff25d06"} Jan 23 17:12:10 crc kubenswrapper[4606]: I0123 17:12:10.344548 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-m76hq"] Jan 23 17:12:11 crc kubenswrapper[4606]: I0123 17:12:11.278766 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-m76hq" event={"ID":"d4b4e288-0565-452e-b141-fa028539d66e","Type":"ContainerStarted","Data":"3854ace33dfd0bb7bdd347ac91d1e5a7ea49037dd59592e0c5af03df237e0727"} Jan 23 17:12:11 crc kubenswrapper[4606]: I0123 17:12:11.572814 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 17:12:11 crc kubenswrapper[4606]: I0123 17:12:11.596620 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 17:12:13 crc kubenswrapper[4606]: I0123 17:12:13.305123 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-l26v9" event={"ID":"f2813a42-8848-431d-aee7-f6b572becd0f","Type":"ContainerStarted","Data":"bf82b756d48c2703138c002894cb494b70cc6350ee9e113f1150aa726c9564b2"} Jan 23 17:12:13 crc kubenswrapper[4606]: I0123 17:12:13.307001 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-l26v9" Jan 23 17:12:13 crc kubenswrapper[4606]: I0123 17:12:13.308265 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-m76hq" event={"ID":"d4b4e288-0565-452e-b141-fa028539d66e","Type":"ContainerStarted","Data":"95d4a6b67ebafbaafe1a527206a9b06c06bd17fab3abfd11617056641f14588a"} Jan 23 17:12:13 crc kubenswrapper[4606]: I0123 17:12:13.310161 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fa1b3621-0a0c-4e84-8364-abe06b2bc4c8","Type":"ContainerStarted","Data":"a458d8b582e950671fbfa8a4d5b629a0b68b6976404bbf4aa757a37a5c31eaea"} Jan 23 17:12:13 crc kubenswrapper[4606]: I0123 17:12:13.310282 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="fa1b3621-0a0c-4e84-8364-abe06b2bc4c8" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://a458d8b582e950671fbfa8a4d5b629a0b68b6976404bbf4aa757a37a5c31eaea" gracePeriod=30 Jan 23 17:12:13 crc kubenswrapper[4606]: I0123 17:12:13.329038 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-l26v9" podStartSLOduration=6.329016516 podStartE2EDuration="6.329016516s" podCreationTimestamp="2026-01-23 17:12:07 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:12:13.326583608 +0000 UTC m=+1167.326776014" watchObservedRunningTime="2026-01-23 17:12:13.329016516 +0000 UTC m=+1167.329208922" Jan 23 17:12:13 crc kubenswrapper[4606]: I0123 17:12:13.333901 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ba10cfdb-879f-426d-bda9-60cbd46e449c","Type":"ContainerStarted","Data":"cb7489f0a801d1f56f3056dd0bc4a1d84bdf237032db8c20e01d7e6d36da210c"} Jan 23 17:12:13 crc kubenswrapper[4606]: I0123 17:12:13.333980 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ba10cfdb-879f-426d-bda9-60cbd46e449c","Type":"ContainerStarted","Data":"b053a1cca7853d500963d30d9d52ba513f27dc44de30d7766f3e887879dc16d3"} Jan 23 17:12:13 crc kubenswrapper[4606]: I0123 17:12:13.337772 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab482f4b-c0ec-4324-b7cf-1ad866ab83e9","Type":"ContainerStarted","Data":"54c6c49f29dceda7652d45f5097889cd07b5b19d2f7ba1d24b601f28876adc7a"} Jan 23 17:12:13 crc kubenswrapper[4606]: I0123 17:12:13.337817 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab482f4b-c0ec-4324-b7cf-1ad866ab83e9","Type":"ContainerStarted","Data":"23282f0a51e2fd2d447d8b236935dbc36e298fcac4d384dea1370f3ae3629bbd"} Jan 23 17:12:13 crc kubenswrapper[4606]: I0123 17:12:13.337860 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ab482f4b-c0ec-4324-b7cf-1ad866ab83e9" containerName="nova-metadata-log" containerID="cri-o://23282f0a51e2fd2d447d8b236935dbc36e298fcac4d384dea1370f3ae3629bbd" gracePeriod=30 Jan 23 17:12:13 crc kubenswrapper[4606]: I0123 17:12:13.337881 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ab482f4b-c0ec-4324-b7cf-1ad866ab83e9" containerName="nova-metadata-metadata" containerID="cri-o://54c6c49f29dceda7652d45f5097889cd07b5b19d2f7ba1d24b601f28876adc7a" gracePeriod=30 Jan 23 17:12:13 crc kubenswrapper[4606]: I0123 17:12:13.343295 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6bab3c2a-ca1c-4288-9854-787bab9b6eeb","Type":"ContainerStarted","Data":"84bfda0bb7f587d2d5cf45a3743a7077a970d71283888e6f625dc3d64e8c5a1e"} Jan 23 17:12:13 crc kubenswrapper[4606]: I0123 17:12:13.352693 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.230936873 podStartE2EDuration="6.352671283s" podCreationTimestamp="2026-01-23 17:12:07 +0000 UTC" firstStartedPulling="2026-01-23 17:12:09.282092062 +0000 UTC m=+1163.282284468" lastFinishedPulling="2026-01-23 17:12:12.403826472 +0000 UTC m=+1166.404018878" observedRunningTime="2026-01-23 17:12:13.351453364 +0000 UTC m=+1167.351645780" watchObservedRunningTime="2026-01-23 17:12:13.352671283 +0000 UTC m=+1167.352863679" Jan 23 17:12:13 crc kubenswrapper[4606]: I0123 17:12:13.379704 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-m76hq" podStartSLOduration=4.379678531 podStartE2EDuration="4.379678531s" podCreationTimestamp="2026-01-23 17:12:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 
17:12:13.376647718 +0000 UTC m=+1167.376840124" watchObservedRunningTime="2026-01-23 17:12:13.379678531 +0000 UTC m=+1167.379870937" Jan 23 17:12:13 crc kubenswrapper[4606]: I0123 17:12:13.404039 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.791976021 podStartE2EDuration="6.404019985s" podCreationTimestamp="2026-01-23 17:12:07 +0000 UTC" firstStartedPulling="2026-01-23 17:12:08.795027605 +0000 UTC m=+1162.795220011" lastFinishedPulling="2026-01-23 17:12:12.407071569 +0000 UTC m=+1166.407263975" observedRunningTime="2026-01-23 17:12:13.399953977 +0000 UTC m=+1167.400146383" watchObservedRunningTime="2026-01-23 17:12:13.404019985 +0000 UTC m=+1167.404212391" Jan 23 17:12:13 crc kubenswrapper[4606]: I0123 17:12:13.423617 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.98754783 podStartE2EDuration="6.423601474s" podCreationTimestamp="2026-01-23 17:12:07 +0000 UTC" firstStartedPulling="2026-01-23 17:12:08.967755547 +0000 UTC m=+1162.967947953" lastFinishedPulling="2026-01-23 17:12:12.403809191 +0000 UTC m=+1166.404001597" observedRunningTime="2026-01-23 17:12:13.421396241 +0000 UTC m=+1167.421588647" watchObservedRunningTime="2026-01-23 17:12:13.423601474 +0000 UTC m=+1167.423793880" Jan 23 17:12:13 crc kubenswrapper[4606]: I0123 17:12:13.441774 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.166643813 podStartE2EDuration="6.44175181s" podCreationTimestamp="2026-01-23 17:12:07 +0000 UTC" firstStartedPulling="2026-01-23 17:12:09.136958041 +0000 UTC m=+1163.137150447" lastFinishedPulling="2026-01-23 17:12:12.412066038 +0000 UTC m=+1166.412258444" observedRunningTime="2026-01-23 17:12:13.438890131 +0000 UTC m=+1167.439082537" watchObservedRunningTime="2026-01-23 17:12:13.44175181 +0000 UTC m=+1167.441944216" Jan 23 17:12:13 crc kubenswrapper[4606]: I0123 17:12:13.467700 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 23 17:12:14 crc kubenswrapper[4606]: I0123 17:12:14.354441 4606 generic.go:334] "Generic (PLEG): container finished" podID="ab482f4b-c0ec-4324-b7cf-1ad866ab83e9" containerID="23282f0a51e2fd2d447d8b236935dbc36e298fcac4d384dea1370f3ae3629bbd" exitCode=143 Jan 23 17:12:14 crc kubenswrapper[4606]: I0123 17:12:14.354501 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab482f4b-c0ec-4324-b7cf-1ad866ab83e9","Type":"ContainerDied","Data":"23282f0a51e2fd2d447d8b236935dbc36e298fcac4d384dea1370f3ae3629bbd"} Jan 23 17:12:17 crc kubenswrapper[4606]: I0123 17:12:17.625397 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 23 17:12:17 crc kubenswrapper[4606]: I0123 17:12:17.627194 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 23 17:12:17 crc kubenswrapper[4606]: I0123 17:12:17.677140 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 23 17:12:17 crc kubenswrapper[4606]: I0123 17:12:17.782739 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 17:12:17 crc kubenswrapper[4606]: I0123 17:12:17.782791 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 17:12:18 crc 
kubenswrapper[4606]: I0123 17:12:18.380600 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 17:12:18 crc kubenswrapper[4606]: I0123 17:12:18.380979 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 17:12:18 crc kubenswrapper[4606]: I0123 17:12:18.381020 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-l26v9" Jan 23 17:12:18 crc kubenswrapper[4606]: I0123 17:12:18.409797 4606 generic.go:334] "Generic (PLEG): container finished" podID="4cc7ebc9-1a3e-41e8-a0c3-269d425ec721" containerID="75c38c5318faa4f1d0090ee9d7aae13fce60b6e071df831d6f7ecd75532637a3" exitCode=0 Jan 23 17:12:18 crc kubenswrapper[4606]: I0123 17:12:18.409962 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-lqzlc" event={"ID":"4cc7ebc9-1a3e-41e8-a0c3-269d425ec721","Type":"ContainerDied","Data":"75c38c5318faa4f1d0090ee9d7aae13fce60b6e071df831d6f7ecd75532637a3"} Jan 23 17:12:18 crc kubenswrapper[4606]: I0123 17:12:18.480133 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-j54ls"] Jan 23 17:12:18 crc kubenswrapper[4606]: I0123 17:12:18.480400 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-j54ls" podUID="bd37ba52-7ce1-4151-8764-14a60ee315bc" containerName="dnsmasq-dns" containerID="cri-o://5b962c6a7f2e043d35614b415307bbe01f336609a38fcab54a4344d8a213d4a0" gracePeriod=10 Jan 23 17:12:18 crc kubenswrapper[4606]: I0123 17:12:18.482635 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 23 17:12:19 crc kubenswrapper[4606]: I0123 17:12:19.125171 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-j54ls" Jan 23 17:12:19 crc kubenswrapper[4606]: I0123 17:12:19.162173 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcnbg\" (UniqueName: \"kubernetes.io/projected/bd37ba52-7ce1-4151-8764-14a60ee315bc-kube-api-access-zcnbg\") pod \"bd37ba52-7ce1-4151-8764-14a60ee315bc\" (UID: \"bd37ba52-7ce1-4151-8764-14a60ee315bc\") " Jan 23 17:12:19 crc kubenswrapper[4606]: I0123 17:12:19.162298 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd37ba52-7ce1-4151-8764-14a60ee315bc-dns-swift-storage-0\") pod \"bd37ba52-7ce1-4151-8764-14a60ee315bc\" (UID: \"bd37ba52-7ce1-4151-8764-14a60ee315bc\") " Jan 23 17:12:19 crc kubenswrapper[4606]: I0123 17:12:19.162329 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd37ba52-7ce1-4151-8764-14a60ee315bc-ovsdbserver-sb\") pod \"bd37ba52-7ce1-4151-8764-14a60ee315bc\" (UID: \"bd37ba52-7ce1-4151-8764-14a60ee315bc\") " Jan 23 17:12:19 crc kubenswrapper[4606]: I0123 17:12:19.162366 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd37ba52-7ce1-4151-8764-14a60ee315bc-dns-svc\") pod \"bd37ba52-7ce1-4151-8764-14a60ee315bc\" (UID: \"bd37ba52-7ce1-4151-8764-14a60ee315bc\") " Jan 23 17:12:19 crc kubenswrapper[4606]: I0123 17:12:19.162486 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd37ba52-7ce1-4151-8764-14a60ee315bc-ovsdbserver-nb\") pod \"bd37ba52-7ce1-4151-8764-14a60ee315bc\" (UID: \"bd37ba52-7ce1-4151-8764-14a60ee315bc\") " Jan 23 17:12:19 crc kubenswrapper[4606]: I0123 17:12:19.162511 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd37ba52-7ce1-4151-8764-14a60ee315bc-config\") pod \"bd37ba52-7ce1-4151-8764-14a60ee315bc\" (UID: \"bd37ba52-7ce1-4151-8764-14a60ee315bc\") " Jan 23 17:12:19 crc kubenswrapper[4606]: I0123 17:12:19.256225 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd37ba52-7ce1-4151-8764-14a60ee315bc-kube-api-access-zcnbg" (OuterVolumeSpecName: "kube-api-access-zcnbg") pod "bd37ba52-7ce1-4151-8764-14a60ee315bc" (UID: "bd37ba52-7ce1-4151-8764-14a60ee315bc"). InnerVolumeSpecName "kube-api-access-zcnbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:12:19 crc kubenswrapper[4606]: I0123 17:12:19.264361 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcnbg\" (UniqueName: \"kubernetes.io/projected/bd37ba52-7ce1-4151-8764-14a60ee315bc-kube-api-access-zcnbg\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:19 crc kubenswrapper[4606]: I0123 17:12:19.353734 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd37ba52-7ce1-4151-8764-14a60ee315bc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bd37ba52-7ce1-4151-8764-14a60ee315bc" (UID: "bd37ba52-7ce1-4151-8764-14a60ee315bc"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:12:19 crc kubenswrapper[4606]: I0123 17:12:19.358016 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd37ba52-7ce1-4151-8764-14a60ee315bc-config" (OuterVolumeSpecName: "config") pod "bd37ba52-7ce1-4151-8764-14a60ee315bc" (UID: "bd37ba52-7ce1-4151-8764-14a60ee315bc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:12:19 crc kubenswrapper[4606]: I0123 17:12:19.365814 4606 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd37ba52-7ce1-4151-8764-14a60ee315bc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:19 crc kubenswrapper[4606]: I0123 17:12:19.366027 4606 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd37ba52-7ce1-4151-8764-14a60ee315bc-config\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:19 crc kubenswrapper[4606]: I0123 17:12:19.387064 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd37ba52-7ce1-4151-8764-14a60ee315bc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bd37ba52-7ce1-4151-8764-14a60ee315bc" (UID: "bd37ba52-7ce1-4151-8764-14a60ee315bc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:12:19 crc kubenswrapper[4606]: I0123 17:12:19.405153 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd37ba52-7ce1-4151-8764-14a60ee315bc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bd37ba52-7ce1-4151-8764-14a60ee315bc" (UID: "bd37ba52-7ce1-4151-8764-14a60ee315bc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:12:19 crc kubenswrapper[4606]: I0123 17:12:19.416010 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd37ba52-7ce1-4151-8764-14a60ee315bc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bd37ba52-7ce1-4151-8764-14a60ee315bc" (UID: "bd37ba52-7ce1-4151-8764-14a60ee315bc"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:12:19 crc kubenswrapper[4606]: I0123 17:12:19.444747 4606 generic.go:334] "Generic (PLEG): container finished" podID="bd37ba52-7ce1-4151-8764-14a60ee315bc" containerID="5b962c6a7f2e043d35614b415307bbe01f336609a38fcab54a4344d8a213d4a0" exitCode=0 Jan 23 17:12:19 crc kubenswrapper[4606]: I0123 17:12:19.445767 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-j54ls" Jan 23 17:12:19 crc kubenswrapper[4606]: I0123 17:12:19.453816 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-j54ls" event={"ID":"bd37ba52-7ce1-4151-8764-14a60ee315bc","Type":"ContainerDied","Data":"5b962c6a7f2e043d35614b415307bbe01f336609a38fcab54a4344d8a213d4a0"} Jan 23 17:12:19 crc kubenswrapper[4606]: I0123 17:12:19.453892 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-j54ls" event={"ID":"bd37ba52-7ce1-4151-8764-14a60ee315bc","Type":"ContainerDied","Data":"d927c839ed3b302e67c388acc71a3bc4bdbfea8c123b72d19ada8402d1f8c25f"} Jan 23 17:12:19 crc kubenswrapper[4606]: I0123 17:12:19.454124 4606 scope.go:117] "RemoveContainer" containerID="5b962c6a7f2e043d35614b415307bbe01f336609a38fcab54a4344d8a213d4a0" Jan 23 17:12:19 crc kubenswrapper[4606]: I0123 17:12:19.454495 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ba10cfdb-879f-426d-bda9-60cbd46e449c" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:12:19 crc kubenswrapper[4606]: I0123 17:12:19.454895 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ba10cfdb-879f-426d-bda9-60cbd46e449c" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:12:19 crc kubenswrapper[4606]: I0123 17:12:19.470315 4606 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd37ba52-7ce1-4151-8764-14a60ee315bc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:19 crc kubenswrapper[4606]: I0123 17:12:19.470362 4606 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd37ba52-7ce1-4151-8764-14a60ee315bc-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:19 crc kubenswrapper[4606]: I0123 17:12:19.470376 4606 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd37ba52-7ce1-4151-8764-14a60ee315bc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:19 crc kubenswrapper[4606]: I0123 17:12:19.524795 4606 scope.go:117] "RemoveContainer" containerID="65766a0319527437183a8ab55d7e18c87eea42963d9d31486fa6da8d44f611d7" Jan 23 17:12:19 crc kubenswrapper[4606]: I0123 17:12:19.585763 4606 scope.go:117] "RemoveContainer" containerID="5b962c6a7f2e043d35614b415307bbe01f336609a38fcab54a4344d8a213d4a0" Jan 23 17:12:19 crc kubenswrapper[4606]: E0123 17:12:19.586227 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b962c6a7f2e043d35614b415307bbe01f336609a38fcab54a4344d8a213d4a0\": container with ID starting with 5b962c6a7f2e043d35614b415307bbe01f336609a38fcab54a4344d8a213d4a0 not found: ID does not exist" containerID="5b962c6a7f2e043d35614b415307bbe01f336609a38fcab54a4344d8a213d4a0" Jan 23 17:12:19 crc kubenswrapper[4606]: I0123 17:12:19.586264 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b962c6a7f2e043d35614b415307bbe01f336609a38fcab54a4344d8a213d4a0"} err="failed to get container status \"5b962c6a7f2e043d35614b415307bbe01f336609a38fcab54a4344d8a213d4a0\": 
rpc error: code = NotFound desc = could not find container \"5b962c6a7f2e043d35614b415307bbe01f336609a38fcab54a4344d8a213d4a0\": container with ID starting with 5b962c6a7f2e043d35614b415307bbe01f336609a38fcab54a4344d8a213d4a0 not found: ID does not exist" Jan 23 17:12:19 crc kubenswrapper[4606]: I0123 17:12:19.586293 4606 scope.go:117] "RemoveContainer" containerID="65766a0319527437183a8ab55d7e18c87eea42963d9d31486fa6da8d44f611d7" Jan 23 17:12:19 crc kubenswrapper[4606]: E0123 17:12:19.586567 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65766a0319527437183a8ab55d7e18c87eea42963d9d31486fa6da8d44f611d7\": container with ID starting with 65766a0319527437183a8ab55d7e18c87eea42963d9d31486fa6da8d44f611d7 not found: ID does not exist" containerID="65766a0319527437183a8ab55d7e18c87eea42963d9d31486fa6da8d44f611d7" Jan 23 17:12:19 crc kubenswrapper[4606]: I0123 17:12:19.586609 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65766a0319527437183a8ab55d7e18c87eea42963d9d31486fa6da8d44f611d7"} err="failed to get container status \"65766a0319527437183a8ab55d7e18c87eea42963d9d31486fa6da8d44f611d7\": rpc error: code = NotFound desc = could not find container \"65766a0319527437183a8ab55d7e18c87eea42963d9d31486fa6da8d44f611d7\": container with ID starting with 65766a0319527437183a8ab55d7e18c87eea42963d9d31486fa6da8d44f611d7 not found: ID does not exist" Jan 23 17:12:19 crc kubenswrapper[4606]: I0123 17:12:19.622943 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-j54ls"] Jan 23 17:12:19 crc kubenswrapper[4606]: I0123 17:12:19.656643 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-j54ls"] Jan 23 17:12:19 crc kubenswrapper[4606]: I0123 17:12:19.947233 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-lqzlc" Jan 23 17:12:19 crc kubenswrapper[4606]: I0123 17:12:19.979447 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cc7ebc9-1a3e-41e8-a0c3-269d425ec721-combined-ca-bundle\") pod \"4cc7ebc9-1a3e-41e8-a0c3-269d425ec721\" (UID: \"4cc7ebc9-1a3e-41e8-a0c3-269d425ec721\") " Jan 23 17:12:19 crc kubenswrapper[4606]: I0123 17:12:19.979561 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79zd7\" (UniqueName: \"kubernetes.io/projected/4cc7ebc9-1a3e-41e8-a0c3-269d425ec721-kube-api-access-79zd7\") pod \"4cc7ebc9-1a3e-41e8-a0c3-269d425ec721\" (UID: \"4cc7ebc9-1a3e-41e8-a0c3-269d425ec721\") " Jan 23 17:12:19 crc kubenswrapper[4606]: I0123 17:12:19.980025 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cc7ebc9-1a3e-41e8-a0c3-269d425ec721-config-data\") pod \"4cc7ebc9-1a3e-41e8-a0c3-269d425ec721\" (UID: \"4cc7ebc9-1a3e-41e8-a0c3-269d425ec721\") " Jan 23 17:12:19 crc kubenswrapper[4606]: I0123 17:12:19.980141 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cc7ebc9-1a3e-41e8-a0c3-269d425ec721-scripts\") pod \"4cc7ebc9-1a3e-41e8-a0c3-269d425ec721\" (UID: \"4cc7ebc9-1a3e-41e8-a0c3-269d425ec721\") " Jan 23 17:12:19 crc kubenswrapper[4606]: I0123 17:12:19.988787 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cc7ebc9-1a3e-41e8-a0c3-269d425ec721-kube-api-access-79zd7" (OuterVolumeSpecName: "kube-api-access-79zd7") pod "4cc7ebc9-1a3e-41e8-a0c3-269d425ec721" (UID: "4cc7ebc9-1a3e-41e8-a0c3-269d425ec721"). InnerVolumeSpecName "kube-api-access-79zd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:12:20 crc kubenswrapper[4606]: I0123 17:12:20.009815 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cc7ebc9-1a3e-41e8-a0c3-269d425ec721-scripts" (OuterVolumeSpecName: "scripts") pod "4cc7ebc9-1a3e-41e8-a0c3-269d425ec721" (UID: "4cc7ebc9-1a3e-41e8-a0c3-269d425ec721"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:12:20 crc kubenswrapper[4606]: I0123 17:12:20.024119 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cc7ebc9-1a3e-41e8-a0c3-269d425ec721-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4cc7ebc9-1a3e-41e8-a0c3-269d425ec721" (UID: "4cc7ebc9-1a3e-41e8-a0c3-269d425ec721"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:12:20 crc kubenswrapper[4606]: I0123 17:12:20.047423 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cc7ebc9-1a3e-41e8-a0c3-269d425ec721-config-data" (OuterVolumeSpecName: "config-data") pod "4cc7ebc9-1a3e-41e8-a0c3-269d425ec721" (UID: "4cc7ebc9-1a3e-41e8-a0c3-269d425ec721"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:12:20 crc kubenswrapper[4606]: I0123 17:12:20.085169 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79zd7\" (UniqueName: \"kubernetes.io/projected/4cc7ebc9-1a3e-41e8-a0c3-269d425ec721-kube-api-access-79zd7\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:20 crc kubenswrapper[4606]: I0123 17:12:20.085215 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cc7ebc9-1a3e-41e8-a0c3-269d425ec721-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:20 crc kubenswrapper[4606]: I0123 17:12:20.085229 4606 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cc7ebc9-1a3e-41e8-a0c3-269d425ec721-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:20 crc kubenswrapper[4606]: I0123 17:12:20.085241 4606 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cc7ebc9-1a3e-41e8-a0c3-269d425ec721-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:20 crc kubenswrapper[4606]: I0123 17:12:20.375051 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd37ba52-7ce1-4151-8764-14a60ee315bc" path="/var/lib/kubelet/pods/bd37ba52-7ce1-4151-8764-14a60ee315bc/volumes" Jan 23 17:12:20 crc kubenswrapper[4606]: I0123 17:12:20.455780 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-lqzlc" Jan 23 17:12:20 crc kubenswrapper[4606]: I0123 17:12:20.455768 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-lqzlc" event={"ID":"4cc7ebc9-1a3e-41e8-a0c3-269d425ec721","Type":"ContainerDied","Data":"ca7ff647b03aea6c75f67ffbd4fbef60671f14e1184134f9397de7923ffec1bd"} Jan 23 17:12:20 crc kubenswrapper[4606]: I0123 17:12:20.455852 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca7ff647b03aea6c75f67ffbd4fbef60671f14e1184134f9397de7923ffec1bd" Jan 23 17:12:20 crc kubenswrapper[4606]: I0123 17:12:20.659475 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 17:12:20 crc kubenswrapper[4606]: I0123 17:12:20.703288 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 17:12:20 crc kubenswrapper[4606]: I0123 17:12:20.703533 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ba10cfdb-879f-426d-bda9-60cbd46e449c" containerName="nova-api-log" containerID="cri-o://b053a1cca7853d500963d30d9d52ba513f27dc44de30d7766f3e887879dc16d3" gracePeriod=30 Jan 23 17:12:20 crc kubenswrapper[4606]: I0123 17:12:20.703636 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ba10cfdb-879f-426d-bda9-60cbd46e449c" containerName="nova-api-api" containerID="cri-o://cb7489f0a801d1f56f3056dd0bc4a1d84bdf237032db8c20e01d7e6d36da210c" gracePeriod=30 Jan 23 17:12:21 crc kubenswrapper[4606]: I0123 17:12:21.473199 4606 generic.go:334] "Generic (PLEG): container finished" podID="ba10cfdb-879f-426d-bda9-60cbd46e449c" containerID="b053a1cca7853d500963d30d9d52ba513f27dc44de30d7766f3e887879dc16d3" exitCode=143 Jan 23 17:12:21 crc kubenswrapper[4606]: I0123 17:12:21.473478 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"ba10cfdb-879f-426d-bda9-60cbd46e449c","Type":"ContainerDied","Data":"b053a1cca7853d500963d30d9d52ba513f27dc44de30d7766f3e887879dc16d3"} Jan 23 17:12:21 crc kubenswrapper[4606]: I0123 17:12:21.475072 4606 generic.go:334] "Generic (PLEG): container finished" podID="d4b4e288-0565-452e-b141-fa028539d66e" containerID="95d4a6b67ebafbaafe1a527206a9b06c06bd17fab3abfd11617056641f14588a" exitCode=0 Jan 23 17:12:21 crc kubenswrapper[4606]: I0123 17:12:21.475206 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="6bab3c2a-ca1c-4288-9854-787bab9b6eeb" containerName="nova-scheduler-scheduler" containerID="cri-o://84bfda0bb7f587d2d5cf45a3743a7077a970d71283888e6f625dc3d64e8c5a1e" gracePeriod=30 Jan 23 17:12:21 crc kubenswrapper[4606]: I0123 17:12:21.475436 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-m76hq" event={"ID":"d4b4e288-0565-452e-b141-fa028539d66e","Type":"ContainerDied","Data":"95d4a6b67ebafbaafe1a527206a9b06c06bd17fab3abfd11617056641f14588a"} Jan 23 17:12:22 crc kubenswrapper[4606]: E0123 17:12:22.627360 4606 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="84bfda0bb7f587d2d5cf45a3743a7077a970d71283888e6f625dc3d64e8c5a1e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 17:12:22 crc kubenswrapper[4606]: E0123 17:12:22.629108 4606 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="84bfda0bb7f587d2d5cf45a3743a7077a970d71283888e6f625dc3d64e8c5a1e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 17:12:22 crc kubenswrapper[4606]: E0123 17:12:22.630545 4606 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="84bfda0bb7f587d2d5cf45a3743a7077a970d71283888e6f625dc3d64e8c5a1e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 17:12:22 crc kubenswrapper[4606]: E0123 17:12:22.630581 4606 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="6bab3c2a-ca1c-4288-9854-787bab9b6eeb" containerName="nova-scheduler-scheduler" Jan 23 17:12:22 crc kubenswrapper[4606]: I0123 17:12:22.901669 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-m76hq" Jan 23 17:12:22 crc kubenswrapper[4606]: I0123 17:12:22.946395 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4b4e288-0565-452e-b141-fa028539d66e-combined-ca-bundle\") pod \"d4b4e288-0565-452e-b141-fa028539d66e\" (UID: \"d4b4e288-0565-452e-b141-fa028539d66e\") " Jan 23 17:12:22 crc kubenswrapper[4606]: I0123 17:12:22.946606 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfqgw\" (UniqueName: \"kubernetes.io/projected/d4b4e288-0565-452e-b141-fa028539d66e-kube-api-access-mfqgw\") pod \"d4b4e288-0565-452e-b141-fa028539d66e\" (UID: \"d4b4e288-0565-452e-b141-fa028539d66e\") " Jan 23 17:12:22 crc kubenswrapper[4606]: I0123 17:12:22.946711 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4b4e288-0565-452e-b141-fa028539d66e-scripts\") pod \"d4b4e288-0565-452e-b141-fa028539d66e\" (UID: \"d4b4e288-0565-452e-b141-fa028539d66e\") " Jan 23 17:12:22 crc kubenswrapper[4606]: I0123 17:12:22.946739 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4b4e288-0565-452e-b141-fa028539d66e-config-data\") pod \"d4b4e288-0565-452e-b141-fa028539d66e\" (UID: \"d4b4e288-0565-452e-b141-fa028539d66e\") " Jan 23 17:12:22 crc kubenswrapper[4606]: I0123 17:12:22.969621 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4b4e288-0565-452e-b141-fa028539d66e-scripts" (OuterVolumeSpecName: "scripts") pod "d4b4e288-0565-452e-b141-fa028539d66e" (UID: "d4b4e288-0565-452e-b141-fa028539d66e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:12:22 crc kubenswrapper[4606]: I0123 17:12:22.991473 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4b4e288-0565-452e-b141-fa028539d66e-kube-api-access-mfqgw" (OuterVolumeSpecName: "kube-api-access-mfqgw") pod "d4b4e288-0565-452e-b141-fa028539d66e" (UID: "d4b4e288-0565-452e-b141-fa028539d66e"). InnerVolumeSpecName "kube-api-access-mfqgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:12:23 crc kubenswrapper[4606]: I0123 17:12:23.002867 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4b4e288-0565-452e-b141-fa028539d66e-config-data" (OuterVolumeSpecName: "config-data") pod "d4b4e288-0565-452e-b141-fa028539d66e" (UID: "d4b4e288-0565-452e-b141-fa028539d66e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:12:23 crc kubenswrapper[4606]: I0123 17:12:23.004048 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4b4e288-0565-452e-b141-fa028539d66e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4b4e288-0565-452e-b141-fa028539d66e" (UID: "d4b4e288-0565-452e-b141-fa028539d66e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:12:23 crc kubenswrapper[4606]: I0123 17:12:23.055029 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfqgw\" (UniqueName: \"kubernetes.io/projected/d4b4e288-0565-452e-b141-fa028539d66e-kube-api-access-mfqgw\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:23 crc kubenswrapper[4606]: I0123 17:12:23.055081 4606 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4b4e288-0565-452e-b141-fa028539d66e-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:23 crc kubenswrapper[4606]: I0123 17:12:23.055097 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4b4e288-0565-452e-b141-fa028539d66e-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:23 crc kubenswrapper[4606]: I0123 17:12:23.055110 4606 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4b4e288-0565-452e-b141-fa028539d66e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:23 crc kubenswrapper[4606]: I0123 17:12:23.496066 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-m76hq" event={"ID":"d4b4e288-0565-452e-b141-fa028539d66e","Type":"ContainerDied","Data":"3854ace33dfd0bb7bdd347ac91d1e5a7ea49037dd59592e0c5af03df237e0727"} Jan 23 17:12:23 crc kubenswrapper[4606]: I0123 17:12:23.496444 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3854ace33dfd0bb7bdd347ac91d1e5a7ea49037dd59592e0c5af03df237e0727" Jan 23 17:12:23 crc kubenswrapper[4606]: I0123 17:12:23.496143 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-m76hq" Jan 23 17:12:23 crc kubenswrapper[4606]: I0123 17:12:23.602780 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 23 17:12:23 crc kubenswrapper[4606]: E0123 17:12:23.603180 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd37ba52-7ce1-4151-8764-14a60ee315bc" containerName="init" Jan 23 17:12:23 crc kubenswrapper[4606]: I0123 17:12:23.603197 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd37ba52-7ce1-4151-8764-14a60ee315bc" containerName="init" Jan 23 17:12:23 crc kubenswrapper[4606]: E0123 17:12:23.603208 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd37ba52-7ce1-4151-8764-14a60ee315bc" containerName="dnsmasq-dns" Jan 23 17:12:23 crc kubenswrapper[4606]: I0123 17:12:23.603214 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd37ba52-7ce1-4151-8764-14a60ee315bc" containerName="dnsmasq-dns" Jan 23 17:12:23 crc kubenswrapper[4606]: E0123 17:12:23.603227 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4b4e288-0565-452e-b141-fa028539d66e" containerName="nova-cell1-conductor-db-sync" Jan 23 17:12:23 crc kubenswrapper[4606]: I0123 17:12:23.603233 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b4e288-0565-452e-b141-fa028539d66e" containerName="nova-cell1-conductor-db-sync" Jan 23 17:12:23 crc kubenswrapper[4606]: E0123 17:12:23.603245 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cc7ebc9-1a3e-41e8-a0c3-269d425ec721" containerName="nova-manage" Jan 23 17:12:23 crc kubenswrapper[4606]: I0123 17:12:23.603250 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cc7ebc9-1a3e-41e8-a0c3-269d425ec721" 
containerName="nova-manage" Jan 23 17:12:23 crc kubenswrapper[4606]: I0123 17:12:23.603417 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd37ba52-7ce1-4151-8764-14a60ee315bc" containerName="dnsmasq-dns" Jan 23 17:12:23 crc kubenswrapper[4606]: I0123 17:12:23.603448 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4b4e288-0565-452e-b141-fa028539d66e" containerName="nova-cell1-conductor-db-sync" Jan 23 17:12:23 crc kubenswrapper[4606]: I0123 17:12:23.603461 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cc7ebc9-1a3e-41e8-a0c3-269d425ec721" containerName="nova-manage" Jan 23 17:12:23 crc kubenswrapper[4606]: I0123 17:12:23.604042 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 23 17:12:23 crc kubenswrapper[4606]: I0123 17:12:23.609549 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 23 17:12:23 crc kubenswrapper[4606]: I0123 17:12:23.620088 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 23 17:12:23 crc kubenswrapper[4606]: I0123 17:12:23.667030 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6200263b-ab5c-41a9-a8cf-d44d8e6db987-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6200263b-ab5c-41a9-a8cf-d44d8e6db987\") " pod="openstack/nova-cell1-conductor-0" Jan 23 17:12:23 crc kubenswrapper[4606]: I0123 17:12:23.667082 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq4rs\" (UniqueName: \"kubernetes.io/projected/6200263b-ab5c-41a9-a8cf-d44d8e6db987-kube-api-access-lq4rs\") pod \"nova-cell1-conductor-0\" (UID: \"6200263b-ab5c-41a9-a8cf-d44d8e6db987\") " pod="openstack/nova-cell1-conductor-0" Jan 23 17:12:23 crc kubenswrapper[4606]: I0123 17:12:23.667170 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6200263b-ab5c-41a9-a8cf-d44d8e6db987-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6200263b-ab5c-41a9-a8cf-d44d8e6db987\") " pod="openstack/nova-cell1-conductor-0" Jan 23 17:12:23 crc kubenswrapper[4606]: I0123 17:12:23.770010 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6200263b-ab5c-41a9-a8cf-d44d8e6db987-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6200263b-ab5c-41a9-a8cf-d44d8e6db987\") " pod="openstack/nova-cell1-conductor-0" Jan 23 17:12:23 crc kubenswrapper[4606]: I0123 17:12:23.770370 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq4rs\" (UniqueName: \"kubernetes.io/projected/6200263b-ab5c-41a9-a8cf-d44d8e6db987-kube-api-access-lq4rs\") pod \"nova-cell1-conductor-0\" (UID: \"6200263b-ab5c-41a9-a8cf-d44d8e6db987\") " pod="openstack/nova-cell1-conductor-0" Jan 23 17:12:23 crc kubenswrapper[4606]: I0123 17:12:23.770577 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6200263b-ab5c-41a9-a8cf-d44d8e6db987-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6200263b-ab5c-41a9-a8cf-d44d8e6db987\") " pod="openstack/nova-cell1-conductor-0" Jan 23 17:12:23 crc kubenswrapper[4606]: I0123 
17:12:23.777824 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6200263b-ab5c-41a9-a8cf-d44d8e6db987-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"6200263b-ab5c-41a9-a8cf-d44d8e6db987\") " pod="openstack/nova-cell1-conductor-0" Jan 23 17:12:23 crc kubenswrapper[4606]: I0123 17:12:23.779735 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6200263b-ab5c-41a9-a8cf-d44d8e6db987-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"6200263b-ab5c-41a9-a8cf-d44d8e6db987\") " pod="openstack/nova-cell1-conductor-0" Jan 23 17:12:23 crc kubenswrapper[4606]: I0123 17:12:23.806518 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq4rs\" (UniqueName: \"kubernetes.io/projected/6200263b-ab5c-41a9-a8cf-d44d8e6db987-kube-api-access-lq4rs\") pod \"nova-cell1-conductor-0\" (UID: \"6200263b-ab5c-41a9-a8cf-d44d8e6db987\") " pod="openstack/nova-cell1-conductor-0" Jan 23 17:12:23 crc kubenswrapper[4606]: I0123 17:12:23.933825 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 23 17:12:24 crc kubenswrapper[4606]: I0123 17:12:24.462845 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 23 17:12:24 crc kubenswrapper[4606]: W0123 17:12:24.474005 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6200263b_ab5c_41a9_a8cf_d44d8e6db987.slice/crio-e757625809f6d3187649079e14360a35cc00a71f5c1c7e2fdc2d1793eb85e1e1 WatchSource:0}: Error finding container e757625809f6d3187649079e14360a35cc00a71f5c1c7e2fdc2d1793eb85e1e1: Status 404 returned error can't find the container with id e757625809f6d3187649079e14360a35cc00a71f5c1c7e2fdc2d1793eb85e1e1 Jan 23 17:12:24 crc kubenswrapper[4606]: I0123 17:12:24.515085 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"6200263b-ab5c-41a9-a8cf-d44d8e6db987","Type":"ContainerStarted","Data":"e757625809f6d3187649079e14360a35cc00a71f5c1c7e2fdc2d1793eb85e1e1"} Jan 23 17:12:25 crc kubenswrapper[4606]: I0123 17:12:25.080547 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 23 17:12:25 crc kubenswrapper[4606]: I0123 17:12:25.197744 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bab3c2a-ca1c-4288-9854-787bab9b6eeb-config-data\") pod \"6bab3c2a-ca1c-4288-9854-787bab9b6eeb\" (UID: \"6bab3c2a-ca1c-4288-9854-787bab9b6eeb\") "
Jan 23 17:12:25 crc kubenswrapper[4606]: I0123 17:12:25.197895 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bab3c2a-ca1c-4288-9854-787bab9b6eeb-combined-ca-bundle\") pod \"6bab3c2a-ca1c-4288-9854-787bab9b6eeb\" (UID: \"6bab3c2a-ca1c-4288-9854-787bab9b6eeb\") "
Jan 23 17:12:25 crc kubenswrapper[4606]: I0123 17:12:25.198023 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fhjw\" (UniqueName: \"kubernetes.io/projected/6bab3c2a-ca1c-4288-9854-787bab9b6eeb-kube-api-access-7fhjw\") pod \"6bab3c2a-ca1c-4288-9854-787bab9b6eeb\" (UID: \"6bab3c2a-ca1c-4288-9854-787bab9b6eeb\") "
Jan 23 17:12:25 crc kubenswrapper[4606]: I0123 17:12:25.202322 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bab3c2a-ca1c-4288-9854-787bab9b6eeb-kube-api-access-7fhjw" (OuterVolumeSpecName: "kube-api-access-7fhjw") pod "6bab3c2a-ca1c-4288-9854-787bab9b6eeb" (UID: "6bab3c2a-ca1c-4288-9854-787bab9b6eeb"). InnerVolumeSpecName "kube-api-access-7fhjw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 17:12:25 crc kubenswrapper[4606]: I0123 17:12:25.223604 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bab3c2a-ca1c-4288-9854-787bab9b6eeb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6bab3c2a-ca1c-4288-9854-787bab9b6eeb" (UID: "6bab3c2a-ca1c-4288-9854-787bab9b6eeb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 17:12:25 crc kubenswrapper[4606]: I0123 17:12:25.228869 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bab3c2a-ca1c-4288-9854-787bab9b6eeb-config-data" (OuterVolumeSpecName: "config-data") pod "6bab3c2a-ca1c-4288-9854-787bab9b6eeb" (UID: "6bab3c2a-ca1c-4288-9854-787bab9b6eeb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 17:12:25 crc kubenswrapper[4606]: I0123 17:12:25.300557 4606 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bab3c2a-ca1c-4288-9854-787bab9b6eeb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 17:12:25 crc kubenswrapper[4606]: I0123 17:12:25.300847 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fhjw\" (UniqueName: \"kubernetes.io/projected/6bab3c2a-ca1c-4288-9854-787bab9b6eeb-kube-api-access-7fhjw\") on node \"crc\" DevicePath \"\""
Jan 23 17:12:25 crc kubenswrapper[4606]: I0123 17:12:25.300975 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bab3c2a-ca1c-4288-9854-787bab9b6eeb-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 17:12:25 crc kubenswrapper[4606]: I0123 17:12:25.526064 4606 generic.go:334] "Generic (PLEG): container finished" podID="6bab3c2a-ca1c-4288-9854-787bab9b6eeb" containerID="84bfda0bb7f587d2d5cf45a3743a7077a970d71283888e6f625dc3d64e8c5a1e" exitCode=0
Jan 23 17:12:25 crc kubenswrapper[4606]: I0123 17:12:25.526117 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 23 17:12:25 crc kubenswrapper[4606]: I0123 17:12:25.526139 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6bab3c2a-ca1c-4288-9854-787bab9b6eeb","Type":"ContainerDied","Data":"84bfda0bb7f587d2d5cf45a3743a7077a970d71283888e6f625dc3d64e8c5a1e"}
Jan 23 17:12:25 crc kubenswrapper[4606]: I0123 17:12:25.528825 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6bab3c2a-ca1c-4288-9854-787bab9b6eeb","Type":"ContainerDied","Data":"f5bfff6958041a53a38685e651ff2bb8c505b5c62ce6eaf9b28f11a94cbb9f5f"}
Jan 23 17:12:25 crc kubenswrapper[4606]: I0123 17:12:25.528909 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"6200263b-ab5c-41a9-a8cf-d44d8e6db987","Type":"ContainerStarted","Data":"b6a4f8a0b280ce84062836470990b1f55bf82d721392effecf6c6370cafc21a3"}
Jan 23 17:12:25 crc kubenswrapper[4606]: I0123 17:12:25.528941 4606 scope.go:117] "RemoveContainer" containerID="84bfda0bb7f587d2d5cf45a3743a7077a970d71283888e6f625dc3d64e8c5a1e"
Jan 23 17:12:25 crc kubenswrapper[4606]: I0123 17:12:25.529053 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Jan 23 17:12:25 crc kubenswrapper[4606]: I0123 17:12:25.558773 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.558743491 podStartE2EDuration="2.558743491s" podCreationTimestamp="2026-01-23 17:12:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:12:25.555623177 +0000 UTC m=+1179.555815583" watchObservedRunningTime="2026-01-23 17:12:25.558743491 +0000 UTC m=+1179.558935917"
Jan 23 17:12:25 crc kubenswrapper[4606]: I0123 17:12:25.560903 4606 scope.go:117] "RemoveContainer" containerID="84bfda0bb7f587d2d5cf45a3743a7077a970d71283888e6f625dc3d64e8c5a1e"
Jan 23 17:12:25 crc kubenswrapper[4606]: E0123 17:12:25.561609 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84bfda0bb7f587d2d5cf45a3743a7077a970d71283888e6f625dc3d64e8c5a1e\": container with ID starting with 84bfda0bb7f587d2d5cf45a3743a7077a970d71283888e6f625dc3d64e8c5a1e not found: ID does not exist" containerID="84bfda0bb7f587d2d5cf45a3743a7077a970d71283888e6f625dc3d64e8c5a1e"
Jan 23 17:12:25 crc kubenswrapper[4606]: I0123 17:12:25.561646 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84bfda0bb7f587d2d5cf45a3743a7077a970d71283888e6f625dc3d64e8c5a1e"} err="failed to get container status \"84bfda0bb7f587d2d5cf45a3743a7077a970d71283888e6f625dc3d64e8c5a1e\": rpc error: code = NotFound desc = could not find container \"84bfda0bb7f587d2d5cf45a3743a7077a970d71283888e6f625dc3d64e8c5a1e\": container with ID starting with 84bfda0bb7f587d2d5cf45a3743a7077a970d71283888e6f625dc3d64e8c5a1e not found: ID does not exist"
Jan 23 17:12:25 crc kubenswrapper[4606]: I0123 17:12:25.596557 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 23 17:12:25 crc kubenswrapper[4606]: I0123 17:12:25.608505 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 23 17:12:25 crc kubenswrapper[4606]: I0123 17:12:25.624552 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 23 17:12:25 crc kubenswrapper[4606]: E0123 17:12:25.625616 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bab3c2a-ca1c-4288-9854-787bab9b6eeb" containerName="nova-scheduler-scheduler"
Jan 23 17:12:25 crc kubenswrapper[4606]: I0123 17:12:25.625635 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bab3c2a-ca1c-4288-9854-787bab9b6eeb" containerName="nova-scheduler-scheduler"
Jan 23 17:12:25 crc kubenswrapper[4606]: I0123 17:12:25.637900 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bab3c2a-ca1c-4288-9854-787bab9b6eeb" containerName="nova-scheduler-scheduler"
Jan 23 17:12:25 crc kubenswrapper[4606]: I0123 17:12:25.639818 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 23 17:12:25 crc kubenswrapper[4606]: I0123 17:12:25.644367 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 23 17:12:25 crc kubenswrapper[4606]: I0123 17:12:25.696489 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 23 17:12:25 crc kubenswrapper[4606]: I0123 17:12:25.707667 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv9bg\" (UniqueName: \"kubernetes.io/projected/fa0989ad-8e77-483c-93a2-ce76f542e610-kube-api-access-zv9bg\") pod \"nova-scheduler-0\" (UID: \"fa0989ad-8e77-483c-93a2-ce76f542e610\") " pod="openstack/nova-scheduler-0"
Jan 23 17:12:25 crc kubenswrapper[4606]: I0123 17:12:25.707925 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa0989ad-8e77-483c-93a2-ce76f542e610-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fa0989ad-8e77-483c-93a2-ce76f542e610\") " pod="openstack/nova-scheduler-0"
Jan 23 17:12:25 crc kubenswrapper[4606]: I0123 17:12:25.708058 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa0989ad-8e77-483c-93a2-ce76f542e610-config-data\") pod \"nova-scheduler-0\" (UID: \"fa0989ad-8e77-483c-93a2-ce76f542e610\") " pod="openstack/nova-scheduler-0"
Jan 23 17:12:25 crc kubenswrapper[4606]: E0123 17:12:25.770320 4606 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6bab3c2a_ca1c_4288_9854_787bab9b6eeb.slice\": RecentStats: unable to find data in memory cache]"
Jan 23 17:12:25 crc kubenswrapper[4606]: I0123 17:12:25.810277 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa0989ad-8e77-483c-93a2-ce76f542e610-config-data\") pod \"nova-scheduler-0\" (UID: \"fa0989ad-8e77-483c-93a2-ce76f542e610\") " pod="openstack/nova-scheduler-0"
Jan 23 17:12:25 crc kubenswrapper[4606]: I0123 17:12:25.810511 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv9bg\" (UniqueName: \"kubernetes.io/projected/fa0989ad-8e77-483c-93a2-ce76f542e610-kube-api-access-zv9bg\") pod \"nova-scheduler-0\" (UID: \"fa0989ad-8e77-483c-93a2-ce76f542e610\") " pod="openstack/nova-scheduler-0"
Jan 23 17:12:25 crc kubenswrapper[4606]: I0123 17:12:25.810569 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa0989ad-8e77-483c-93a2-ce76f542e610-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fa0989ad-8e77-483c-93a2-ce76f542e610\") " pod="openstack/nova-scheduler-0"
Jan 23 17:12:25 crc kubenswrapper[4606]: I0123 17:12:25.816920 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa0989ad-8e77-483c-93a2-ce76f542e610-config-data\") pod \"nova-scheduler-0\" (UID: \"fa0989ad-8e77-483c-93a2-ce76f542e610\") " pod="openstack/nova-scheduler-0"
Jan 23 17:12:25 crc kubenswrapper[4606]: I0123 17:12:25.818299 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa0989ad-8e77-483c-93a2-ce76f542e610-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fa0989ad-8e77-483c-93a2-ce76f542e610\") " pod="openstack/nova-scheduler-0"
Jan 23 17:12:25 crc kubenswrapper[4606]: I0123 17:12:25.836572 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv9bg\" (UniqueName: \"kubernetes.io/projected/fa0989ad-8e77-483c-93a2-ce76f542e610-kube-api-access-zv9bg\") pod \"nova-scheduler-0\" (UID: \"fa0989ad-8e77-483c-93a2-ce76f542e610\") " pod="openstack/nova-scheduler-0"
Jan 23 17:12:25 crc kubenswrapper[4606]: I0123 17:12:25.962047 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 23 17:12:26 crc kubenswrapper[4606]: I0123 17:12:26.379478 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bab3c2a-ca1c-4288-9854-787bab9b6eeb" path="/var/lib/kubelet/pods/6bab3c2a-ca1c-4288-9854-787bab9b6eeb/volumes"
Jan 23 17:12:26 crc kubenswrapper[4606]: I0123 17:12:26.429429 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 23 17:12:26 crc kubenswrapper[4606]: I0123 17:12:26.542769 4606 generic.go:334] "Generic (PLEG): container finished" podID="ba10cfdb-879f-426d-bda9-60cbd46e449c" containerID="cb7489f0a801d1f56f3056dd0bc4a1d84bdf237032db8c20e01d7e6d36da210c" exitCode=0
Jan 23 17:12:26 crc kubenswrapper[4606]: I0123 17:12:26.542841 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ba10cfdb-879f-426d-bda9-60cbd46e449c","Type":"ContainerDied","Data":"cb7489f0a801d1f56f3056dd0bc4a1d84bdf237032db8c20e01d7e6d36da210c"}
Jan 23 17:12:26 crc kubenswrapper[4606]: I0123 17:12:26.542881 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ba10cfdb-879f-426d-bda9-60cbd46e449c","Type":"ContainerDied","Data":"214aafd5701c4ef30d29f03c1c15dddd245ee323710346d85900ccbf855ca97e"}
Jan 23 17:12:26 crc kubenswrapper[4606]: I0123 17:12:26.542897 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="214aafd5701c4ef30d29f03c1c15dddd245ee323710346d85900ccbf855ca97e"
Jan 23 17:12:26 crc kubenswrapper[4606]: I0123 17:12:26.546387 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fa0989ad-8e77-483c-93a2-ce76f542e610","Type":"ContainerStarted","Data":"5db9177e3dc733b78acb18f8411fe55c750f30c03be15b78007ea0f1c7f3dd13"}
Jan 23 17:12:26 crc kubenswrapper[4606]: I0123 17:12:26.547470 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 23 17:12:26 crc kubenswrapper[4606]: I0123 17:12:26.627153 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba10cfdb-879f-426d-bda9-60cbd46e449c-logs\") pod \"ba10cfdb-879f-426d-bda9-60cbd46e449c\" (UID: \"ba10cfdb-879f-426d-bda9-60cbd46e449c\") "
Jan 23 17:12:26 crc kubenswrapper[4606]: I0123 17:12:26.627200 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tg7\" (UniqueName: \"kubernetes.io/projected/ba10cfdb-879f-426d-bda9-60cbd46e449c-kube-api-access-d4tg7\") pod \"ba10cfdb-879f-426d-bda9-60cbd46e449c\" (UID: \"ba10cfdb-879f-426d-bda9-60cbd46e449c\") "
Jan 23 17:12:26 crc kubenswrapper[4606]: I0123 17:12:26.627330 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba10cfdb-879f-426d-bda9-60cbd46e449c-config-data\") pod \"ba10cfdb-879f-426d-bda9-60cbd46e449c\" (UID: \"ba10cfdb-879f-426d-bda9-60cbd46e449c\") "
Jan 23 17:12:26 crc kubenswrapper[4606]: I0123 17:12:26.627413 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba10cfdb-879f-426d-bda9-60cbd46e449c-combined-ca-bundle\") pod \"ba10cfdb-879f-426d-bda9-60cbd46e449c\" (UID: \"ba10cfdb-879f-426d-bda9-60cbd46e449c\") "
Jan 23 17:12:26 crc kubenswrapper[4606]: I0123 17:12:26.628141 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba10cfdb-879f-426d-bda9-60cbd46e449c-logs" (OuterVolumeSpecName: "logs") pod "ba10cfdb-879f-426d-bda9-60cbd46e449c" (UID: "ba10cfdb-879f-426d-bda9-60cbd46e449c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 17:12:26 crc kubenswrapper[4606]: I0123 17:12:26.632435 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba10cfdb-879f-426d-bda9-60cbd46e449c-kube-api-access-d4tg7" (OuterVolumeSpecName: "kube-api-access-d4tg7") pod "ba10cfdb-879f-426d-bda9-60cbd46e449c" (UID: "ba10cfdb-879f-426d-bda9-60cbd46e449c"). InnerVolumeSpecName "kube-api-access-d4tg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 17:12:26 crc kubenswrapper[4606]: I0123 17:12:26.670235 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba10cfdb-879f-426d-bda9-60cbd46e449c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba10cfdb-879f-426d-bda9-60cbd46e449c" (UID: "ba10cfdb-879f-426d-bda9-60cbd46e449c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 17:12:26 crc kubenswrapper[4606]: I0123 17:12:26.672355 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba10cfdb-879f-426d-bda9-60cbd46e449c-config-data" (OuterVolumeSpecName: "config-data") pod "ba10cfdb-879f-426d-bda9-60cbd46e449c" (UID: "ba10cfdb-879f-426d-bda9-60cbd46e449c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 17:12:26 crc kubenswrapper[4606]: I0123 17:12:26.730322 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba10cfdb-879f-426d-bda9-60cbd46e449c-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 17:12:26 crc kubenswrapper[4606]: I0123 17:12:26.730384 4606 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba10cfdb-879f-426d-bda9-60cbd46e449c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 17:12:26 crc kubenswrapper[4606]: I0123 17:12:26.730405 4606 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba10cfdb-879f-426d-bda9-60cbd46e449c-logs\") on node \"crc\" DevicePath \"\""
Jan 23 17:12:26 crc kubenswrapper[4606]: I0123 17:12:26.730421 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4tg7\" (UniqueName: \"kubernetes.io/projected/ba10cfdb-879f-426d-bda9-60cbd46e449c-kube-api-access-d4tg7\") on node \"crc\" DevicePath \"\""
Jan 23 17:12:26 crc kubenswrapper[4606]: I0123 17:12:26.815730 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 23 17:12:27 crc kubenswrapper[4606]: I0123 17:12:27.555498 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 23 17:12:27 crc kubenswrapper[4606]: I0123 17:12:27.556894 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fa0989ad-8e77-483c-93a2-ce76f542e610","Type":"ContainerStarted","Data":"50683fe2a869323608f2b96616f87d80e75a7739e0e4772c5568a969ac813c77"}
Jan 23 17:12:27 crc kubenswrapper[4606]: I0123 17:12:27.582859 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.582837777 podStartE2EDuration="2.582837777s" podCreationTimestamp="2026-01-23 17:12:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:12:27.580266095 +0000 UTC m=+1181.580458511" watchObservedRunningTime="2026-01-23 17:12:27.582837777 +0000 UTC m=+1181.583030183"
Jan 23 17:12:27 crc kubenswrapper[4606]: I0123 17:12:27.610490 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 23 17:12:27 crc kubenswrapper[4606]: I0123 17:12:27.624298 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 23 17:12:27 crc kubenswrapper[4606]: I0123 17:12:27.646011 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 23 17:12:27 crc kubenswrapper[4606]: E0123 17:12:27.646485 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba10cfdb-879f-426d-bda9-60cbd46e449c" containerName="nova-api-log"
Jan 23 17:12:27 crc kubenswrapper[4606]: I0123 17:12:27.646501 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba10cfdb-879f-426d-bda9-60cbd46e449c" containerName="nova-api-log"
Jan 23 17:12:27 crc kubenswrapper[4606]: E0123 17:12:27.646526 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba10cfdb-879f-426d-bda9-60cbd46e449c" containerName="nova-api-api"
Jan 23 17:12:27 crc kubenswrapper[4606]: I0123 17:12:27.646532 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba10cfdb-879f-426d-bda9-60cbd46e449c" containerName="nova-api-api"
Jan 23 17:12:27 crc kubenswrapper[4606]: I0123 17:12:27.651474 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba10cfdb-879f-426d-bda9-60cbd46e449c" containerName="nova-api-api"
Jan 23 17:12:27 crc kubenswrapper[4606]: I0123 17:12:27.651529 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba10cfdb-879f-426d-bda9-60cbd46e449c" containerName="nova-api-log"
Jan 23 17:12:27 crc kubenswrapper[4606]: I0123 17:12:27.652762 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 23 17:12:27 crc kubenswrapper[4606]: I0123 17:12:27.655227 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 23 17:12:27 crc kubenswrapper[4606]: I0123 17:12:27.663188 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 23 17:12:27 crc kubenswrapper[4606]: I0123 17:12:27.749189 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27bd06df-ec76-4e9d-92bc-fa3355de75fb-config-data\") pod \"nova-api-0\" (UID: \"27bd06df-ec76-4e9d-92bc-fa3355de75fb\") " pod="openstack/nova-api-0"
Jan 23 17:12:27 crc kubenswrapper[4606]: I0123 17:12:27.749237 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h9qg\" (UniqueName: \"kubernetes.io/projected/27bd06df-ec76-4e9d-92bc-fa3355de75fb-kube-api-access-6h9qg\") pod \"nova-api-0\" (UID: \"27bd06df-ec76-4e9d-92bc-fa3355de75fb\") " pod="openstack/nova-api-0"
Jan 23 17:12:27 crc kubenswrapper[4606]: I0123 17:12:27.749265 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27bd06df-ec76-4e9d-92bc-fa3355de75fb-logs\") pod \"nova-api-0\" (UID: \"27bd06df-ec76-4e9d-92bc-fa3355de75fb\") " pod="openstack/nova-api-0"
Jan 23 17:12:27 crc kubenswrapper[4606]: I0123 17:12:27.749297 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27bd06df-ec76-4e9d-92bc-fa3355de75fb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"27bd06df-ec76-4e9d-92bc-fa3355de75fb\") " pod="openstack/nova-api-0"
Jan 23 17:12:27 crc kubenswrapper[4606]: I0123 17:12:27.851634 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27bd06df-ec76-4e9d-92bc-fa3355de75fb-logs\") pod \"nova-api-0\" (UID: \"27bd06df-ec76-4e9d-92bc-fa3355de75fb\") " pod="openstack/nova-api-0"
Jan 23 17:12:27 crc kubenswrapper[4606]: I0123 17:12:27.852138 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27bd06df-ec76-4e9d-92bc-fa3355de75fb-logs\") pod \"nova-api-0\" (UID: \"27bd06df-ec76-4e9d-92bc-fa3355de75fb\") " pod="openstack/nova-api-0"
Jan 23 17:12:27 crc kubenswrapper[4606]: I0123 17:12:27.852323 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27bd06df-ec76-4e9d-92bc-fa3355de75fb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"27bd06df-ec76-4e9d-92bc-fa3355de75fb\") " pod="openstack/nova-api-0"
Jan 23 17:12:27 crc kubenswrapper[4606]: I0123 17:12:27.853198 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27bd06df-ec76-4e9d-92bc-fa3355de75fb-config-data\") pod \"nova-api-0\" (UID: \"27bd06df-ec76-4e9d-92bc-fa3355de75fb\") " pod="openstack/nova-api-0"
Jan 23 17:12:27 crc kubenswrapper[4606]: I0123 17:12:27.853257 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h9qg\" (UniqueName: \"kubernetes.io/projected/27bd06df-ec76-4e9d-92bc-fa3355de75fb-kube-api-access-6h9qg\") pod \"nova-api-0\" (UID: \"27bd06df-ec76-4e9d-92bc-fa3355de75fb\") " pod="openstack/nova-api-0"
Jan 23 17:12:27 crc kubenswrapper[4606]: I0123 17:12:27.859341 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27bd06df-ec76-4e9d-92bc-fa3355de75fb-config-data\") pod \"nova-api-0\" (UID: \"27bd06df-ec76-4e9d-92bc-fa3355de75fb\") " pod="openstack/nova-api-0"
Jan 23 17:12:27 crc kubenswrapper[4606]: I0123 17:12:27.870489 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27bd06df-ec76-4e9d-92bc-fa3355de75fb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"27bd06df-ec76-4e9d-92bc-fa3355de75fb\") " pod="openstack/nova-api-0"
Jan 23 17:12:27 crc kubenswrapper[4606]: I0123 17:12:27.876219 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h9qg\" (UniqueName: \"kubernetes.io/projected/27bd06df-ec76-4e9d-92bc-fa3355de75fb-kube-api-access-6h9qg\") pod \"nova-api-0\" (UID: \"27bd06df-ec76-4e9d-92bc-fa3355de75fb\") " pod="openstack/nova-api-0"
Jan 23 17:12:27 crc kubenswrapper[4606]: I0123 17:12:27.981536 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 23 17:12:28 crc kubenswrapper[4606]: I0123 17:12:28.375094 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba10cfdb-879f-426d-bda9-60cbd46e449c" path="/var/lib/kubelet/pods/ba10cfdb-879f-426d-bda9-60cbd46e449c/volumes"
Jan 23 17:12:28 crc kubenswrapper[4606]: W0123 17:12:28.430475 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27bd06df_ec76_4e9d_92bc_fa3355de75fb.slice/crio-cab5081bf1c32972a5752493a8f886dfe2d00f21f121ccd2ff2cda2382fe0457 WatchSource:0}: Error finding container cab5081bf1c32972a5752493a8f886dfe2d00f21f121ccd2ff2cda2382fe0457: Status 404 returned error can't find the container with id cab5081bf1c32972a5752493a8f886dfe2d00f21f121ccd2ff2cda2382fe0457
Jan 23 17:12:28 crc kubenswrapper[4606]: I0123 17:12:28.430535 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 23 17:12:28 crc kubenswrapper[4606]: I0123 17:12:28.566517 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"27bd06df-ec76-4e9d-92bc-fa3355de75fb","Type":"ContainerStarted","Data":"cab5081bf1c32972a5752493a8f886dfe2d00f21f121ccd2ff2cda2382fe0457"}
Jan 23 17:12:29 crc kubenswrapper[4606]: I0123 17:12:29.577662 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"27bd06df-ec76-4e9d-92bc-fa3355de75fb","Type":"ContainerStarted","Data":"5bf4d24eeeb4efd62cf91d2f37a2ec2adc9c7732da09a75535bca03b6851594c"}
Jan 23 17:12:29 crc kubenswrapper[4606]: I0123 17:12:29.577965 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"27bd06df-ec76-4e9d-92bc-fa3355de75fb","Type":"ContainerStarted","Data":"e23d6ea0ebf2258aef2c6a47baaaea0c1e5fde668652b9543fd693ff88052391"}
Jan 23 17:12:29 crc kubenswrapper[4606]: I0123 17:12:29.604127 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.604100164 podStartE2EDuration="2.604100164s" podCreationTimestamp="2026-01-23 17:12:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:12:29.594172706 +0000 UTC m=+1183.594365122" watchObservedRunningTime="2026-01-23 17:12:29.604100164 +0000 UTC m=+1183.604292570"
Jan 23 17:12:30 crc kubenswrapper[4606]: I0123 17:12:30.929964 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 23 17:12:30 crc kubenswrapper[4606]: I0123 17:12:30.930802 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="49d23d39-bb62-4d74-9053-c98f530d7208" containerName="kube-state-metrics" containerID="cri-o://31f3abbc2c2193a69e7124013af038e0056497537d4bb6b7a3178ba1370de204" gracePeriod=30
Jan 23 17:12:30 crc kubenswrapper[4606]: I0123 17:12:30.963115 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 23 17:12:31 crc kubenswrapper[4606]: I0123 17:12:31.402817 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 23 17:12:31 crc kubenswrapper[4606]: I0123 17:12:31.520425 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9njtl\" (UniqueName: \"kubernetes.io/projected/49d23d39-bb62-4d74-9053-c98f530d7208-kube-api-access-9njtl\") pod \"49d23d39-bb62-4d74-9053-c98f530d7208\" (UID: \"49d23d39-bb62-4d74-9053-c98f530d7208\") "
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:12:31 crc kubenswrapper[4606]: I0123 17:12:31.593533 4606 generic.go:334] "Generic (PLEG): container finished" podID="49d23d39-bb62-4d74-9053-c98f530d7208" containerID="31f3abbc2c2193a69e7124013af038e0056497537d4bb6b7a3178ba1370de204" exitCode=2 Jan 23 17:12:31 crc kubenswrapper[4606]: I0123 17:12:31.593567 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"49d23d39-bb62-4d74-9053-c98f530d7208","Type":"ContainerDied","Data":"31f3abbc2c2193a69e7124013af038e0056497537d4bb6b7a3178ba1370de204"} Jan 23 17:12:31 crc kubenswrapper[4606]: I0123 17:12:31.593622 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"49d23d39-bb62-4d74-9053-c98f530d7208","Type":"ContainerDied","Data":"2c91bd93b4144148fdf2a77435349fecfb89e0c1078803bdac620f88688f6b1f"} Jan 23 17:12:31 crc kubenswrapper[4606]: I0123 17:12:31.593643 4606 scope.go:117] "RemoveContainer" containerID="31f3abbc2c2193a69e7124013af038e0056497537d4bb6b7a3178ba1370de204" Jan 23 17:12:31 crc kubenswrapper[4606]: I0123 17:12:31.593574 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 17:12:31 crc kubenswrapper[4606]: I0123 17:12:31.623191 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9njtl\" (UniqueName: \"kubernetes.io/projected/49d23d39-bb62-4d74-9053-c98f530d7208-kube-api-access-9njtl\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:31 crc kubenswrapper[4606]: I0123 17:12:31.623805 4606 scope.go:117] "RemoveContainer" containerID="31f3abbc2c2193a69e7124013af038e0056497537d4bb6b7a3178ba1370de204" Jan 23 17:12:31 crc kubenswrapper[4606]: E0123 17:12:31.625433 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31f3abbc2c2193a69e7124013af038e0056497537d4bb6b7a3178ba1370de204\": container with ID starting with 31f3abbc2c2193a69e7124013af038e0056497537d4bb6b7a3178ba1370de204 not found: ID does not exist" containerID="31f3abbc2c2193a69e7124013af038e0056497537d4bb6b7a3178ba1370de204" Jan 23 17:12:31 crc kubenswrapper[4606]: I0123 17:12:31.625495 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31f3abbc2c2193a69e7124013af038e0056497537d4bb6b7a3178ba1370de204"} err="failed to get container status \"31f3abbc2c2193a69e7124013af038e0056497537d4bb6b7a3178ba1370de204\": rpc error: code = NotFound desc = could not find container \"31f3abbc2c2193a69e7124013af038e0056497537d4bb6b7a3178ba1370de204\": container with ID starting with 31f3abbc2c2193a69e7124013af038e0056497537d4bb6b7a3178ba1370de204 not found: ID does not exist" Jan 23 17:12:31 crc kubenswrapper[4606]: I0123 17:12:31.635972 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 17:12:31 crc kubenswrapper[4606]: I0123 17:12:31.653493 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 17:12:31 crc kubenswrapper[4606]: I0123 17:12:31.667648 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 17:12:31 crc kubenswrapper[4606]: E0123 17:12:31.668180 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49d23d39-bb62-4d74-9053-c98f530d7208" containerName="kube-state-metrics" Jan 23 17:12:31 crc kubenswrapper[4606]: I0123 17:12:31.668202 4606 
state_mem.go:107] "Deleted CPUSet assignment" podUID="49d23d39-bb62-4d74-9053-c98f530d7208" containerName="kube-state-metrics" Jan 23 17:12:31 crc kubenswrapper[4606]: I0123 17:12:31.668427 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="49d23d39-bb62-4d74-9053-c98f530d7208" containerName="kube-state-metrics" Jan 23 17:12:31 crc kubenswrapper[4606]: I0123 17:12:31.669288 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 17:12:31 crc kubenswrapper[4606]: I0123 17:12:31.673517 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 23 17:12:31 crc kubenswrapper[4606]: I0123 17:12:31.673861 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 23 17:12:31 crc kubenswrapper[4606]: I0123 17:12:31.694607 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 17:12:31 crc kubenswrapper[4606]: I0123 17:12:31.847783 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/2124b3c9-bd82-4e02-980c-b3de98193833-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"2124b3c9-bd82-4e02-980c-b3de98193833\") " pod="openstack/kube-state-metrics-0" Jan 23 17:12:31 crc kubenswrapper[4606]: I0123 17:12:31.848538 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69l94\" (UniqueName: \"kubernetes.io/projected/2124b3c9-bd82-4e02-980c-b3de98193833-kube-api-access-69l94\") pod \"kube-state-metrics-0\" (UID: \"2124b3c9-bd82-4e02-980c-b3de98193833\") " pod="openstack/kube-state-metrics-0" Jan 23 17:12:31 crc kubenswrapper[4606]: I0123 17:12:31.848827 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2124b3c9-bd82-4e02-980c-b3de98193833-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"2124b3c9-bd82-4e02-980c-b3de98193833\") " pod="openstack/kube-state-metrics-0" Jan 23 17:12:31 crc kubenswrapper[4606]: I0123 17:12:31.849038 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/2124b3c9-bd82-4e02-980c-b3de98193833-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"2124b3c9-bd82-4e02-980c-b3de98193833\") " pod="openstack/kube-state-metrics-0" Jan 23 17:12:31 crc kubenswrapper[4606]: I0123 17:12:31.950440 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2124b3c9-bd82-4e02-980c-b3de98193833-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"2124b3c9-bd82-4e02-980c-b3de98193833\") " pod="openstack/kube-state-metrics-0" Jan 23 17:12:31 crc kubenswrapper[4606]: I0123 17:12:31.950502 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/2124b3c9-bd82-4e02-980c-b3de98193833-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"2124b3c9-bd82-4e02-980c-b3de98193833\") " pod="openstack/kube-state-metrics-0" Jan 23 17:12:31 crc kubenswrapper[4606]: I0123 17:12:31.950598 4606 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/2124b3c9-bd82-4e02-980c-b3de98193833-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"2124b3c9-bd82-4e02-980c-b3de98193833\") " pod="openstack/kube-state-metrics-0" Jan 23 17:12:31 crc kubenswrapper[4606]: I0123 17:12:31.950636 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69l94\" (UniqueName: \"kubernetes.io/projected/2124b3c9-bd82-4e02-980c-b3de98193833-kube-api-access-69l94\") pod \"kube-state-metrics-0\" (UID: \"2124b3c9-bd82-4e02-980c-b3de98193833\") " pod="openstack/kube-state-metrics-0" Jan 23 17:12:31 crc kubenswrapper[4606]: I0123 17:12:31.960183 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2124b3c9-bd82-4e02-980c-b3de98193833-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"2124b3c9-bd82-4e02-980c-b3de98193833\") " pod="openstack/kube-state-metrics-0" Jan 23 17:12:31 crc kubenswrapper[4606]: I0123 17:12:31.969550 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/2124b3c9-bd82-4e02-980c-b3de98193833-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"2124b3c9-bd82-4e02-980c-b3de98193833\") " pod="openstack/kube-state-metrics-0" Jan 23 17:12:31 crc kubenswrapper[4606]: I0123 17:12:31.975541 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/2124b3c9-bd82-4e02-980c-b3de98193833-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"2124b3c9-bd82-4e02-980c-b3de98193833\") " pod="openstack/kube-state-metrics-0" Jan 23 17:12:31 crc kubenswrapper[4606]: I0123 17:12:31.976367 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69l94\" (UniqueName: \"kubernetes.io/projected/2124b3c9-bd82-4e02-980c-b3de98193833-kube-api-access-69l94\") pod \"kube-state-metrics-0\" (UID: \"2124b3c9-bd82-4e02-980c-b3de98193833\") " pod="openstack/kube-state-metrics-0" Jan 23 17:12:31 crc kubenswrapper[4606]: I0123 17:12:31.988204 4606 util.go:30] "No sandbox for pod can be found. 
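Every record in this log has the same shape: a syslog prefix ("Jan 23 17:12:31 crc kubenswrapper[4606]:"), then a klog header (severity letter plus MMDD, wall-clock time, PID, source file:line) and a structured message. A regexp sketch for splitting the klog header out, with the format inferred from the lines above:

package main

import (
	"fmt"
	"regexp"
)

// klogLine matches the record shape seen throughout this log:
// severity (I/W/E) and MMDD, time, PID, file:line, then the message.
var klogLine = regexp.MustCompile(`([IWE])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w./]+:\d+)\] (.*)`)

func main() {
	rec := `E0123 17:12:31.668180 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49d23d39-bb62-4d74-9053-c98f530d7208"`
	if m := klogLine.FindStringSubmatch(rec); m != nil {
		fmt.Printf("severity=%s source=%s msg=%s\n", m[1], m[5], m[6])
	}
}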
Jan 23 17:12:31 crc kubenswrapper[4606]: I0123 17:12:31.988204 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 23 17:12:32 crc kubenswrapper[4606]: I0123 17:12:32.375288 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49d23d39-bb62-4d74-9053-c98f530d7208" path="/var/lib/kubelet/pods/49d23d39-bb62-4d74-9053-c98f530d7208/volumes"
Jan 23 17:12:32 crc kubenswrapper[4606]: I0123 17:12:32.420557 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 23 17:12:32 crc kubenswrapper[4606]: I0123 17:12:32.603295 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2124b3c9-bd82-4e02-980c-b3de98193833","Type":"ContainerStarted","Data":"3b945f22ffd072a205afb8cf109a1971148b2eeb6fd8fa647ee0f4c070d28e7d"}
Jan 23 17:12:33 crc kubenswrapper[4606]: I0123 17:12:33.043073 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 17:12:33 crc kubenswrapper[4606]: I0123 17:12:33.043659 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="40043888-693b-4833-9dad-f05ece6be64e" containerName="ceilometer-central-agent" containerID="cri-o://728ac3c9b9e0f316af0c98392ce77a53d0d9da6f57ee21a0a0a0ec7a17dc4888" gracePeriod=30
Jan 23 17:12:33 crc kubenswrapper[4606]: I0123 17:12:33.044150 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="40043888-693b-4833-9dad-f05ece6be64e" containerName="proxy-httpd" containerID="cri-o://c0771ea616e41aea8c3b76e26845bb3cdff925d1ab3febbd42ed4b7087eb583b" gracePeriod=30
Jan 23 17:12:33 crc kubenswrapper[4606]: I0123 17:12:33.044208 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="40043888-693b-4833-9dad-f05ece6be64e" containerName="sg-core" containerID="cri-o://63e93828671727c5daa0d97e2bc90426185b4304b5e919661054b05fa5e9766f" gracePeriod=30
Jan 23 17:12:33 crc kubenswrapper[4606]: I0123 17:12:33.044252 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="40043888-693b-4833-9dad-f05ece6be64e" containerName="ceilometer-notification-agent" containerID="cri-o://da43865ba3228a2447c1b80fb4e3fa6ec17472a2b9391290eb643f8b89157d70" gracePeriod=30
Jan 23 17:12:33 crc kubenswrapper[4606]: I0123 17:12:33.615993 4606 generic.go:334] "Generic (PLEG): container finished" podID="40043888-693b-4833-9dad-f05ece6be64e" containerID="c0771ea616e41aea8c3b76e26845bb3cdff925d1ab3febbd42ed4b7087eb583b" exitCode=0
Jan 23 17:12:33 crc kubenswrapper[4606]: I0123 17:12:33.616024 4606 generic.go:334] "Generic (PLEG): container finished" podID="40043888-693b-4833-9dad-f05ece6be64e" containerID="63e93828671727c5daa0d97e2bc90426185b4304b5e919661054b05fa5e9766f" exitCode=2
Jan 23 17:12:33 crc kubenswrapper[4606]: I0123 17:12:33.616031 4606 generic.go:334] "Generic (PLEG): container finished" podID="40043888-693b-4833-9dad-f05ece6be64e" containerID="728ac3c9b9e0f316af0c98392ce77a53d0d9da6f57ee21a0a0a0ec7a17dc4888" exitCode=0
Jan 23 17:12:33 crc kubenswrapper[4606]: I0123 17:12:33.616091 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40043888-693b-4833-9dad-f05ece6be64e","Type":"ContainerDied","Data":"c0771ea616e41aea8c3b76e26845bb3cdff925d1ab3febbd42ed4b7087eb583b"}
Jan 23 17:12:33 crc kubenswrapper[4606]: I0123 17:12:33.616116 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40043888-693b-4833-9dad-f05ece6be64e","Type":"ContainerDied","Data":"63e93828671727c5daa0d97e2bc90426185b4304b5e919661054b05fa5e9766f"}
Jan 23 17:12:33 crc kubenswrapper[4606]: I0123 17:12:33.616129 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40043888-693b-4833-9dad-f05ece6be64e","Type":"ContainerDied","Data":"728ac3c9b9e0f316af0c98392ce77a53d0d9da6f57ee21a0a0a0ec7a17dc4888"}
Jan 23 17:12:33 crc kubenswrapper[4606]: I0123 17:12:33.617731 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2124b3c9-bd82-4e02-980c-b3de98193833","Type":"ContainerStarted","Data":"a56f01538f7c230142f1b2d0281f566837d9d745c734eddd40bf1139b6b918a0"}
Jan 23 17:12:33 crc kubenswrapper[4606]: I0123 17:12:33.618814 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 23 17:12:33 crc kubenswrapper[4606]: I0123 17:12:33.633718 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.235873151 podStartE2EDuration="2.63369582s" podCreationTimestamp="2026-01-23 17:12:31 +0000 UTC" firstStartedPulling="2026-01-23 17:12:32.430838447 +0000 UTC m=+1186.431030853" lastFinishedPulling="2026-01-23 17:12:32.828661116 +0000 UTC m=+1186.828853522" observedRunningTime="2026-01-23 17:12:33.632663386 +0000 UTC m=+1187.632855802" watchObservedRunningTime="2026-01-23 17:12:33.63369582 +0000 UTC m=+1187.633888226"
Jan 23 17:12:33 crc kubenswrapper[4606]: I0123 17:12:33.962348 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Jan 23 17:12:35 crc kubenswrapper[4606]: I0123 17:12:35.381697 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 23 17:12:35 crc kubenswrapper[4606]: I0123 17:12:35.534012 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40043888-693b-4833-9dad-f05ece6be64e-combined-ca-bundle\") pod \"40043888-693b-4833-9dad-f05ece6be64e\" (UID: \"40043888-693b-4833-9dad-f05ece6be64e\") "
Jan 23 17:12:35 crc kubenswrapper[4606]: I0123 17:12:35.534081 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40043888-693b-4833-9dad-f05ece6be64e-scripts\") pod \"40043888-693b-4833-9dad-f05ece6be64e\" (UID: \"40043888-693b-4833-9dad-f05ece6be64e\") "
Jan 23 17:12:35 crc kubenswrapper[4606]: I0123 17:12:35.534109 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40043888-693b-4833-9dad-f05ece6be64e-log-httpd\") pod \"40043888-693b-4833-9dad-f05ece6be64e\" (UID: \"40043888-693b-4833-9dad-f05ece6be64e\") "
Jan 23 17:12:35 crc kubenswrapper[4606]: I0123 17:12:35.534249 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/40043888-693b-4833-9dad-f05ece6be64e-sg-core-conf-yaml\") pod \"40043888-693b-4833-9dad-f05ece6be64e\" (UID: \"40043888-693b-4833-9dad-f05ece6be64e\") "
Jan 23 17:12:35 crc kubenswrapper[4606]: I0123 17:12:35.534326 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40043888-693b-4833-9dad-f05ece6be64e-run-httpd\") pod \"40043888-693b-4833-9dad-f05ece6be64e\" (UID: \"40043888-693b-4833-9dad-f05ece6be64e\") "
Jan 23 17:12:35 crc kubenswrapper[4606]: I0123 17:12:35.534358 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40043888-693b-4833-9dad-f05ece6be64e-config-data\") pod \"40043888-693b-4833-9dad-f05ece6be64e\" (UID: \"40043888-693b-4833-9dad-f05ece6be64e\") "
Jan 23 17:12:35 crc kubenswrapper[4606]: I0123 17:12:35.534404 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stkqx\" (UniqueName: \"kubernetes.io/projected/40043888-693b-4833-9dad-f05ece6be64e-kube-api-access-stkqx\") pod \"40043888-693b-4833-9dad-f05ece6be64e\" (UID: \"40043888-693b-4833-9dad-f05ece6be64e\") "
Jan 23 17:12:35 crc kubenswrapper[4606]: I0123 17:12:35.534974 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40043888-693b-4833-9dad-f05ece6be64e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "40043888-693b-4833-9dad-f05ece6be64e" (UID: "40043888-693b-4833-9dad-f05ece6be64e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 17:12:35 crc kubenswrapper[4606]: I0123 17:12:35.535158 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40043888-693b-4833-9dad-f05ece6be64e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "40043888-693b-4833-9dad-f05ece6be64e" (UID: "40043888-693b-4833-9dad-f05ece6be64e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 17:12:35 crc kubenswrapper[4606]: I0123 17:12:35.539668 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40043888-693b-4833-9dad-f05ece6be64e-kube-api-access-stkqx" (OuterVolumeSpecName: "kube-api-access-stkqx") pod "40043888-693b-4833-9dad-f05ece6be64e" (UID: "40043888-693b-4833-9dad-f05ece6be64e"). InnerVolumeSpecName "kube-api-access-stkqx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 17:12:35 crc kubenswrapper[4606]: I0123 17:12:35.551971 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40043888-693b-4833-9dad-f05ece6be64e-scripts" (OuterVolumeSpecName: "scripts") pod "40043888-693b-4833-9dad-f05ece6be64e" (UID: "40043888-693b-4833-9dad-f05ece6be64e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 17:12:35 crc kubenswrapper[4606]: I0123 17:12:35.569565 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40043888-693b-4833-9dad-f05ece6be64e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "40043888-693b-4833-9dad-f05ece6be64e" (UID: "40043888-693b-4833-9dad-f05ece6be64e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 17:12:35 crc kubenswrapper[4606]: I0123 17:12:35.636330 4606 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/40043888-693b-4833-9dad-f05ece6be64e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 23 17:12:35 crc kubenswrapper[4606]: I0123 17:12:35.636359 4606 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40043888-693b-4833-9dad-f05ece6be64e-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 23 17:12:35 crc kubenswrapper[4606]: I0123 17:12:35.636368 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stkqx\" (UniqueName: \"kubernetes.io/projected/40043888-693b-4833-9dad-f05ece6be64e-kube-api-access-stkqx\") on node \"crc\" DevicePath \"\""
Jan 23 17:12:35 crc kubenswrapper[4606]: I0123 17:12:35.636378 4606 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40043888-693b-4833-9dad-f05ece6be64e-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 17:12:35 crc kubenswrapper[4606]: I0123 17:12:35.636385 4606 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40043888-693b-4833-9dad-f05ece6be64e-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 23 17:12:35 crc kubenswrapper[4606]: I0123 17:12:35.643474 4606 generic.go:334] "Generic (PLEG): container finished" podID="40043888-693b-4833-9dad-f05ece6be64e" containerID="da43865ba3228a2447c1b80fb4e3fa6ec17472a2b9391290eb643f8b89157d70" exitCode=0
Jan 23 17:12:35 crc kubenswrapper[4606]: I0123 17:12:35.643558 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 23 17:12:35 crc kubenswrapper[4606]: I0123 17:12:35.643555 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40043888-693b-4833-9dad-f05ece6be64e","Type":"ContainerDied","Data":"da43865ba3228a2447c1b80fb4e3fa6ec17472a2b9391290eb643f8b89157d70"}
Jan 23 17:12:35 crc kubenswrapper[4606]: I0123 17:12:35.643625 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40043888-693b-4833-9dad-f05ece6be64e","Type":"ContainerDied","Data":"ab343f77667faf0827559801643c7037e1e2c3214ab4fb577a24a9cb0fd13632"}
Jan 23 17:12:35 crc kubenswrapper[4606]: I0123 17:12:35.643652 4606 scope.go:117] "RemoveContainer" containerID="c0771ea616e41aea8c3b76e26845bb3cdff925d1ab3febbd42ed4b7087eb583b"
Jan 23 17:12:35 crc kubenswrapper[4606]: I0123 17:12:35.646453 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40043888-693b-4833-9dad-f05ece6be64e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40043888-693b-4833-9dad-f05ece6be64e" (UID: "40043888-693b-4833-9dad-f05ece6be64e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 17:12:35 crc kubenswrapper[4606]: I0123 17:12:35.674069 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40043888-693b-4833-9dad-f05ece6be64e-config-data" (OuterVolumeSpecName: "config-data") pod "40043888-693b-4833-9dad-f05ece6be64e" (UID: "40043888-693b-4833-9dad-f05ece6be64e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 17:12:35 crc kubenswrapper[4606]: I0123 17:12:35.674382 4606 scope.go:117] "RemoveContainer" containerID="63e93828671727c5daa0d97e2bc90426185b4304b5e919661054b05fa5e9766f"
Jan 23 17:12:35 crc kubenswrapper[4606]: I0123 17:12:35.692080 4606 scope.go:117] "RemoveContainer" containerID="da43865ba3228a2447c1b80fb4e3fa6ec17472a2b9391290eb643f8b89157d70"
Jan 23 17:12:35 crc kubenswrapper[4606]: I0123 17:12:35.715693 4606 scope.go:117] "RemoveContainer" containerID="728ac3c9b9e0f316af0c98392ce77a53d0d9da6f57ee21a0a0a0ec7a17dc4888"
Jan 23 17:12:35 crc kubenswrapper[4606]: I0123 17:12:35.739647 4606 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40043888-693b-4833-9dad-f05ece6be64e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 17:12:35 crc kubenswrapper[4606]: I0123 17:12:35.739680 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40043888-693b-4833-9dad-f05ece6be64e-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 17:12:35 crc kubenswrapper[4606]: I0123 17:12:35.746013 4606 scope.go:117] "RemoveContainer" containerID="c0771ea616e41aea8c3b76e26845bb3cdff925d1ab3febbd42ed4b7087eb583b"
Jan 23 17:12:35 crc kubenswrapper[4606]: E0123 17:12:35.746501 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0771ea616e41aea8c3b76e26845bb3cdff925d1ab3febbd42ed4b7087eb583b\": container with ID starting with c0771ea616e41aea8c3b76e26845bb3cdff925d1ab3febbd42ed4b7087eb583b not found: ID does not exist" containerID="c0771ea616e41aea8c3b76e26845bb3cdff925d1ab3febbd42ed4b7087eb583b"
Jan 23 17:12:35 crc kubenswrapper[4606]: I0123 17:12:35.746528 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0771ea616e41aea8c3b76e26845bb3cdff925d1ab3febbd42ed4b7087eb583b"} err="failed to get container status \"c0771ea616e41aea8c3b76e26845bb3cdff925d1ab3febbd42ed4b7087eb583b\": rpc error: code = NotFound desc = could not find container \"c0771ea616e41aea8c3b76e26845bb3cdff925d1ab3febbd42ed4b7087eb583b\": container with ID starting with c0771ea616e41aea8c3b76e26845bb3cdff925d1ab3febbd42ed4b7087eb583b not found: ID does not exist"
Jan 23 17:12:35 crc kubenswrapper[4606]: I0123 17:12:35.746546 4606 scope.go:117] "RemoveContainer" containerID="63e93828671727c5daa0d97e2bc90426185b4304b5e919661054b05fa5e9766f"
Jan 23 17:12:35 crc kubenswrapper[4606]: E0123 17:12:35.746796 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63e93828671727c5daa0d97e2bc90426185b4304b5e919661054b05fa5e9766f\": container with ID starting with 63e93828671727c5daa0d97e2bc90426185b4304b5e919661054b05fa5e9766f not found: ID does not exist" containerID="63e93828671727c5daa0d97e2bc90426185b4304b5e919661054b05fa5e9766f"
Jan 23 17:12:35 crc kubenswrapper[4606]: I0123 17:12:35.746821 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63e93828671727c5daa0d97e2bc90426185b4304b5e919661054b05fa5e9766f"} err="failed to get container status \"63e93828671727c5daa0d97e2bc90426185b4304b5e919661054b05fa5e9766f\": rpc error: code = NotFound desc = could not find container \"63e93828671727c5daa0d97e2bc90426185b4304b5e919661054b05fa5e9766f\": container with ID starting with 63e93828671727c5daa0d97e2bc90426185b4304b5e919661054b05fa5e9766f not found: ID does not exist"
Jan 23 17:12:35 crc kubenswrapper[4606]: I0123 17:12:35.746833 4606 scope.go:117] "RemoveContainer" containerID="da43865ba3228a2447c1b80fb4e3fa6ec17472a2b9391290eb643f8b89157d70"
Jan 23 17:12:35 crc kubenswrapper[4606]: E0123 17:12:35.747013 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da43865ba3228a2447c1b80fb4e3fa6ec17472a2b9391290eb643f8b89157d70\": container with ID starting with da43865ba3228a2447c1b80fb4e3fa6ec17472a2b9391290eb643f8b89157d70 not found: ID does not exist" containerID="da43865ba3228a2447c1b80fb4e3fa6ec17472a2b9391290eb643f8b89157d70"
Jan 23 17:12:35 crc kubenswrapper[4606]: I0123 17:12:35.747032 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da43865ba3228a2447c1b80fb4e3fa6ec17472a2b9391290eb643f8b89157d70"} err="failed to get container status \"da43865ba3228a2447c1b80fb4e3fa6ec17472a2b9391290eb643f8b89157d70\": rpc error: code = NotFound desc = could not find container \"da43865ba3228a2447c1b80fb4e3fa6ec17472a2b9391290eb643f8b89157d70\": container with ID starting with da43865ba3228a2447c1b80fb4e3fa6ec17472a2b9391290eb643f8b89157d70 not found: ID does not exist"
Jan 23 17:12:35 crc kubenswrapper[4606]: I0123 17:12:35.747045 4606 scope.go:117] "RemoveContainer" containerID="728ac3c9b9e0f316af0c98392ce77a53d0d9da6f57ee21a0a0a0ec7a17dc4888"
containerID="728ac3c9b9e0f316af0c98392ce77a53d0d9da6f57ee21a0a0a0ec7a17dc4888" Jan 23 17:12:35 crc kubenswrapper[4606]: I0123 17:12:35.747263 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"728ac3c9b9e0f316af0c98392ce77a53d0d9da6f57ee21a0a0a0ec7a17dc4888"} err="failed to get container status \"728ac3c9b9e0f316af0c98392ce77a53d0d9da6f57ee21a0a0a0ec7a17dc4888\": rpc error: code = NotFound desc = could not find container \"728ac3c9b9e0f316af0c98392ce77a53d0d9da6f57ee21a0a0a0ec7a17dc4888\": container with ID starting with 728ac3c9b9e0f316af0c98392ce77a53d0d9da6f57ee21a0a0a0ec7a17dc4888 not found: ID does not exist" Jan 23 17:12:35 crc kubenswrapper[4606]: I0123 17:12:35.963069 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.002115 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.016445 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.041412 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.053491 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:12:36 crc kubenswrapper[4606]: E0123 17:12:36.054145 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40043888-693b-4833-9dad-f05ece6be64e" containerName="sg-core" Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.054167 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="40043888-693b-4833-9dad-f05ece6be64e" containerName="sg-core" Jan 23 17:12:36 crc kubenswrapper[4606]: E0123 17:12:36.054209 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40043888-693b-4833-9dad-f05ece6be64e" containerName="ceilometer-central-agent" Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.054219 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="40043888-693b-4833-9dad-f05ece6be64e" containerName="ceilometer-central-agent" Jan 23 17:12:36 crc kubenswrapper[4606]: E0123 17:12:36.054232 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40043888-693b-4833-9dad-f05ece6be64e" containerName="proxy-httpd" Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.054240 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="40043888-693b-4833-9dad-f05ece6be64e" containerName="proxy-httpd" Jan 23 17:12:36 crc kubenswrapper[4606]: E0123 17:12:36.054285 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40043888-693b-4833-9dad-f05ece6be64e" containerName="ceilometer-notification-agent" Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.054295 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="40043888-693b-4833-9dad-f05ece6be64e" containerName="ceilometer-notification-agent" Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.054546 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="40043888-693b-4833-9dad-f05ece6be64e" containerName="ceilometer-central-agent" Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.054575 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="40043888-693b-4833-9dad-f05ece6be64e" containerName="sg-core" Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.054625 4606 
Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.054644 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="40043888-693b-4833-9dad-f05ece6be64e" containerName="ceilometer-notification-agent"
Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.056945 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.061107 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.062204 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.063472 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.066092 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.248464 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ce44a83-6852-464a-bae6-c27135f55567-run-httpd\") pod \"ceilometer-0\" (UID: \"7ce44a83-6852-464a-bae6-c27135f55567\") " pod="openstack/ceilometer-0"
Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.248514 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ce44a83-6852-464a-bae6-c27135f55567-config-data\") pod \"ceilometer-0\" (UID: \"7ce44a83-6852-464a-bae6-c27135f55567\") " pod="openstack/ceilometer-0"
Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.248555 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ce44a83-6852-464a-bae6-c27135f55567-scripts\") pod \"ceilometer-0\" (UID: \"7ce44a83-6852-464a-bae6-c27135f55567\") " pod="openstack/ceilometer-0"
Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.248605 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ce44a83-6852-464a-bae6-c27135f55567-log-httpd\") pod \"ceilometer-0\" (UID: \"7ce44a83-6852-464a-bae6-c27135f55567\") " pod="openstack/ceilometer-0"
Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.249001 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce44a83-6852-464a-bae6-c27135f55567-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7ce44a83-6852-464a-bae6-c27135f55567\") " pod="openstack/ceilometer-0"
Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.249157 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65pqp\" (UniqueName: \"kubernetes.io/projected/7ce44a83-6852-464a-bae6-c27135f55567-kube-api-access-65pqp\") pod \"ceilometer-0\" (UID: \"7ce44a83-6852-464a-bae6-c27135f55567\") " pod="openstack/ceilometer-0"
Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.249221 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7ce44a83-6852-464a-bae6-c27135f55567-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7ce44a83-6852-464a-bae6-c27135f55567\") " pod="openstack/ceilometer-0"
Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.249369 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ce44a83-6852-464a-bae6-c27135f55567-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7ce44a83-6852-464a-bae6-c27135f55567\") " pod="openstack/ceilometer-0"
Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.350003 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ce44a83-6852-464a-bae6-c27135f55567-scripts\") pod \"ceilometer-0\" (UID: \"7ce44a83-6852-464a-bae6-c27135f55567\") " pod="openstack/ceilometer-0"
Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.350042 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ce44a83-6852-464a-bae6-c27135f55567-log-httpd\") pod \"ceilometer-0\" (UID: \"7ce44a83-6852-464a-bae6-c27135f55567\") " pod="openstack/ceilometer-0"
Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.350072 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce44a83-6852-464a-bae6-c27135f55567-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7ce44a83-6852-464a-bae6-c27135f55567\") " pod="openstack/ceilometer-0"
Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.350113 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65pqp\" (UniqueName: \"kubernetes.io/projected/7ce44a83-6852-464a-bae6-c27135f55567-kube-api-access-65pqp\") pod \"ceilometer-0\" (UID: \"7ce44a83-6852-464a-bae6-c27135f55567\") " pod="openstack/ceilometer-0"
Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.350150 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7ce44a83-6852-464a-bae6-c27135f55567-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7ce44a83-6852-464a-bae6-c27135f55567\") " pod="openstack/ceilometer-0"
Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.350195 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ce44a83-6852-464a-bae6-c27135f55567-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7ce44a83-6852-464a-bae6-c27135f55567\") " pod="openstack/ceilometer-0"
Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.350258 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ce44a83-6852-464a-bae6-c27135f55567-run-httpd\") pod \"ceilometer-0\" (UID: \"7ce44a83-6852-464a-bae6-c27135f55567\") " pod="openstack/ceilometer-0"
Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.350274 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ce44a83-6852-464a-bae6-c27135f55567-config-data\") pod \"ceilometer-0\" (UID: \"7ce44a83-6852-464a-bae6-c27135f55567\") " pod="openstack/ceilometer-0"
Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.350678 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ce44a83-6852-464a-bae6-c27135f55567-run-httpd\") pod \"ceilometer-0\" (UID: \"7ce44a83-6852-464a-bae6-c27135f55567\") " pod="openstack/ceilometer-0"
"MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ce44a83-6852-464a-bae6-c27135f55567-run-httpd\") pod \"ceilometer-0\" (UID: \"7ce44a83-6852-464a-bae6-c27135f55567\") " pod="openstack/ceilometer-0" Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.350435 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ce44a83-6852-464a-bae6-c27135f55567-log-httpd\") pod \"ceilometer-0\" (UID: \"7ce44a83-6852-464a-bae6-c27135f55567\") " pod="openstack/ceilometer-0" Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.355339 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ce44a83-6852-464a-bae6-c27135f55567-scripts\") pod \"ceilometer-0\" (UID: \"7ce44a83-6852-464a-bae6-c27135f55567\") " pod="openstack/ceilometer-0" Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.355481 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ce44a83-6852-464a-bae6-c27135f55567-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7ce44a83-6852-464a-bae6-c27135f55567\") " pod="openstack/ceilometer-0" Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.367024 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7ce44a83-6852-464a-bae6-c27135f55567-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7ce44a83-6852-464a-bae6-c27135f55567\") " pod="openstack/ceilometer-0" Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.367920 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ce44a83-6852-464a-bae6-c27135f55567-config-data\") pod \"ceilometer-0\" (UID: \"7ce44a83-6852-464a-bae6-c27135f55567\") " pod="openstack/ceilometer-0" Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.368480 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce44a83-6852-464a-bae6-c27135f55567-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7ce44a83-6852-464a-bae6-c27135f55567\") " pod="openstack/ceilometer-0" Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.368748 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65pqp\" (UniqueName: \"kubernetes.io/projected/7ce44a83-6852-464a-bae6-c27135f55567-kube-api-access-65pqp\") pod \"ceilometer-0\" (UID: \"7ce44a83-6852-464a-bae6-c27135f55567\") " pod="openstack/ceilometer-0" Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.375532 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.382790 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40043888-693b-4833-9dad-f05ece6be64e" path="/var/lib/kubelet/pods/40043888-693b-4833-9dad-f05ece6be64e/volumes" Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.687236 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 23 17:12:36 crc kubenswrapper[4606]: W0123 17:12:36.834950 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ce44a83_6852_464a_bae6_c27135f55567.slice/crio-fac393a951b4472b0ec64155e1387ec8ff4f7f0c0918fc77bb03bb7aff849dd0 WatchSource:0}: Error finding container fac393a951b4472b0ec64155e1387ec8ff4f7f0c0918fc77bb03bb7aff849dd0: Status 404 returned error can't find the container with id fac393a951b4472b0ec64155e1387ec8ff4f7f0c0918fc77bb03bb7aff849dd0 Jan 23 17:12:36 crc kubenswrapper[4606]: I0123 17:12:36.839333 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:12:37 crc kubenswrapper[4606]: I0123 17:12:37.679090 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ce44a83-6852-464a-bae6-c27135f55567","Type":"ContainerStarted","Data":"5392d77b9cfa4f931234cb0dc7e46b301c8605a6a99b0c45d82fd5bc4ac62c26"} Jan 23 17:12:37 crc kubenswrapper[4606]: I0123 17:12:37.679127 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ce44a83-6852-464a-bae6-c27135f55567","Type":"ContainerStarted","Data":"fac393a951b4472b0ec64155e1387ec8ff4f7f0c0918fc77bb03bb7aff849dd0"} Jan 23 17:12:37 crc kubenswrapper[4606]: I0123 17:12:37.982659 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 17:12:37 crc kubenswrapper[4606]: I0123 17:12:37.982726 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 17:12:38 crc kubenswrapper[4606]: I0123 17:12:38.688501 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ce44a83-6852-464a-bae6-c27135f55567","Type":"ContainerStarted","Data":"aa1fc3ede7d53584cc90f65f88f5464a315c4ef6995600a0d7000e09d4621b4d"} Jan 23 17:12:39 crc kubenswrapper[4606]: I0123 17:12:39.065796 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="27bd06df-ec76-4e9d-92bc-fa3355de75fb" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.194:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:12:39 crc kubenswrapper[4606]: I0123 17:12:39.066050 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="27bd06df-ec76-4e9d-92bc-fa3355de75fb" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.194:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:12:39 crc kubenswrapper[4606]: I0123 17:12:39.699096 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ce44a83-6852-464a-bae6-c27135f55567","Type":"ContainerStarted","Data":"c1d10a049184e19d6e1ad18e1455cf4e36198538a6580a62cf0a7619ac7f8bc2"} Jan 23 17:12:40 crc kubenswrapper[4606]: I0123 17:12:40.710068 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
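Note: the startup-probe failures above carry Go's standard net/http client-timeout error. Kubelet's HTTP prober issues a GET against the pod IP with a deadline taken from the probe's timeoutSeconds, and any request that cannot return headers in time fails in exactly this shape. A minimal reproduction (the URL is the pod address from the log; assume it is unreachable from wherever this runs):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Client-side deadline, analogous to the probe's timeoutSeconds.
	client := &http.Client{Timeout: 1 * time.Second}
	_, err := client.Get("http://10.217.0.194:8774/")
	if err != nil {
		// Prints: ... context deadline exceeded (Client.Timeout exceeded while awaiting headers)
		fmt.Println("Probe failed:", err)
	}
}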
pod="openstack/ceilometer-0" event={"ID":"7ce44a83-6852-464a-bae6-c27135f55567","Type":"ContainerStarted","Data":"9abdd2f93d643830e76bc6db0c39d46d31b5a4c6a5740d643c97efd6caf83368"} Jan 23 17:12:40 crc kubenswrapper[4606]: I0123 17:12:40.710748 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 17:12:40 crc kubenswrapper[4606]: I0123 17:12:40.741318 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.3795087710000002 podStartE2EDuration="5.741294083s" podCreationTimestamp="2026-01-23 17:12:35 +0000 UTC" firstStartedPulling="2026-01-23 17:12:36.838130389 +0000 UTC m=+1190.838322795" lastFinishedPulling="2026-01-23 17:12:40.199915701 +0000 UTC m=+1194.200108107" observedRunningTime="2026-01-23 17:12:40.736222121 +0000 UTC m=+1194.736414537" watchObservedRunningTime="2026-01-23 17:12:40.741294083 +0000 UTC m=+1194.741486489" Jan 23 17:12:42 crc kubenswrapper[4606]: I0123 17:12:42.106696 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 23 17:12:43 crc kubenswrapper[4606]: I0123 17:12:43.741959 4606 generic.go:334] "Generic (PLEG): container finished" podID="fa1b3621-0a0c-4e84-8364-abe06b2bc4c8" containerID="a458d8b582e950671fbfa8a4d5b629a0b68b6976404bbf4aa757a37a5c31eaea" exitCode=137 Jan 23 17:12:43 crc kubenswrapper[4606]: I0123 17:12:43.742546 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fa1b3621-0a0c-4e84-8364-abe06b2bc4c8","Type":"ContainerDied","Data":"a458d8b582e950671fbfa8a4d5b629a0b68b6976404bbf4aa757a37a5c31eaea"} Jan 23 17:12:43 crc kubenswrapper[4606]: I0123 17:12:43.744343 4606 generic.go:334] "Generic (PLEG): container finished" podID="ab482f4b-c0ec-4324-b7cf-1ad866ab83e9" containerID="54c6c49f29dceda7652d45f5097889cd07b5b19d2f7ba1d24b601f28876adc7a" exitCode=137 Jan 23 17:12:43 crc kubenswrapper[4606]: I0123 17:12:43.744368 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab482f4b-c0ec-4324-b7cf-1ad866ab83e9","Type":"ContainerDied","Data":"54c6c49f29dceda7652d45f5097889cd07b5b19d2f7ba1d24b601f28876adc7a"} Jan 23 17:12:43 crc kubenswrapper[4606]: I0123 17:12:43.826698 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 17:12:43 crc kubenswrapper[4606]: I0123 17:12:43.833550 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.006404 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa1b3621-0a0c-4e84-8364-abe06b2bc4c8-config-data\") pod \"fa1b3621-0a0c-4e84-8364-abe06b2bc4c8\" (UID: \"fa1b3621-0a0c-4e84-8364-abe06b2bc4c8\") " Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.006504 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab482f4b-c0ec-4324-b7cf-1ad866ab83e9-logs\") pod \"ab482f4b-c0ec-4324-b7cf-1ad866ab83e9\" (UID: \"ab482f4b-c0ec-4324-b7cf-1ad866ab83e9\") " Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.006546 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsm9b\" (UniqueName: \"kubernetes.io/projected/ab482f4b-c0ec-4324-b7cf-1ad866ab83e9-kube-api-access-rsm9b\") pod \"ab482f4b-c0ec-4324-b7cf-1ad866ab83e9\" (UID: \"ab482f4b-c0ec-4324-b7cf-1ad866ab83e9\") " Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.006611 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa1b3621-0a0c-4e84-8364-abe06b2bc4c8-combined-ca-bundle\") pod \"fa1b3621-0a0c-4e84-8364-abe06b2bc4c8\" (UID: \"fa1b3621-0a0c-4e84-8364-abe06b2bc4c8\") " Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.006650 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab482f4b-c0ec-4324-b7cf-1ad866ab83e9-combined-ca-bundle\") pod \"ab482f4b-c0ec-4324-b7cf-1ad866ab83e9\" (UID: \"ab482f4b-c0ec-4324-b7cf-1ad866ab83e9\") " Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.006727 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pz6lk\" (UniqueName: \"kubernetes.io/projected/fa1b3621-0a0c-4e84-8364-abe06b2bc4c8-kube-api-access-pz6lk\") pod \"fa1b3621-0a0c-4e84-8364-abe06b2bc4c8\" (UID: \"fa1b3621-0a0c-4e84-8364-abe06b2bc4c8\") " Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.006748 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab482f4b-c0ec-4324-b7cf-1ad866ab83e9-config-data\") pod \"ab482f4b-c0ec-4324-b7cf-1ad866ab83e9\" (UID: \"ab482f4b-c0ec-4324-b7cf-1ad866ab83e9\") " Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.006780 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab482f4b-c0ec-4324-b7cf-1ad866ab83e9-logs" (OuterVolumeSpecName: "logs") pod "ab482f4b-c0ec-4324-b7cf-1ad866ab83e9" (UID: "ab482f4b-c0ec-4324-b7cf-1ad866ab83e9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.007180 4606 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab482f4b-c0ec-4324-b7cf-1ad866ab83e9-logs\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.011716 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa1b3621-0a0c-4e84-8364-abe06b2bc4c8-kube-api-access-pz6lk" (OuterVolumeSpecName: "kube-api-access-pz6lk") pod "fa1b3621-0a0c-4e84-8364-abe06b2bc4c8" (UID: "fa1b3621-0a0c-4e84-8364-abe06b2bc4c8"). InnerVolumeSpecName "kube-api-access-pz6lk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.018747 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab482f4b-c0ec-4324-b7cf-1ad866ab83e9-kube-api-access-rsm9b" (OuterVolumeSpecName: "kube-api-access-rsm9b") pod "ab482f4b-c0ec-4324-b7cf-1ad866ab83e9" (UID: "ab482f4b-c0ec-4324-b7cf-1ad866ab83e9"). InnerVolumeSpecName "kube-api-access-rsm9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.033631 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa1b3621-0a0c-4e84-8364-abe06b2bc4c8-config-data" (OuterVolumeSpecName: "config-data") pod "fa1b3621-0a0c-4e84-8364-abe06b2bc4c8" (UID: "fa1b3621-0a0c-4e84-8364-abe06b2bc4c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.039791 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab482f4b-c0ec-4324-b7cf-1ad866ab83e9-config-data" (OuterVolumeSpecName: "config-data") pod "ab482f4b-c0ec-4324-b7cf-1ad866ab83e9" (UID: "ab482f4b-c0ec-4324-b7cf-1ad866ab83e9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.041861 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa1b3621-0a0c-4e84-8364-abe06b2bc4c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa1b3621-0a0c-4e84-8364-abe06b2bc4c8" (UID: "fa1b3621-0a0c-4e84-8364-abe06b2bc4c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.054867 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab482f4b-c0ec-4324-b7cf-1ad866ab83e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab482f4b-c0ec-4324-b7cf-1ad866ab83e9" (UID: "ab482f4b-c0ec-4324-b7cf-1ad866ab83e9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.108845 4606 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab482f4b-c0ec-4324-b7cf-1ad866ab83e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.108888 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pz6lk\" (UniqueName: \"kubernetes.io/projected/fa1b3621-0a0c-4e84-8364-abe06b2bc4c8-kube-api-access-pz6lk\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.108902 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab482f4b-c0ec-4324-b7cf-1ad866ab83e9-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.108914 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa1b3621-0a0c-4e84-8364-abe06b2bc4c8-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.108926 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsm9b\" (UniqueName: \"kubernetes.io/projected/ab482f4b-c0ec-4324-b7cf-1ad866ab83e9-kube-api-access-rsm9b\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.108938 4606 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa1b3621-0a0c-4e84-8364-abe06b2bc4c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.756060 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fa1b3621-0a0c-4e84-8364-abe06b2bc4c8","Type":"ContainerDied","Data":"18969011dd8115f1d2090d9cf4db81f4de425c4989c1c9690cc29512cff25d06"} Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.756454 4606 scope.go:117] "RemoveContainer" containerID="a458d8b582e950671fbfa8a4d5b629a0b68b6976404bbf4aa757a37a5c31eaea" Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.756494 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.759308 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab482f4b-c0ec-4324-b7cf-1ad866ab83e9","Type":"ContainerDied","Data":"b4d2b1cf0d4e3560c67ac4f3cb24656d20bd871a9f7768b0cd2478aa7fb357cb"} Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.759418 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.786860 4606 scope.go:117] "RemoveContainer" containerID="54c6c49f29dceda7652d45f5097889cd07b5b19d2f7ba1d24b601f28876adc7a" Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.790505 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.802274 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.821947 4606 scope.go:117] "RemoveContainer" containerID="23282f0a51e2fd2d447d8b236935dbc36e298fcac4d384dea1370f3ae3629bbd" Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.826436 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.842329 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.860211 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 23 17:12:44 crc kubenswrapper[4606]: E0123 17:12:44.863447 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa1b3621-0a0c-4e84-8364-abe06b2bc4c8" containerName="nova-cell1-novncproxy-novncproxy" Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.863473 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa1b3621-0a0c-4e84-8364-abe06b2bc4c8" containerName="nova-cell1-novncproxy-novncproxy" Jan 23 17:12:44 crc kubenswrapper[4606]: E0123 17:12:44.863500 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab482f4b-c0ec-4324-b7cf-1ad866ab83e9" containerName="nova-metadata-log" Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.863506 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab482f4b-c0ec-4324-b7cf-1ad866ab83e9" containerName="nova-metadata-log" Jan 23 17:12:44 crc kubenswrapper[4606]: E0123 17:12:44.863538 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab482f4b-c0ec-4324-b7cf-1ad866ab83e9" containerName="nova-metadata-metadata" Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.863547 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab482f4b-c0ec-4324-b7cf-1ad866ab83e9" containerName="nova-metadata-metadata" Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.863813 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab482f4b-c0ec-4324-b7cf-1ad866ab83e9" containerName="nova-metadata-metadata" Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.863834 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab482f4b-c0ec-4324-b7cf-1ad866ab83e9" containerName="nova-metadata-log" Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.863845 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa1b3621-0a0c-4e84-8364-abe06b2bc4c8" containerName="nova-cell1-novncproxy-novncproxy" Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.864769 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.865506 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.868690 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.871915 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.876570 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.877735 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.882676 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.882978 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.882873 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 23 17:12:44 crc kubenswrapper[4606]: I0123 17:12:44.884467 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 17:12:45 crc kubenswrapper[4606]: I0123 17:12:45.026223 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a532327-312c-4924-894a-dc9058bb09f8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5a532327-312c-4924-894a-dc9058bb09f8\") " pod="openstack/nova-metadata-0" Jan 23 17:12:45 crc kubenswrapper[4606]: I0123 17:12:45.026581 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8920b95e-6d88-4017-bb76-79677d2bc880-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8920b95e-6d88-4017-bb76-79677d2bc880\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 17:12:45 crc kubenswrapper[4606]: I0123 17:12:45.026752 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8920b95e-6d88-4017-bb76-79677d2bc880-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8920b95e-6d88-4017-bb76-79677d2bc880\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 17:12:45 crc kubenswrapper[4606]: I0123 17:12:45.026817 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a532327-312c-4924-894a-dc9058bb09f8-config-data\") pod \"nova-metadata-0\" (UID: \"5a532327-312c-4924-894a-dc9058bb09f8\") " pod="openstack/nova-metadata-0" Jan 23 17:12:45 crc kubenswrapper[4606]: I0123 17:12:45.027008 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a532327-312c-4924-894a-dc9058bb09f8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5a532327-312c-4924-894a-dc9058bb09f8\") " 
pod="openstack/nova-metadata-0" Jan 23 17:12:45 crc kubenswrapper[4606]: I0123 17:12:45.027043 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8920b95e-6d88-4017-bb76-79677d2bc880-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8920b95e-6d88-4017-bb76-79677d2bc880\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 17:12:45 crc kubenswrapper[4606]: I0123 17:12:45.027074 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjfgw\" (UniqueName: \"kubernetes.io/projected/5a532327-312c-4924-894a-dc9058bb09f8-kube-api-access-gjfgw\") pod \"nova-metadata-0\" (UID: \"5a532327-312c-4924-894a-dc9058bb09f8\") " pod="openstack/nova-metadata-0" Jan 23 17:12:45 crc kubenswrapper[4606]: I0123 17:12:45.027115 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj9h9\" (UniqueName: \"kubernetes.io/projected/8920b95e-6d88-4017-bb76-79677d2bc880-kube-api-access-vj9h9\") pod \"nova-cell1-novncproxy-0\" (UID: \"8920b95e-6d88-4017-bb76-79677d2bc880\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 17:12:45 crc kubenswrapper[4606]: I0123 17:12:45.027186 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8920b95e-6d88-4017-bb76-79677d2bc880-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8920b95e-6d88-4017-bb76-79677d2bc880\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 17:12:45 crc kubenswrapper[4606]: I0123 17:12:45.027350 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a532327-312c-4924-894a-dc9058bb09f8-logs\") pod \"nova-metadata-0\" (UID: \"5a532327-312c-4924-894a-dc9058bb09f8\") " pod="openstack/nova-metadata-0" Jan 23 17:12:45 crc kubenswrapper[4606]: I0123 17:12:45.129702 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a532327-312c-4924-894a-dc9058bb09f8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5a532327-312c-4924-894a-dc9058bb09f8\") " pod="openstack/nova-metadata-0" Jan 23 17:12:45 crc kubenswrapper[4606]: I0123 17:12:45.129783 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8920b95e-6d88-4017-bb76-79677d2bc880-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8920b95e-6d88-4017-bb76-79677d2bc880\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 17:12:45 crc kubenswrapper[4606]: I0123 17:12:45.129829 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8920b95e-6d88-4017-bb76-79677d2bc880-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8920b95e-6d88-4017-bb76-79677d2bc880\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 17:12:45 crc kubenswrapper[4606]: I0123 17:12:45.129853 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a532327-312c-4924-894a-dc9058bb09f8-config-data\") pod \"nova-metadata-0\" (UID: \"5a532327-312c-4924-894a-dc9058bb09f8\") " pod="openstack/nova-metadata-0" Jan 23 17:12:45 crc kubenswrapper[4606]: I0123 
17:12:45.129930 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a532327-312c-4924-894a-dc9058bb09f8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5a532327-312c-4924-894a-dc9058bb09f8\") " pod="openstack/nova-metadata-0" Jan 23 17:12:45 crc kubenswrapper[4606]: I0123 17:12:45.129971 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8920b95e-6d88-4017-bb76-79677d2bc880-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8920b95e-6d88-4017-bb76-79677d2bc880\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 17:12:45 crc kubenswrapper[4606]: I0123 17:12:45.130004 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjfgw\" (UniqueName: \"kubernetes.io/projected/5a532327-312c-4924-894a-dc9058bb09f8-kube-api-access-gjfgw\") pod \"nova-metadata-0\" (UID: \"5a532327-312c-4924-894a-dc9058bb09f8\") " pod="openstack/nova-metadata-0" Jan 23 17:12:45 crc kubenswrapper[4606]: I0123 17:12:45.130032 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj9h9\" (UniqueName: \"kubernetes.io/projected/8920b95e-6d88-4017-bb76-79677d2bc880-kube-api-access-vj9h9\") pod \"nova-cell1-novncproxy-0\" (UID: \"8920b95e-6d88-4017-bb76-79677d2bc880\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 17:12:45 crc kubenswrapper[4606]: I0123 17:12:45.130087 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8920b95e-6d88-4017-bb76-79677d2bc880-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8920b95e-6d88-4017-bb76-79677d2bc880\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 17:12:45 crc kubenswrapper[4606]: I0123 17:12:45.130191 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a532327-312c-4924-894a-dc9058bb09f8-logs\") pod \"nova-metadata-0\" (UID: \"5a532327-312c-4924-894a-dc9058bb09f8\") " pod="openstack/nova-metadata-0" Jan 23 17:12:45 crc kubenswrapper[4606]: I0123 17:12:45.131745 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a532327-312c-4924-894a-dc9058bb09f8-logs\") pod \"nova-metadata-0\" (UID: \"5a532327-312c-4924-894a-dc9058bb09f8\") " pod="openstack/nova-metadata-0" Jan 23 17:12:45 crc kubenswrapper[4606]: I0123 17:12:45.136746 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8920b95e-6d88-4017-bb76-79677d2bc880-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8920b95e-6d88-4017-bb76-79677d2bc880\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 17:12:45 crc kubenswrapper[4606]: I0123 17:12:45.140679 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8920b95e-6d88-4017-bb76-79677d2bc880-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8920b95e-6d88-4017-bb76-79677d2bc880\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 17:12:45 crc kubenswrapper[4606]: I0123 17:12:45.141668 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a532327-312c-4924-894a-dc9058bb09f8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"5a532327-312c-4924-894a-dc9058bb09f8\") " pod="openstack/nova-metadata-0" Jan 23 17:12:45 crc kubenswrapper[4606]: I0123 17:12:45.148479 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a532327-312c-4924-894a-dc9058bb09f8-config-data\") pod \"nova-metadata-0\" (UID: \"5a532327-312c-4924-894a-dc9058bb09f8\") " pod="openstack/nova-metadata-0" Jan 23 17:12:45 crc kubenswrapper[4606]: I0123 17:12:45.149013 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a532327-312c-4924-894a-dc9058bb09f8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5a532327-312c-4924-894a-dc9058bb09f8\") " pod="openstack/nova-metadata-0" Jan 23 17:12:45 crc kubenswrapper[4606]: I0123 17:12:45.153109 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8920b95e-6d88-4017-bb76-79677d2bc880-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8920b95e-6d88-4017-bb76-79677d2bc880\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 17:12:45 crc kubenswrapper[4606]: I0123 17:12:45.153264 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8920b95e-6d88-4017-bb76-79677d2bc880-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8920b95e-6d88-4017-bb76-79677d2bc880\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 17:12:45 crc kubenswrapper[4606]: I0123 17:12:45.158075 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj9h9\" (UniqueName: \"kubernetes.io/projected/8920b95e-6d88-4017-bb76-79677d2bc880-kube-api-access-vj9h9\") pod \"nova-cell1-novncproxy-0\" (UID: \"8920b95e-6d88-4017-bb76-79677d2bc880\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 17:12:45 crc kubenswrapper[4606]: I0123 17:12:45.159393 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjfgw\" (UniqueName: \"kubernetes.io/projected/5a532327-312c-4924-894a-dc9058bb09f8-kube-api-access-gjfgw\") pod \"nova-metadata-0\" (UID: \"5a532327-312c-4924-894a-dc9058bb09f8\") " pod="openstack/nova-metadata-0" Jan 23 17:12:45 crc kubenswrapper[4606]: I0123 17:12:45.187218 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 17:12:45 crc kubenswrapper[4606]: I0123 17:12:45.207109 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 17:12:45 crc kubenswrapper[4606]: I0123 17:12:45.681152 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 17:12:45 crc kubenswrapper[4606]: I0123 17:12:45.729338 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 17:12:45 crc kubenswrapper[4606]: W0123 17:12:45.739654 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8920b95e_6d88_4017_bb76_79677d2bc880.slice/crio-ef91101ad173d7c0a72a8de78729fc2c33fffe37543302cb2ac8c47d81dc2028 WatchSource:0}: Error finding container ef91101ad173d7c0a72a8de78729fc2c33fffe37543302cb2ac8c47d81dc2028: Status 404 returned error can't find the container with id ef91101ad173d7c0a72a8de78729fc2c33fffe37543302cb2ac8c47d81dc2028 Jan 23 17:12:45 crc kubenswrapper[4606]: I0123 17:12:45.778502 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8920b95e-6d88-4017-bb76-79677d2bc880","Type":"ContainerStarted","Data":"ef91101ad173d7c0a72a8de78729fc2c33fffe37543302cb2ac8c47d81dc2028"} Jan 23 17:12:45 crc kubenswrapper[4606]: I0123 17:12:45.784263 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5a532327-312c-4924-894a-dc9058bb09f8","Type":"ContainerStarted","Data":"b864b7d9263796390dc6b43ef734029f67fba3decb538d5615bce24a8db79f5e"} Jan 23 17:12:46 crc kubenswrapper[4606]: I0123 17:12:46.391865 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab482f4b-c0ec-4324-b7cf-1ad866ab83e9" path="/var/lib/kubelet/pods/ab482f4b-c0ec-4324-b7cf-1ad866ab83e9/volumes" Jan 23 17:12:46 crc kubenswrapper[4606]: I0123 17:12:46.392944 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa1b3621-0a0c-4e84-8364-abe06b2bc4c8" path="/var/lib/kubelet/pods/fa1b3621-0a0c-4e84-8364-abe06b2bc4c8/volumes" Jan 23 17:12:46 crc kubenswrapper[4606]: I0123 17:12:46.806257 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8920b95e-6d88-4017-bb76-79677d2bc880","Type":"ContainerStarted","Data":"08abba9e6886f392402804d610408c589fa921d5e3f489f30f348018d79430a8"} Jan 23 17:12:46 crc kubenswrapper[4606]: I0123 17:12:46.809855 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5a532327-312c-4924-894a-dc9058bb09f8","Type":"ContainerStarted","Data":"7a1adb4b4dcf7833d97f732bf5c325b520b069214015966462c12d29fe2135f6"} Jan 23 17:12:46 crc kubenswrapper[4606]: I0123 17:12:46.809929 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5a532327-312c-4924-894a-dc9058bb09f8","Type":"ContainerStarted","Data":"b396bd041e87bdebc6aeb6cfde1b7d0814dd7e1b873a758a711512ff91fb3c6d"} Jan 23 17:12:46 crc kubenswrapper[4606]: I0123 17:12:46.831457 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.831432027 podStartE2EDuration="2.831432027s" podCreationTimestamp="2026-01-23 17:12:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:12:46.827101273 +0000 UTC m=+1200.827293679" watchObservedRunningTime="2026-01-23 17:12:46.831432027 +0000 UTC m=+1200.831624433" Jan 23 17:12:46 crc kubenswrapper[4606]: I0123 
Jan 23 17:12:47 crc kubenswrapper[4606]: I0123 17:12:47.986120 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 23 17:12:47 crc kubenswrapper[4606]: I0123 17:12:47.986741 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 23 17:12:47 crc kubenswrapper[4606]: I0123 17:12:47.987635 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 23 17:12:47 crc kubenswrapper[4606]: I0123 17:12:47.996940 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 23 17:12:48 crc kubenswrapper[4606]: I0123 17:12:48.830456 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 23 17:12:48 crc kubenswrapper[4606]: I0123 17:12:48.834774 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 23 17:12:49 crc kubenswrapper[4606]: I0123 17:12:49.028112 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-tx5pf"]
Jan 23 17:12:49 crc kubenswrapper[4606]: I0123 17:12:49.029565 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-tx5pf"
Jan 23 17:12:49 crc kubenswrapper[4606]: I0123 17:12:49.053978 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-tx5pf"]
Jan 23 17:12:49 crc kubenswrapper[4606]: I0123 17:12:49.113098 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9088f411-fc76-423e-861f-9a6f26c44b25-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-tx5pf\" (UID: \"9088f411-fc76-423e-861f-9a6f26c44b25\") " pod="openstack/dnsmasq-dns-89c5cd4d5-tx5pf"
Jan 23 17:12:49 crc kubenswrapper[4606]: I0123 17:12:49.113160 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9088f411-fc76-423e-861f-9a6f26c44b25-config\") pod \"dnsmasq-dns-89c5cd4d5-tx5pf\" (UID: \"9088f411-fc76-423e-861f-9a6f26c44b25\") " pod="openstack/dnsmasq-dns-89c5cd4d5-tx5pf"
Jan 23 17:12:49 crc kubenswrapper[4606]: I0123 17:12:49.113186 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9088f411-fc76-423e-861f-9a6f26c44b25-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-tx5pf\" (UID: \"9088f411-fc76-423e-861f-9a6f26c44b25\") " pod="openstack/dnsmasq-dns-89c5cd4d5-tx5pf"
Jan 23 17:12:49 crc kubenswrapper[4606]: I0123 17:12:49.113221 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9088f411-fc76-423e-861f-9a6f26c44b25-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-tx5pf\" (UID: \"9088f411-fc76-423e-861f-9a6f26c44b25\") " pod="openstack/dnsmasq-dns-89c5cd4d5-tx5pf"
Jan 23 17:12:49 crc kubenswrapper[4606]: I0123 17:12:49.113318 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9088f411-fc76-423e-861f-9a6f26c44b25-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-tx5pf\" (UID: \"9088f411-fc76-423e-861f-9a6f26c44b25\") " pod="openstack/dnsmasq-dns-89c5cd4d5-tx5pf"
Jan 23 17:12:49 crc kubenswrapper[4606]: I0123 17:12:49.113342 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h294q\" (UniqueName: \"kubernetes.io/projected/9088f411-fc76-423e-861f-9a6f26c44b25-kube-api-access-h294q\") pod \"dnsmasq-dns-89c5cd4d5-tx5pf\" (UID: \"9088f411-fc76-423e-861f-9a6f26c44b25\") " pod="openstack/dnsmasq-dns-89c5cd4d5-tx5pf"
Jan 23 17:12:49 crc kubenswrapper[4606]: I0123 17:12:49.215006 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9088f411-fc76-423e-861f-9a6f26c44b25-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-tx5pf\" (UID: \"9088f411-fc76-423e-861f-9a6f26c44b25\") " pod="openstack/dnsmasq-dns-89c5cd4d5-tx5pf"
Jan 23 17:12:49 crc kubenswrapper[4606]: I0123 17:12:49.215055 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h294q\" (UniqueName: \"kubernetes.io/projected/9088f411-fc76-423e-861f-9a6f26c44b25-kube-api-access-h294q\") pod \"dnsmasq-dns-89c5cd4d5-tx5pf\" (UID: \"9088f411-fc76-423e-861f-9a6f26c44b25\") " pod="openstack/dnsmasq-dns-89c5cd4d5-tx5pf"
Jan 23 17:12:49 crc kubenswrapper[4606]: I0123 17:12:49.215084 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9088f411-fc76-423e-861f-9a6f26c44b25-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-tx5pf\" (UID: \"9088f411-fc76-423e-861f-9a6f26c44b25\") " pod="openstack/dnsmasq-dns-89c5cd4d5-tx5pf"
Jan 23 17:12:49 crc kubenswrapper[4606]: I0123 17:12:49.215119 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9088f411-fc76-423e-861f-9a6f26c44b25-config\") pod \"dnsmasq-dns-89c5cd4d5-tx5pf\" (UID: \"9088f411-fc76-423e-861f-9a6f26c44b25\") " pod="openstack/dnsmasq-dns-89c5cd4d5-tx5pf"
Jan 23 17:12:49 crc kubenswrapper[4606]: I0123 17:12:49.215140 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9088f411-fc76-423e-861f-9a6f26c44b25-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-tx5pf\" (UID: \"9088f411-fc76-423e-861f-9a6f26c44b25\") " pod="openstack/dnsmasq-dns-89c5cd4d5-tx5pf"
Jan 23 17:12:49 crc kubenswrapper[4606]: I0123 17:12:49.215170 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9088f411-fc76-423e-861f-9a6f26c44b25-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-tx5pf\" (UID: \"9088f411-fc76-423e-861f-9a6f26c44b25\") " pod="openstack/dnsmasq-dns-89c5cd4d5-tx5pf"
Jan 23 17:12:49 crc kubenswrapper[4606]: I0123 17:12:49.215987 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9088f411-fc76-423e-861f-9a6f26c44b25-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-tx5pf\" (UID: \"9088f411-fc76-423e-861f-9a6f26c44b25\") " pod="openstack/dnsmasq-dns-89c5cd4d5-tx5pf"
Jan 23 17:12:49 crc kubenswrapper[4606]: I0123 17:12:49.216742 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9088f411-fc76-423e-861f-9a6f26c44b25-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-tx5pf\" (UID: \"9088f411-fc76-423e-861f-9a6f26c44b25\") " pod="openstack/dnsmasq-dns-89c5cd4d5-tx5pf"
Jan 23 17:12:49 crc kubenswrapper[4606]: I0123 17:12:49.217220 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9088f411-fc76-423e-861f-9a6f26c44b25-config\") pod \"dnsmasq-dns-89c5cd4d5-tx5pf\" (UID: \"9088f411-fc76-423e-861f-9a6f26c44b25\") " pod="openstack/dnsmasq-dns-89c5cd4d5-tx5pf"
Jan 23 17:12:49 crc kubenswrapper[4606]: I0123 17:12:49.217723 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9088f411-fc76-423e-861f-9a6f26c44b25-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-tx5pf\" (UID: \"9088f411-fc76-423e-861f-9a6f26c44b25\") " pod="openstack/dnsmasq-dns-89c5cd4d5-tx5pf"
Jan 23 17:12:49 crc kubenswrapper[4606]: I0123 17:12:49.218204 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9088f411-fc76-423e-861f-9a6f26c44b25-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-tx5pf\" (UID: \"9088f411-fc76-423e-861f-9a6f26c44b25\") " pod="openstack/dnsmasq-dns-89c5cd4d5-tx5pf"
Jan 23 17:12:49 crc kubenswrapper[4606]: I0123 17:12:49.252728 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h294q\" (UniqueName: \"kubernetes.io/projected/9088f411-fc76-423e-861f-9a6f26c44b25-kube-api-access-h294q\") pod \"dnsmasq-dns-89c5cd4d5-tx5pf\" (UID: \"9088f411-fc76-423e-861f-9a6f26c44b25\") " pod="openstack/dnsmasq-dns-89c5cd4d5-tx5pf"
Jan 23 17:12:49 crc kubenswrapper[4606]: I0123 17:12:49.351955 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-tx5pf"
Jan 23 17:12:49 crc kubenswrapper[4606]: W0123 17:12:49.921299 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9088f411_fc76_423e_861f_9a6f26c44b25.slice/crio-2751c39eae4b67e93bcc4b96cbc3047838ed56a022da221b0c8a57f8c10248b6 WatchSource:0}: Error finding container 2751c39eae4b67e93bcc4b96cbc3047838ed56a022da221b0c8a57f8c10248b6: Status 404 returned error can't find the container with id 2751c39eae4b67e93bcc4b96cbc3047838ed56a022da221b0c8a57f8c10248b6
Jan 23 17:12:49 crc kubenswrapper[4606]: I0123 17:12:49.937713 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-tx5pf"]
Jan 23 17:12:50 crc kubenswrapper[4606]: I0123 17:12:50.188319 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 23 17:12:50 crc kubenswrapper[4606]: I0123 17:12:50.188371 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 23 17:12:50 crc kubenswrapper[4606]: I0123 17:12:50.208274 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 23 17:12:50 crc kubenswrapper[4606]: I0123 17:12:50.869569 4606 generic.go:334] "Generic (PLEG): container finished" podID="9088f411-fc76-423e-861f-9a6f26c44b25" containerID="db4a98296702ec980b202b54b623154dd0230d7883838250d8118498f9eaa2b4" exitCode=0
Jan 23 17:12:50 crc kubenswrapper[4606]: I0123 17:12:50.871565 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-tx5pf" event={"ID":"9088f411-fc76-423e-861f-9a6f26c44b25","Type":"ContainerDied","Data":"db4a98296702ec980b202b54b623154dd0230d7883838250d8118498f9eaa2b4"}
Jan 23 17:12:50 crc kubenswrapper[4606]: I0123 17:12:50.871666 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-tx5pf" event={"ID":"9088f411-fc76-423e-861f-9a6f26c44b25","Type":"ContainerStarted","Data":"2751c39eae4b67e93bcc4b96cbc3047838ed56a022da221b0c8a57f8c10248b6"}
Jan 23 17:12:51 crc kubenswrapper[4606]: I0123 17:12:51.420986 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 23 17:12:51 crc kubenswrapper[4606]: I0123 17:12:51.882940 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="27bd06df-ec76-4e9d-92bc-fa3355de75fb" containerName="nova-api-log" containerID="cri-o://5bf4d24eeeb4efd62cf91d2f37a2ec2adc9c7732da09a75535bca03b6851594c" gracePeriod=30
Jan 23 17:12:51 crc kubenswrapper[4606]: I0123 17:12:51.884198 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-tx5pf" event={"ID":"9088f411-fc76-423e-861f-9a6f26c44b25","Type":"ContainerStarted","Data":"8fbb7b6aa4a1ea6772ee078b3ace3df06610acb82e2f87756376e904a93cd8ce"}
Jan 23 17:12:51 crc kubenswrapper[4606]: I0123 17:12:51.884242 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-tx5pf"
Jan 23 17:12:51 crc kubenswrapper[4606]: I0123 17:12:51.884642 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="27bd06df-ec76-4e9d-92bc-fa3355de75fb" containerName="nova-api-api" containerID="cri-o://e23d6ea0ebf2258aef2c6a47baaaea0c1e5fde668652b9543fd693ff88052391" gracePeriod=30
Jan 23 17:12:51 crc kubenswrapper[4606]: I0123 17:12:51.907379 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-tx5pf" podStartSLOduration=3.907356602 podStartE2EDuration="3.907356602s" podCreationTimestamp="2026-01-23 17:12:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:12:51.905730343 +0000 UTC m=+1205.905922769" watchObservedRunningTime="2026-01-23 17:12:51.907356602 +0000 UTC m=+1205.907549018"
Jan 23 17:12:52 crc kubenswrapper[4606]: I0123 17:12:52.904550 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 17:12:52 crc kubenswrapper[4606]: I0123 17:12:52.904834 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7ce44a83-6852-464a-bae6-c27135f55567" containerName="ceilometer-central-agent" containerID="cri-o://5392d77b9cfa4f931234cb0dc7e46b301c8605a6a99b0c45d82fd5bc4ac62c26" gracePeriod=30
Jan 23 17:12:52 crc kubenswrapper[4606]: I0123 17:12:52.906809 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7ce44a83-6852-464a-bae6-c27135f55567" containerName="ceilometer-notification-agent" containerID="cri-o://aa1fc3ede7d53584cc90f65f88f5464a315c4ef6995600a0d7000e09d4621b4d" gracePeriod=30
Jan 23 17:12:52 crc kubenswrapper[4606]: I0123 17:12:52.907958 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7ce44a83-6852-464a-bae6-c27135f55567" containerName="proxy-httpd" containerID="cri-o://9abdd2f93d643830e76bc6db0c39d46d31b5a4c6a5740d643c97efd6caf83368" gracePeriod=30
Jan 23 17:12:52 crc kubenswrapper[4606]: I0123 17:12:52.908030 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7ce44a83-6852-464a-bae6-c27135f55567" containerName="sg-core" containerID="cri-o://c1d10a049184e19d6e1ad18e1455cf4e36198538a6580a62cf0a7619ac7f8bc2" gracePeriod=30
Jan 23 17:12:52 crc kubenswrapper[4606]: I0123 17:12:52.931392 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="7ce44a83-6852-464a-bae6-c27135f55567" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.196:3000/\": EOF"
Jan 23 17:12:52 crc kubenswrapper[4606]: I0123 17:12:52.940696 4606 generic.go:334] "Generic (PLEG): container finished" podID="27bd06df-ec76-4e9d-92bc-fa3355de75fb" containerID="5bf4d24eeeb4efd62cf91d2f37a2ec2adc9c7732da09a75535bca03b6851594c" exitCode=143
Jan 23 17:12:52 crc kubenswrapper[4606]: I0123 17:12:52.941077 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"27bd06df-ec76-4e9d-92bc-fa3355de75fb","Type":"ContainerDied","Data":"5bf4d24eeeb4efd62cf91d2f37a2ec2adc9c7732da09a75535bca03b6851594c"}
Jan 23 17:12:53 crc kubenswrapper[4606]: I0123 17:12:53.954179 4606 generic.go:334] "Generic (PLEG): container finished" podID="7ce44a83-6852-464a-bae6-c27135f55567" containerID="9abdd2f93d643830e76bc6db0c39d46d31b5a4c6a5740d643c97efd6caf83368" exitCode=0
Jan 23 17:12:53 crc kubenswrapper[4606]: I0123 17:12:53.954490 4606 generic.go:334] "Generic (PLEG): container finished" podID="7ce44a83-6852-464a-bae6-c27135f55567" containerID="c1d10a049184e19d6e1ad18e1455cf4e36198538a6580a62cf0a7619ac7f8bc2" exitCode=2
Jan 23 17:12:53 crc kubenswrapper[4606]: I0123 17:12:53.954233 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ce44a83-6852-464a-bae6-c27135f55567","Type":"ContainerDied","Data":"9abdd2f93d643830e76bc6db0c39d46d31b5a4c6a5740d643c97efd6caf83368"}
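Note: the exit codes in these events follow the usual 128+signal convention for signal-terminated processes: 143 = 128+15 (SIGTERM, consistent with the grace-period kill of nova-api-log above), while the earlier 137 = 128+9 (SIGKILL); 0 and 2 are ordinary process exits. A small Go decoder of that convention:

package main

import (
	"fmt"
	"syscall"
)

// describeExit maps a container exit code to either a plain exit status or
// the terminating signal, per the 128+N convention.
func describeExit(code int) string {
	if code > 128 {
		sig := code - 128
		return fmt.Sprintf("terminated by signal %d (%v)", sig, syscall.Signal(sig))
	}
	return fmt.Sprintf("exited normally with status %d", code)
}

func main() {
	for _, c := range []int{143, 137, 2, 0} {
		fmt.Printf("exitCode=%d: %s\n", c, describeExit(c))
	}
}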
event={"ID":"7ce44a83-6852-464a-bae6-c27135f55567","Type":"ContainerDied","Data":"9abdd2f93d643830e76bc6db0c39d46d31b5a4c6a5740d643c97efd6caf83368"} Jan 23 17:12:53 crc kubenswrapper[4606]: I0123 17:12:53.954569 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ce44a83-6852-464a-bae6-c27135f55567","Type":"ContainerDied","Data":"c1d10a049184e19d6e1ad18e1455cf4e36198538a6580a62cf0a7619ac7f8bc2"} Jan 23 17:12:53 crc kubenswrapper[4606]: I0123 17:12:53.954617 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ce44a83-6852-464a-bae6-c27135f55567","Type":"ContainerDied","Data":"5392d77b9cfa4f931234cb0dc7e46b301c8605a6a99b0c45d82fd5bc4ac62c26"} Jan 23 17:12:53 crc kubenswrapper[4606]: I0123 17:12:53.954502 4606 generic.go:334] "Generic (PLEG): container finished" podID="7ce44a83-6852-464a-bae6-c27135f55567" containerID="5392d77b9cfa4f931234cb0dc7e46b301c8605a6a99b0c45d82fd5bc4ac62c26" exitCode=0 Jan 23 17:12:55 crc kubenswrapper[4606]: I0123 17:12:55.188629 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 23 17:12:55 crc kubenswrapper[4606]: I0123 17:12:55.188945 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 23 17:12:55 crc kubenswrapper[4606]: I0123 17:12:55.209342 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 23 17:12:55 crc kubenswrapper[4606]: I0123 17:12:55.232699 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 23 17:12:55 crc kubenswrapper[4606]: I0123 17:12:55.515384 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 17:12:55 crc kubenswrapper[4606]: I0123 17:12:55.645035 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27bd06df-ec76-4e9d-92bc-fa3355de75fb-combined-ca-bundle\") pod \"27bd06df-ec76-4e9d-92bc-fa3355de75fb\" (UID: \"27bd06df-ec76-4e9d-92bc-fa3355de75fb\") " Jan 23 17:12:55 crc kubenswrapper[4606]: I0123 17:12:55.645143 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6h9qg\" (UniqueName: \"kubernetes.io/projected/27bd06df-ec76-4e9d-92bc-fa3355de75fb-kube-api-access-6h9qg\") pod \"27bd06df-ec76-4e9d-92bc-fa3355de75fb\" (UID: \"27bd06df-ec76-4e9d-92bc-fa3355de75fb\") " Jan 23 17:12:55 crc kubenswrapper[4606]: I0123 17:12:55.645223 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27bd06df-ec76-4e9d-92bc-fa3355de75fb-logs\") pod \"27bd06df-ec76-4e9d-92bc-fa3355de75fb\" (UID: \"27bd06df-ec76-4e9d-92bc-fa3355de75fb\") " Jan 23 17:12:55 crc kubenswrapper[4606]: I0123 17:12:55.645255 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27bd06df-ec76-4e9d-92bc-fa3355de75fb-config-data\") pod \"27bd06df-ec76-4e9d-92bc-fa3355de75fb\" (UID: \"27bd06df-ec76-4e9d-92bc-fa3355de75fb\") " Jan 23 17:12:55 crc kubenswrapper[4606]: I0123 17:12:55.646525 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27bd06df-ec76-4e9d-92bc-fa3355de75fb-logs" (OuterVolumeSpecName: "logs") pod "27bd06df-ec76-4e9d-92bc-fa3355de75fb" (UID: "27bd06df-ec76-4e9d-92bc-fa3355de75fb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:12:55 crc kubenswrapper[4606]: I0123 17:12:55.657123 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27bd06df-ec76-4e9d-92bc-fa3355de75fb-kube-api-access-6h9qg" (OuterVolumeSpecName: "kube-api-access-6h9qg") pod "27bd06df-ec76-4e9d-92bc-fa3355de75fb" (UID: "27bd06df-ec76-4e9d-92bc-fa3355de75fb"). InnerVolumeSpecName "kube-api-access-6h9qg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:12:55 crc kubenswrapper[4606]: I0123 17:12:55.712745 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27bd06df-ec76-4e9d-92bc-fa3355de75fb-config-data" (OuterVolumeSpecName: "config-data") pod "27bd06df-ec76-4e9d-92bc-fa3355de75fb" (UID: "27bd06df-ec76-4e9d-92bc-fa3355de75fb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:12:55 crc kubenswrapper[4606]: I0123 17:12:55.744732 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27bd06df-ec76-4e9d-92bc-fa3355de75fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27bd06df-ec76-4e9d-92bc-fa3355de75fb" (UID: "27bd06df-ec76-4e9d-92bc-fa3355de75fb"). InnerVolumeSpecName "combined-ca-bundle". 
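The reconciler_common.go / operation_generator.go pairs above trace the per-volume teardown of the old nova-api-0 pod: an UnmountVolume operation is started for each volume, the plugin's TearDown runs, and only once it succeeds is the volume reported detached from the node. A toy model of that loop, with invented types standing in for kubelet's real reconciler:

```go
package main

import "fmt"

// volume is a stand-in for kubelet's volume bookkeeping; the names and
// plugin identifiers below are taken from the log entries above.
type volume struct{ name, plugin string }

func tearDown(v volume) error {
	// the real code dispatches to the volume plugin's TearDown here
	fmt.Printf("UnmountVolume.TearDown succeeded for %q (plugin %s)\n", v.name, v.plugin)
	return nil
}

func main() {
	vols := []volume{
		{"logs", "kubernetes.io/empty-dir"},
		{"config-data", "kubernetes.io/secret"},
		{"combined-ca-bundle", "kubernetes.io/secret"},
	}
	for _, v := range vols {
		fmt.Printf("operationExecutor.UnmountVolume started for volume %q\n", v.name)
		if err := tearDown(v); err != nil {
			continue // left for the next reconcile pass to retry
		}
		// only a successful TearDown lets the volume be marked detached
		fmt.Printf("Volume detached for volume %q\n", v.name)
	}
}
```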
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:12:55 crc kubenswrapper[4606]: I0123 17:12:55.747646 4606 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27bd06df-ec76-4e9d-92bc-fa3355de75fb-logs\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:55 crc kubenswrapper[4606]: I0123 17:12:55.747676 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27bd06df-ec76-4e9d-92bc-fa3355de75fb-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:55 crc kubenswrapper[4606]: I0123 17:12:55.747686 4606 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27bd06df-ec76-4e9d-92bc-fa3355de75fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:55 crc kubenswrapper[4606]: I0123 17:12:55.747696 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6h9qg\" (UniqueName: \"kubernetes.io/projected/27bd06df-ec76-4e9d-92bc-fa3355de75fb-kube-api-access-6h9qg\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:55 crc kubenswrapper[4606]: I0123 17:12:55.992543 4606 generic.go:334] "Generic (PLEG): container finished" podID="27bd06df-ec76-4e9d-92bc-fa3355de75fb" containerID="e23d6ea0ebf2258aef2c6a47baaaea0c1e5fde668652b9543fd693ff88052391" exitCode=0 Jan 23 17:12:55 crc kubenswrapper[4606]: I0123 17:12:55.992615 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 17:12:55 crc kubenswrapper[4606]: I0123 17:12:55.992620 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"27bd06df-ec76-4e9d-92bc-fa3355de75fb","Type":"ContainerDied","Data":"e23d6ea0ebf2258aef2c6a47baaaea0c1e5fde668652b9543fd693ff88052391"} Jan 23 17:12:55 crc kubenswrapper[4606]: I0123 17:12:55.993820 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"27bd06df-ec76-4e9d-92bc-fa3355de75fb","Type":"ContainerDied","Data":"cab5081bf1c32972a5752493a8f886dfe2d00f21f121ccd2ff2cda2382fe0457"} Jan 23 17:12:55 crc kubenswrapper[4606]: I0123 17:12:55.993842 4606 scope.go:117] "RemoveContainer" containerID="e23d6ea0ebf2258aef2c6a47baaaea0c1e5fde668652b9543fd693ff88052391" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.017272 4606 generic.go:334] "Generic (PLEG): container finished" podID="7ce44a83-6852-464a-bae6-c27135f55567" containerID="aa1fc3ede7d53584cc90f65f88f5464a315c4ef6995600a0d7000e09d4621b4d" exitCode=0 Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.017510 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ce44a83-6852-464a-bae6-c27135f55567","Type":"ContainerDied","Data":"aa1fc3ede7d53584cc90f65f88f5464a315c4ef6995600a0d7000e09d4621b4d"} Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.050579 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.071855 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.089445 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.091368 4606 scope.go:117] "RemoveContainer" containerID="5bf4d24eeeb4efd62cf91d2f37a2ec2adc9c7732da09a75535bca03b6851594c" Jan 23 17:12:56 crc kubenswrapper[4606]: 
I0123 17:12:56.099302 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 23 17:12:56 crc kubenswrapper[4606]: E0123 17:12:56.099766 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27bd06df-ec76-4e9d-92bc-fa3355de75fb" containerName="nova-api-log" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.099781 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="27bd06df-ec76-4e9d-92bc-fa3355de75fb" containerName="nova-api-log" Jan 23 17:12:56 crc kubenswrapper[4606]: E0123 17:12:56.099799 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27bd06df-ec76-4e9d-92bc-fa3355de75fb" containerName="nova-api-api" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.099806 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="27bd06df-ec76-4e9d-92bc-fa3355de75fb" containerName="nova-api-api" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.099971 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="27bd06df-ec76-4e9d-92bc-fa3355de75fb" containerName="nova-api-api" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.099994 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="27bd06df-ec76-4e9d-92bc-fa3355de75fb" containerName="nova-api-log" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.101087 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.108530 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.108726 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.108862 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.126485 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.196558 4606 scope.go:117] "RemoveContainer" containerID="e23d6ea0ebf2258aef2c6a47baaaea0c1e5fde668652b9543fd693ff88052391" Jan 23 17:12:56 crc kubenswrapper[4606]: E0123 17:12:56.198855 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e23d6ea0ebf2258aef2c6a47baaaea0c1e5fde668652b9543fd693ff88052391\": container with ID starting with e23d6ea0ebf2258aef2c6a47baaaea0c1e5fde668652b9543fd693ff88052391 not found: ID does not exist" containerID="e23d6ea0ebf2258aef2c6a47baaaea0c1e5fde668652b9543fd693ff88052391" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.198887 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e23d6ea0ebf2258aef2c6a47baaaea0c1e5fde668652b9543fd693ff88052391"} err="failed to get container status \"e23d6ea0ebf2258aef2c6a47baaaea0c1e5fde668652b9543fd693ff88052391\": rpc error: code = NotFound desc = could not find container \"e23d6ea0ebf2258aef2c6a47baaaea0c1e5fde668652b9543fd693ff88052391\": container with ID starting with e23d6ea0ebf2258aef2c6a47baaaea0c1e5fde668652b9543fd693ff88052391 not found: ID does not exist" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.198908 4606 scope.go:117] "RemoveContainer" containerID="5bf4d24eeeb4efd62cf91d2f37a2ec2adc9c7732da09a75535bca03b6851594c" Jan 23 17:12:56 crc kubenswrapper[4606]: 
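The cpu_manager/memory_manager "RemoveStaleState" entries above fire when nova-api-0 is re-added under a new UID: per-container resource assignments recorded for the old UID are purged so the managers do not leak reservations. A rough sketch of the idea under assumed types (not kubelet's actual state structures):

```go
package main

import "fmt"

// key identifies an assignment; podUIDs below are shortened from the log.
type key struct{ podUID, container string }

// removeStaleState drops assignments for pods that are no longer active,
// mirroring the "Deleted CPUSet assignment" lines above.
func removeStaleState(assignments map[key]string, active map[string]bool) {
	for k := range assignments {
		if !active[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container %q of pod %s\n", k.container, k.podUID)
			delete(assignments, k)
		}
	}
}

func main() {
	assignments := map[key]string{
		{"27bd06df", "nova-api-api"}: "0-3", // old pod UID, now deleted
		{"27bd06df", "nova-api-log"}: "0-3",
		{"34b53c46", "nova-api-api"}: "0-3", // replacement pod UID
	}
	removeStaleState(assignments, map[string]bool{"34b53c46": true})
	fmt.Println("remaining assignments:", len(assignments))
}
```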
Jan 23 17:12:56 crc kubenswrapper[4606]: E0123 17:12:56.199993 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bf4d24eeeb4efd62cf91d2f37a2ec2adc9c7732da09a75535bca03b6851594c\": container with ID starting with 5bf4d24eeeb4efd62cf91d2f37a2ec2adc9c7732da09a75535bca03b6851594c not found: ID does not exist" containerID="5bf4d24eeeb4efd62cf91d2f37a2ec2adc9c7732da09a75535bca03b6851594c"
Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.200015 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bf4d24eeeb4efd62cf91d2f37a2ec2adc9c7732da09a75535bca03b6851594c"} err="failed to get container status \"5bf4d24eeeb4efd62cf91d2f37a2ec2adc9c7732da09a75535bca03b6851594c\": rpc error: code = NotFound desc = could not find container \"5bf4d24eeeb4efd62cf91d2f37a2ec2adc9c7732da09a75535bca03b6851594c\": container with ID starting with 5bf4d24eeeb4efd62cf91d2f37a2ec2adc9c7732da09a75535bca03b6851594c not found: ID does not exist"
Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.205435 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5a532327-312c-4924-894a-dc9058bb09f8" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.205800 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5a532327-312c-4924-894a-dc9058bb09f8" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.259831 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34b53c46-2301-4f89-b428-aa523134e940-public-tls-certs\") pod \"nova-api-0\" (UID: \"34b53c46-2301-4f89-b428-aa523134e940\") " pod="openstack/nova-api-0"
Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.259878 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34b53c46-2301-4f89-b428-aa523134e940-config-data\") pod \"nova-api-0\" (UID: \"34b53c46-2301-4f89-b428-aa523134e940\") " pod="openstack/nova-api-0"
Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.259916 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjh9t\" (UniqueName: \"kubernetes.io/projected/34b53c46-2301-4f89-b428-aa523134e940-kube-api-access-zjh9t\") pod \"nova-api-0\" (UID: \"34b53c46-2301-4f89-b428-aa523134e940\") " pod="openstack/nova-api-0"
Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.259965 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34b53c46-2301-4f89-b428-aa523134e940-internal-tls-certs\") pod \"nova-api-0\" (UID: \"34b53c46-2301-4f89-b428-aa523134e940\") " pod="openstack/nova-api-0"
Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.260008 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34b53c46-2301-4f89-b428-aa523134e940-logs\") pod \"nova-api-0\" (UID: \"34b53c46-2301-4f89-b428-aa523134e940\") " pod="openstack/nova-api-0"
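The paired log.go:32 error and pod_container_deletor.go:53 lines above are benign: the container was already removed by the runtime, so the follow-up cleanup pass gets a gRPC NotFound from the CRI ContainerStatus call and can treat the deletion as already done. A sketch of that idempotent pattern using grpc-go's status package (the ensureRemoved helper is invented for illustration):

```go
package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// ensureRemoved treats NotFound as success: the container is already gone,
// which is exactly the state the caller wanted to reach.
func ensureRemoved(statusErr error) error {
	if statusErr == nil {
		return errors.New("container still exists; caller should remove it")
	}
	if status.Code(statusErr) == codes.NotFound {
		fmt.Println("container ID does not exist; nothing left to delete")
		return nil
	}
	return statusErr // any other runtime error is a real failure
}

func main() {
	err := status.Error(codes.NotFound, "could not find container")
	fmt.Println(ensureRemoved(err))
}
```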
\"34b53c46-2301-4f89-b428-aa523134e940\") " pod="openstack/nova-api-0" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.260045 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34b53c46-2301-4f89-b428-aa523134e940-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"34b53c46-2301-4f89-b428-aa523134e940\") " pod="openstack/nova-api-0" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.361622 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34b53c46-2301-4f89-b428-aa523134e940-public-tls-certs\") pod \"nova-api-0\" (UID: \"34b53c46-2301-4f89-b428-aa523134e940\") " pod="openstack/nova-api-0" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.361686 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34b53c46-2301-4f89-b428-aa523134e940-config-data\") pod \"nova-api-0\" (UID: \"34b53c46-2301-4f89-b428-aa523134e940\") " pod="openstack/nova-api-0" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.361736 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjh9t\" (UniqueName: \"kubernetes.io/projected/34b53c46-2301-4f89-b428-aa523134e940-kube-api-access-zjh9t\") pod \"nova-api-0\" (UID: \"34b53c46-2301-4f89-b428-aa523134e940\") " pod="openstack/nova-api-0" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.361800 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34b53c46-2301-4f89-b428-aa523134e940-internal-tls-certs\") pod \"nova-api-0\" (UID: \"34b53c46-2301-4f89-b428-aa523134e940\") " pod="openstack/nova-api-0" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.361856 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34b53c46-2301-4f89-b428-aa523134e940-logs\") pod \"nova-api-0\" (UID: \"34b53c46-2301-4f89-b428-aa523134e940\") " pod="openstack/nova-api-0" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.361898 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34b53c46-2301-4f89-b428-aa523134e940-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"34b53c46-2301-4f89-b428-aa523134e940\") " pod="openstack/nova-api-0" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.363161 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34b53c46-2301-4f89-b428-aa523134e940-logs\") pod \"nova-api-0\" (UID: \"34b53c46-2301-4f89-b428-aa523134e940\") " pod="openstack/nova-api-0" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.375972 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34b53c46-2301-4f89-b428-aa523134e940-public-tls-certs\") pod \"nova-api-0\" (UID: \"34b53c46-2301-4f89-b428-aa523134e940\") " pod="openstack/nova-api-0" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.379981 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34b53c46-2301-4f89-b428-aa523134e940-internal-tls-certs\") pod \"nova-api-0\" (UID: \"34b53c46-2301-4f89-b428-aa523134e940\") 
" pod="openstack/nova-api-0" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.400605 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34b53c46-2301-4f89-b428-aa523134e940-config-data\") pod \"nova-api-0\" (UID: \"34b53c46-2301-4f89-b428-aa523134e940\") " pod="openstack/nova-api-0" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.400896 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34b53c46-2301-4f89-b428-aa523134e940-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"34b53c46-2301-4f89-b428-aa523134e940\") " pod="openstack/nova-api-0" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.408516 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27bd06df-ec76-4e9d-92bc-fa3355de75fb" path="/var/lib/kubelet/pods/27bd06df-ec76-4e9d-92bc-fa3355de75fb/volumes" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.426055 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjh9t\" (UniqueName: \"kubernetes.io/projected/34b53c46-2301-4f89-b428-aa523134e940-kube-api-access-zjh9t\") pod \"nova-api-0\" (UID: \"34b53c46-2301-4f89-b428-aa523134e940\") " pod="openstack/nova-api-0" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.479057 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-8fb92"] Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.480211 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8fb92" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.483158 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.483291 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.488997 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.490539 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-8fb92"] Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.492664 4606 util.go:30] "No sandbox for pod can be found. 
Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.601349 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ce44a83-6852-464a-bae6-c27135f55567-scripts\") pod \"7ce44a83-6852-464a-bae6-c27135f55567\" (UID: \"7ce44a83-6852-464a-bae6-c27135f55567\") "
Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.610199 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ce44a83-6852-464a-bae6-c27135f55567-ceilometer-tls-certs\") pod \"7ce44a83-6852-464a-bae6-c27135f55567\" (UID: \"7ce44a83-6852-464a-bae6-c27135f55567\") "
Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.610244 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ce44a83-6852-464a-bae6-c27135f55567-config-data\") pod \"7ce44a83-6852-464a-bae6-c27135f55567\" (UID: \"7ce44a83-6852-464a-bae6-c27135f55567\") "
Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.610333 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7ce44a83-6852-464a-bae6-c27135f55567-sg-core-conf-yaml\") pod \"7ce44a83-6852-464a-bae6-c27135f55567\" (UID: \"7ce44a83-6852-464a-bae6-c27135f55567\") "
Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.610358 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ce44a83-6852-464a-bae6-c27135f55567-run-httpd\") pod \"7ce44a83-6852-464a-bae6-c27135f55567\" (UID: \"7ce44a83-6852-464a-bae6-c27135f55567\") "
Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.610392 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65pqp\" (UniqueName: \"kubernetes.io/projected/7ce44a83-6852-464a-bae6-c27135f55567-kube-api-access-65pqp\") pod \"7ce44a83-6852-464a-bae6-c27135f55567\" (UID: \"7ce44a83-6852-464a-bae6-c27135f55567\") "
Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.610527 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ce44a83-6852-464a-bae6-c27135f55567-log-httpd\") pod \"7ce44a83-6852-464a-bae6-c27135f55567\" (UID: \"7ce44a83-6852-464a-bae6-c27135f55567\") "
Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.610559 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce44a83-6852-464a-bae6-c27135f55567-combined-ca-bundle\") pod \"7ce44a83-6852-464a-bae6-c27135f55567\" (UID: \"7ce44a83-6852-464a-bae6-c27135f55567\") "
Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.611847 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00e344f2-398e-4610-b627-97a98e1de0d7-scripts\") pod \"nova-cell1-cell-mapping-8fb92\" (UID: \"00e344f2-398e-4610-b627-97a98e1de0d7\") " pod="openstack/nova-cell1-cell-mapping-8fb92"
Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.611909 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00e344f2-398e-4610-b627-97a98e1de0d7-config-data\") pod \"nova-cell1-cell-mapping-8fb92\" (UID: \"00e344f2-398e-4610-b627-97a98e1de0d7\") " pod="openstack/nova-cell1-cell-mapping-8fb92"
Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.612084 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnhrp\" (UniqueName: \"kubernetes.io/projected/00e344f2-398e-4610-b627-97a98e1de0d7-kube-api-access-gnhrp\") pod \"nova-cell1-cell-mapping-8fb92\" (UID: \"00e344f2-398e-4610-b627-97a98e1de0d7\") " pod="openstack/nova-cell1-cell-mapping-8fb92"
Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.612133 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00e344f2-398e-4610-b627-97a98e1de0d7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-8fb92\" (UID: \"00e344f2-398e-4610-b627-97a98e1de0d7\") " pod="openstack/nova-cell1-cell-mapping-8fb92"
Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.613392 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ce44a83-6852-464a-bae6-c27135f55567-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7ce44a83-6852-464a-bae6-c27135f55567" (UID: "7ce44a83-6852-464a-bae6-c27135f55567"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.620599 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce44a83-6852-464a-bae6-c27135f55567-scripts" (OuterVolumeSpecName: "scripts") pod "7ce44a83-6852-464a-bae6-c27135f55567" (UID: "7ce44a83-6852-464a-bae6-c27135f55567"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.623046 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ce44a83-6852-464a-bae6-c27135f55567-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7ce44a83-6852-464a-bae6-c27135f55567" (UID: "7ce44a83-6852-464a-bae6-c27135f55567"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.624209 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ce44a83-6852-464a-bae6-c27135f55567-kube-api-access-65pqp" (OuterVolumeSpecName: "kube-api-access-65pqp") pod "7ce44a83-6852-464a-bae6-c27135f55567" (UID: "7ce44a83-6852-464a-bae6-c27135f55567"). InnerVolumeSpecName "kube-api-access-65pqp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.698623 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce44a83-6852-464a-bae6-c27135f55567-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "7ce44a83-6852-464a-bae6-c27135f55567" (UID: "7ce44a83-6852-464a-bae6-c27135f55567"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.715037 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnhrp\" (UniqueName: \"kubernetes.io/projected/00e344f2-398e-4610-b627-97a98e1de0d7-kube-api-access-gnhrp\") pod \"nova-cell1-cell-mapping-8fb92\" (UID: \"00e344f2-398e-4610-b627-97a98e1de0d7\") " pod="openstack/nova-cell1-cell-mapping-8fb92" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.715099 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00e344f2-398e-4610-b627-97a98e1de0d7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-8fb92\" (UID: \"00e344f2-398e-4610-b627-97a98e1de0d7\") " pod="openstack/nova-cell1-cell-mapping-8fb92" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.715181 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00e344f2-398e-4610-b627-97a98e1de0d7-scripts\") pod \"nova-cell1-cell-mapping-8fb92\" (UID: \"00e344f2-398e-4610-b627-97a98e1de0d7\") " pod="openstack/nova-cell1-cell-mapping-8fb92" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.715220 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00e344f2-398e-4610-b627-97a98e1de0d7-config-data\") pod \"nova-cell1-cell-mapping-8fb92\" (UID: \"00e344f2-398e-4610-b627-97a98e1de0d7\") " pod="openstack/nova-cell1-cell-mapping-8fb92" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.715269 4606 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ce44a83-6852-464a-bae6-c27135f55567-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.715282 4606 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ce44a83-6852-464a-bae6-c27135f55567-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.715293 4606 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ce44a83-6852-464a-bae6-c27135f55567-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.715306 4606 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ce44a83-6852-464a-bae6-c27135f55567-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.715318 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65pqp\" (UniqueName: \"kubernetes.io/projected/7ce44a83-6852-464a-bae6-c27135f55567-kube-api-access-65pqp\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.718555 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00e344f2-398e-4610-b627-97a98e1de0d7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-8fb92\" (UID: \"00e344f2-398e-4610-b627-97a98e1de0d7\") " pod="openstack/nova-cell1-cell-mapping-8fb92" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.722665 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/00e344f2-398e-4610-b627-97a98e1de0d7-config-data\") pod \"nova-cell1-cell-mapping-8fb92\" (UID: \"00e344f2-398e-4610-b627-97a98e1de0d7\") " pod="openstack/nova-cell1-cell-mapping-8fb92" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.737741 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnhrp\" (UniqueName: \"kubernetes.io/projected/00e344f2-398e-4610-b627-97a98e1de0d7-kube-api-access-gnhrp\") pod \"nova-cell1-cell-mapping-8fb92\" (UID: \"00e344f2-398e-4610-b627-97a98e1de0d7\") " pod="openstack/nova-cell1-cell-mapping-8fb92" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.744147 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00e344f2-398e-4610-b627-97a98e1de0d7-scripts\") pod \"nova-cell1-cell-mapping-8fb92\" (UID: \"00e344f2-398e-4610-b627-97a98e1de0d7\") " pod="openstack/nova-cell1-cell-mapping-8fb92" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.745784 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce44a83-6852-464a-bae6-c27135f55567-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7ce44a83-6852-464a-bae6-c27135f55567" (UID: "7ce44a83-6852-464a-bae6-c27135f55567"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.747567 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce44a83-6852-464a-bae6-c27135f55567-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ce44a83-6852-464a-bae6-c27135f55567" (UID: "7ce44a83-6852-464a-bae6-c27135f55567"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.821075 4606 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7ce44a83-6852-464a-bae6-c27135f55567-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.821108 4606 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce44a83-6852-464a-bae6-c27135f55567-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.827210 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8fb92" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.885670 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce44a83-6852-464a-bae6-c27135f55567-config-data" (OuterVolumeSpecName: "config-data") pod "7ce44a83-6852-464a-bae6-c27135f55567" (UID: "7ce44a83-6852-464a-bae6-c27135f55567"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:12:56 crc kubenswrapper[4606]: I0123 17:12:56.923018 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ce44a83-6852-464a-bae6-c27135f55567-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.038677 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ce44a83-6852-464a-bae6-c27135f55567","Type":"ContainerDied","Data":"fac393a951b4472b0ec64155e1387ec8ff4f7f0c0918fc77bb03bb7aff849dd0"} Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.038984 4606 scope.go:117] "RemoveContainer" containerID="9abdd2f93d643830e76bc6db0c39d46d31b5a4c6a5740d643c97efd6caf83368" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.039101 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.048342 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 17:12:57 crc kubenswrapper[4606]: W0123 17:12:57.085011 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34b53c46_2301_4f89_b428_aa523134e940.slice/crio-9152fa37eced82927a631b7b32792765740132e17214d90404ff363d1bc25b21 WatchSource:0}: Error finding container 9152fa37eced82927a631b7b32792765740132e17214d90404ff363d1bc25b21: Status 404 returned error can't find the container with id 9152fa37eced82927a631b7b32792765740132e17214d90404ff363d1bc25b21 Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.087229 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.100915 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.124458 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:12:57 crc kubenswrapper[4606]: E0123 17:12:57.124841 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce44a83-6852-464a-bae6-c27135f55567" containerName="proxy-httpd" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.124854 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce44a83-6852-464a-bae6-c27135f55567" containerName="proxy-httpd" Jan 23 17:12:57 crc kubenswrapper[4606]: E0123 17:12:57.124869 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce44a83-6852-464a-bae6-c27135f55567" containerName="ceilometer-notification-agent" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.124875 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce44a83-6852-464a-bae6-c27135f55567" containerName="ceilometer-notification-agent" Jan 23 17:12:57 crc kubenswrapper[4606]: E0123 17:12:57.124888 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce44a83-6852-464a-bae6-c27135f55567" containerName="sg-core" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.124895 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce44a83-6852-464a-bae6-c27135f55567" containerName="sg-core" Jan 23 17:12:57 crc kubenswrapper[4606]: E0123 17:12:57.124906 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce44a83-6852-464a-bae6-c27135f55567" containerName="ceilometer-central-agent" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 
17:12:57.124912 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce44a83-6852-464a-bae6-c27135f55567" containerName="ceilometer-central-agent" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.125077 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ce44a83-6852-464a-bae6-c27135f55567" containerName="ceilometer-central-agent" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.125090 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ce44a83-6852-464a-bae6-c27135f55567" containerName="ceilometer-notification-agent" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.125112 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ce44a83-6852-464a-bae6-c27135f55567" containerName="proxy-httpd" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.125119 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ce44a83-6852-464a-bae6-c27135f55567" containerName="sg-core" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.154345 4606 scope.go:117] "RemoveContainer" containerID="c1d10a049184e19d6e1ad18e1455cf4e36198538a6580a62cf0a7619ac7f8bc2" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.182385 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.189213 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.189488 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.189621 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.245723 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.246034 4606 scope.go:117] "RemoveContainer" containerID="aa1fc3ede7d53584cc90f65f88f5464a315c4ef6995600a0d7000e09d4621b4d" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.272663 4606 scope.go:117] "RemoveContainer" containerID="5392d77b9cfa4f931234cb0dc7e46b301c8605a6a99b0c45d82fd5bc4ac62c26" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.347423 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-8fb92"] Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.360999 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd93c16a-9cd2-4011-a291-40070e3bfe74-config-data\") pod \"ceilometer-0\" (UID: \"dd93c16a-9cd2-4011-a291-40070e3bfe74\") " pod="openstack/ceilometer-0" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.361105 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd93c16a-9cd2-4011-a291-40070e3bfe74-scripts\") pod \"ceilometer-0\" (UID: \"dd93c16a-9cd2-4011-a291-40070e3bfe74\") " pod="openstack/ceilometer-0" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.361173 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vbf9\" (UniqueName: \"kubernetes.io/projected/dd93c16a-9cd2-4011-a291-40070e3bfe74-kube-api-access-8vbf9\") pod \"ceilometer-0\" (UID: 
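The reflector.go:368 "Caches populated" lines above mean the kubelet's watches for the new ceilometer-0 pod's Secrets finished their initial LIST before volume setup proceeded. client-go exposes the same synchronization gate as cache.WaitForCacheSync; a sketch with a stand-in InformerSynced in place of a real informer:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/tools/cache"
)

func main() {
	// synced closes once the (simulated) initial LIST has completed.
	synced := make(chan struct{})
	go func() { time.Sleep(100 * time.Millisecond); close(synced) }()

	// hasSynced satisfies cache.InformerSynced (a func() bool).
	hasSynced := func() bool {
		select {
		case <-synced:
			return true
		default:
			return false
		}
	}

	stop := make(chan struct{})
	go func() { time.Sleep(time.Second); close(stop) }()

	// WaitForCacheSync polls all supplied sync functions until they return
	// true or the stop channel closes; true here means "caches populated".
	fmt.Println("caches populated:", cache.WaitForCacheSync(stop, hasSynced))
}
```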
\"dd93c16a-9cd2-4011-a291-40070e3bfe74\") " pod="openstack/ceilometer-0" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.361251 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd93c16a-9cd2-4011-a291-40070e3bfe74-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dd93c16a-9cd2-4011-a291-40070e3bfe74\") " pod="openstack/ceilometer-0" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.361376 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd93c16a-9cd2-4011-a291-40070e3bfe74-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dd93c16a-9cd2-4011-a291-40070e3bfe74\") " pod="openstack/ceilometer-0" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.361480 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd93c16a-9cd2-4011-a291-40070e3bfe74-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dd93c16a-9cd2-4011-a291-40070e3bfe74\") " pod="openstack/ceilometer-0" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.361567 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd93c16a-9cd2-4011-a291-40070e3bfe74-log-httpd\") pod \"ceilometer-0\" (UID: \"dd93c16a-9cd2-4011-a291-40070e3bfe74\") " pod="openstack/ceilometer-0" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.361620 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd93c16a-9cd2-4011-a291-40070e3bfe74-run-httpd\") pod \"ceilometer-0\" (UID: \"dd93c16a-9cd2-4011-a291-40070e3bfe74\") " pod="openstack/ceilometer-0" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.463222 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd93c16a-9cd2-4011-a291-40070e3bfe74-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dd93c16a-9cd2-4011-a291-40070e3bfe74\") " pod="openstack/ceilometer-0" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.463655 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd93c16a-9cd2-4011-a291-40070e3bfe74-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dd93c16a-9cd2-4011-a291-40070e3bfe74\") " pod="openstack/ceilometer-0" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.463753 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd93c16a-9cd2-4011-a291-40070e3bfe74-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dd93c16a-9cd2-4011-a291-40070e3bfe74\") " pod="openstack/ceilometer-0" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.463794 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd93c16a-9cd2-4011-a291-40070e3bfe74-log-httpd\") pod \"ceilometer-0\" (UID: \"dd93c16a-9cd2-4011-a291-40070e3bfe74\") " pod="openstack/ceilometer-0" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.463829 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/dd93c16a-9cd2-4011-a291-40070e3bfe74-run-httpd\") pod \"ceilometer-0\" (UID: \"dd93c16a-9cd2-4011-a291-40070e3bfe74\") " pod="openstack/ceilometer-0" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.463922 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd93c16a-9cd2-4011-a291-40070e3bfe74-config-data\") pod \"ceilometer-0\" (UID: \"dd93c16a-9cd2-4011-a291-40070e3bfe74\") " pod="openstack/ceilometer-0" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.463957 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd93c16a-9cd2-4011-a291-40070e3bfe74-scripts\") pod \"ceilometer-0\" (UID: \"dd93c16a-9cd2-4011-a291-40070e3bfe74\") " pod="openstack/ceilometer-0" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.463978 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vbf9\" (UniqueName: \"kubernetes.io/projected/dd93c16a-9cd2-4011-a291-40070e3bfe74-kube-api-access-8vbf9\") pod \"ceilometer-0\" (UID: \"dd93c16a-9cd2-4011-a291-40070e3bfe74\") " pod="openstack/ceilometer-0" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.470638 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd93c16a-9cd2-4011-a291-40070e3bfe74-log-httpd\") pod \"ceilometer-0\" (UID: \"dd93c16a-9cd2-4011-a291-40070e3bfe74\") " pod="openstack/ceilometer-0" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.470898 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd93c16a-9cd2-4011-a291-40070e3bfe74-run-httpd\") pod \"ceilometer-0\" (UID: \"dd93c16a-9cd2-4011-a291-40070e3bfe74\") " pod="openstack/ceilometer-0" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.472316 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd93c16a-9cd2-4011-a291-40070e3bfe74-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dd93c16a-9cd2-4011-a291-40070e3bfe74\") " pod="openstack/ceilometer-0" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.476288 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd93c16a-9cd2-4011-a291-40070e3bfe74-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dd93c16a-9cd2-4011-a291-40070e3bfe74\") " pod="openstack/ceilometer-0" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.481383 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd93c16a-9cd2-4011-a291-40070e3bfe74-scripts\") pod \"ceilometer-0\" (UID: \"dd93c16a-9cd2-4011-a291-40070e3bfe74\") " pod="openstack/ceilometer-0" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.481985 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd93c16a-9cd2-4011-a291-40070e3bfe74-config-data\") pod \"ceilometer-0\" (UID: \"dd93c16a-9cd2-4011-a291-40070e3bfe74\") " pod="openstack/ceilometer-0" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.484069 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dd93c16a-9cd2-4011-a291-40070e3bfe74-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dd93c16a-9cd2-4011-a291-40070e3bfe74\") " pod="openstack/ceilometer-0" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.490448 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vbf9\" (UniqueName: \"kubernetes.io/projected/dd93c16a-9cd2-4011-a291-40070e3bfe74-kube-api-access-8vbf9\") pod \"ceilometer-0\" (UID: \"dd93c16a-9cd2-4011-a291-40070e3bfe74\") " pod="openstack/ceilometer-0" Jan 23 17:12:57 crc kubenswrapper[4606]: I0123 17:12:57.532157 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 17:12:58 crc kubenswrapper[4606]: I0123 17:12:58.041387 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:12:58 crc kubenswrapper[4606]: I0123 17:12:58.068460 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd93c16a-9cd2-4011-a291-40070e3bfe74","Type":"ContainerStarted","Data":"dba45a6c442a0e73dec72fb39340fa6d85da19e66e17a007117d68588d5f26dd"} Jan 23 17:12:58 crc kubenswrapper[4606]: I0123 17:12:58.071560 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8fb92" event={"ID":"00e344f2-398e-4610-b627-97a98e1de0d7","Type":"ContainerStarted","Data":"111d88bb0611c3f9bc820889266cfd327a010e6efd18e5b74deeccef116c060e"} Jan 23 17:12:58 crc kubenswrapper[4606]: I0123 17:12:58.071636 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8fb92" event={"ID":"00e344f2-398e-4610-b627-97a98e1de0d7","Type":"ContainerStarted","Data":"bbaf935d11ef0711f940fe14a14257cf84892307b7361e97dfa4c8d22afd198e"} Jan 23 17:12:58 crc kubenswrapper[4606]: I0123 17:12:58.074553 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"34b53c46-2301-4f89-b428-aa523134e940","Type":"ContainerStarted","Data":"1bf28cc5f4b01a9b85715a3f268ea1969f548080b1a39c0a1f0c7413388ed028"} Jan 23 17:12:58 crc kubenswrapper[4606]: I0123 17:12:58.074603 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"34b53c46-2301-4f89-b428-aa523134e940","Type":"ContainerStarted","Data":"9cbbbe2a2c0c58b251be5786df481966898879c54c587caad9b47e0596be8b33"} Jan 23 17:12:58 crc kubenswrapper[4606]: I0123 17:12:58.074616 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"34b53c46-2301-4f89-b428-aa523134e940","Type":"ContainerStarted","Data":"9152fa37eced82927a631b7b32792765740132e17214d90404ff363d1bc25b21"} Jan 23 17:12:58 crc kubenswrapper[4606]: I0123 17:12:58.105870 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-8fb92" podStartSLOduration=2.105846025 podStartE2EDuration="2.105846025s" podCreationTimestamp="2026-01-23 17:12:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:12:58.085887506 +0000 UTC m=+1212.086079922" watchObservedRunningTime="2026-01-23 17:12:58.105846025 +0000 UTC m=+1212.106038431" Jan 23 17:12:58 crc kubenswrapper[4606]: I0123 17:12:58.127569 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.127542875 podStartE2EDuration="2.127542875s" podCreationTimestamp="2026-01-23 17:12:56 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:12:58.104457792 +0000 UTC m=+1212.104650218" watchObservedRunningTime="2026-01-23 17:12:58.127542875 +0000 UTC m=+1212.127735281" Jan 23 17:12:58 crc kubenswrapper[4606]: I0123 17:12:58.381702 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ce44a83-6852-464a-bae6-c27135f55567" path="/var/lib/kubelet/pods/7ce44a83-6852-464a-bae6-c27135f55567/volumes" Jan 23 17:12:59 crc kubenswrapper[4606]: I0123 17:12:59.084713 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd93c16a-9cd2-4011-a291-40070e3bfe74","Type":"ContainerStarted","Data":"2371365fe1d7ea6a8aa02a1387d8d67fbc8787bdd05c88cca89c4e3dbe5a0597"} Jan 23 17:12:59 crc kubenswrapper[4606]: I0123 17:12:59.354498 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-tx5pf" Jan 23 17:12:59 crc kubenswrapper[4606]: I0123 17:12:59.462383 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-l26v9"] Jan 23 17:12:59 crc kubenswrapper[4606]: I0123 17:12:59.462640 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-l26v9" podUID="f2813a42-8848-431d-aee7-f6b572becd0f" containerName="dnsmasq-dns" containerID="cri-o://bf82b756d48c2703138c002894cb494b70cc6350ee9e113f1150aa726c9564b2" gracePeriod=10 Jan 23 17:12:59 crc kubenswrapper[4606]: I0123 17:12:59.942645 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:12:59 crc kubenswrapper[4606]: I0123 17:12:59.942899 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:13:00 crc kubenswrapper[4606]: I0123 17:13:00.067518 4606 util.go:48] "No ready sandbox for pod can be found. 
Jan 23 17:13:00 crc kubenswrapper[4606]: I0123 17:13:00.108020 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd93c16a-9cd2-4011-a291-40070e3bfe74","Type":"ContainerStarted","Data":"dacd7d71ee1d0ac54c36149460274346afec7dfd51a4e6e3420c6358b8953cbc"}
Jan 23 17:13:00 crc kubenswrapper[4606]: I0123 17:13:00.111470 4606 generic.go:334] "Generic (PLEG): container finished" podID="f2813a42-8848-431d-aee7-f6b572becd0f" containerID="bf82b756d48c2703138c002894cb494b70cc6350ee9e113f1150aa726c9564b2" exitCode=0
Jan 23 17:13:00 crc kubenswrapper[4606]: I0123 17:13:00.111522 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-l26v9" event={"ID":"f2813a42-8848-431d-aee7-f6b572becd0f","Type":"ContainerDied","Data":"bf82b756d48c2703138c002894cb494b70cc6350ee9e113f1150aa726c9564b2"}
Jan 23 17:13:00 crc kubenswrapper[4606]: I0123 17:13:00.111550 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-l26v9" event={"ID":"f2813a42-8848-431d-aee7-f6b572becd0f","Type":"ContainerDied","Data":"0833eaaf39332a56562ef1de5cbe092de3e5049ad14b5829973c2af80ecaea45"}
Jan 23 17:13:00 crc kubenswrapper[4606]: I0123 17:13:00.111570 4606 scope.go:117] "RemoveContainer" containerID="bf82b756d48c2703138c002894cb494b70cc6350ee9e113f1150aa726c9564b2"
Jan 23 17:13:00 crc kubenswrapper[4606]: I0123 17:13:00.111735 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-l26v9"
Jan 23 17:13:00 crc kubenswrapper[4606]: I0123 17:13:00.144406 4606 scope.go:117] "RemoveContainer" containerID="68fa7d2582e013a1276b5b38f9165a0fa974b2a8287bcdc1cb3fb94f0ed5d22b"
Jan 23 17:13:00 crc kubenswrapper[4606]: I0123 17:13:00.175103 4606 scope.go:117] "RemoveContainer" containerID="bf82b756d48c2703138c002894cb494b70cc6350ee9e113f1150aa726c9564b2"
Jan 23 17:13:00 crc kubenswrapper[4606]: E0123 17:13:00.176749 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf82b756d48c2703138c002894cb494b70cc6350ee9e113f1150aa726c9564b2\": container with ID starting with bf82b756d48c2703138c002894cb494b70cc6350ee9e113f1150aa726c9564b2 not found: ID does not exist" containerID="bf82b756d48c2703138c002894cb494b70cc6350ee9e113f1150aa726c9564b2"
Jan 23 17:13:00 crc kubenswrapper[4606]: I0123 17:13:00.176784 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf82b756d48c2703138c002894cb494b70cc6350ee9e113f1150aa726c9564b2"} err="failed to get container status \"bf82b756d48c2703138c002894cb494b70cc6350ee9e113f1150aa726c9564b2\": rpc error: code = NotFound desc = could not find container \"bf82b756d48c2703138c002894cb494b70cc6350ee9e113f1150aa726c9564b2\": container with ID starting with bf82b756d48c2703138c002894cb494b70cc6350ee9e113f1150aa726c9564b2 not found: ID does not exist"
Jan 23 17:13:00 crc kubenswrapper[4606]: I0123 17:13:00.176807 4606 scope.go:117] "RemoveContainer" containerID="68fa7d2582e013a1276b5b38f9165a0fa974b2a8287bcdc1cb3fb94f0ed5d22b"
Jan 23 17:13:00 crc kubenswrapper[4606]: E0123 17:13:00.180986 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68fa7d2582e013a1276b5b38f9165a0fa974b2a8287bcdc1cb3fb94f0ed5d22b\": container with ID starting with 68fa7d2582e013a1276b5b38f9165a0fa974b2a8287bcdc1cb3fb94f0ed5d22b not found: ID does not exist" containerID="68fa7d2582e013a1276b5b38f9165a0fa974b2a8287bcdc1cb3fb94f0ed5d22b"
Jan 23 17:13:00 crc kubenswrapper[4606]: I0123 17:13:00.181029 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68fa7d2582e013a1276b5b38f9165a0fa974b2a8287bcdc1cb3fb94f0ed5d22b"} err="failed to get container status \"68fa7d2582e013a1276b5b38f9165a0fa974b2a8287bcdc1cb3fb94f0ed5d22b\": rpc error: code = NotFound desc = could not find container \"68fa7d2582e013a1276b5b38f9165a0fa974b2a8287bcdc1cb3fb94f0ed5d22b\": container with ID starting with 68fa7d2582e013a1276b5b38f9165a0fa974b2a8287bcdc1cb3fb94f0ed5d22b not found: ID does not exist"
Jan 23 17:13:00 crc kubenswrapper[4606]: I0123 17:13:00.226789 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2813a42-8848-431d-aee7-f6b572becd0f-ovsdbserver-sb\") pod \"f2813a42-8848-431d-aee7-f6b572becd0f\" (UID: \"f2813a42-8848-431d-aee7-f6b572becd0f\") "
Jan 23 17:13:00 crc kubenswrapper[4606]: I0123 17:13:00.226869 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2813a42-8848-431d-aee7-f6b572becd0f-dns-swift-storage-0\") pod \"f2813a42-8848-431d-aee7-f6b572becd0f\" (UID: \"f2813a42-8848-431d-aee7-f6b572becd0f\") "
Jan 23 17:13:00 crc kubenswrapper[4606]: I0123 17:13:00.226941 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nf97\" (UniqueName: \"kubernetes.io/projected/f2813a42-8848-431d-aee7-f6b572becd0f-kube-api-access-6nf97\") pod \"f2813a42-8848-431d-aee7-f6b572becd0f\" (UID: \"f2813a42-8848-431d-aee7-f6b572becd0f\") "
Jan 23 17:13:00 crc kubenswrapper[4606]: I0123 17:13:00.226979 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2813a42-8848-431d-aee7-f6b572becd0f-dns-svc\") pod \"f2813a42-8848-431d-aee7-f6b572becd0f\" (UID: \"f2813a42-8848-431d-aee7-f6b572becd0f\") "
Jan 23 17:13:00 crc kubenswrapper[4606]: I0123 17:13:00.227055 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2813a42-8848-431d-aee7-f6b572becd0f-config\") pod \"f2813a42-8848-431d-aee7-f6b572becd0f\" (UID: \"f2813a42-8848-431d-aee7-f6b572becd0f\") "
Jan 23 17:13:00 crc kubenswrapper[4606]: I0123 17:13:00.227102 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2813a42-8848-431d-aee7-f6b572becd0f-ovsdbserver-nb\") pod \"f2813a42-8848-431d-aee7-f6b572becd0f\" (UID: \"f2813a42-8848-431d-aee7-f6b572becd0f\") "
Jan 23 17:13:00 crc kubenswrapper[4606]: I0123 17:13:00.234816 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2813a42-8848-431d-aee7-f6b572becd0f-kube-api-access-6nf97" (OuterVolumeSpecName: "kube-api-access-6nf97") pod "f2813a42-8848-431d-aee7-f6b572becd0f" (UID: "f2813a42-8848-431d-aee7-f6b572becd0f"). InnerVolumeSpecName "kube-api-access-6nf97". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 17:13:00 crc kubenswrapper[4606]: I0123 17:13:00.305358 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2813a42-8848-431d-aee7-f6b572becd0f-config" (OuterVolumeSpecName: "config") pod "f2813a42-8848-431d-aee7-f6b572becd0f" (UID: "f2813a42-8848-431d-aee7-f6b572becd0f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 17:13:00 crc kubenswrapper[4606]: I0123 17:13:00.321056 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2813a42-8848-431d-aee7-f6b572becd0f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f2813a42-8848-431d-aee7-f6b572becd0f" (UID: "f2813a42-8848-431d-aee7-f6b572becd0f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 17:13:00 crc kubenswrapper[4606]: I0123 17:13:00.323546 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2813a42-8848-431d-aee7-f6b572becd0f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f2813a42-8848-431d-aee7-f6b572becd0f" (UID: "f2813a42-8848-431d-aee7-f6b572becd0f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 17:13:00 crc kubenswrapper[4606]: I0123 17:13:00.326249 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2813a42-8848-431d-aee7-f6b572becd0f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f2813a42-8848-431d-aee7-f6b572becd0f" (UID: "f2813a42-8848-431d-aee7-f6b572becd0f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 17:13:00 crc kubenswrapper[4606]: I0123 17:13:00.329092 4606 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2813a42-8848-431d-aee7-f6b572becd0f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 23 17:13:00 crc kubenswrapper[4606]: I0123 17:13:00.329124 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nf97\" (UniqueName: \"kubernetes.io/projected/f2813a42-8848-431d-aee7-f6b572becd0f-kube-api-access-6nf97\") on node \"crc\" DevicePath \"\""
Jan 23 17:13:00 crc kubenswrapper[4606]: I0123 17:13:00.329135 4606 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2813a42-8848-431d-aee7-f6b572becd0f-config\") on node \"crc\" DevicePath \"\""
Jan 23 17:13:00 crc kubenswrapper[4606]: I0123 17:13:00.329145 4606 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2813a42-8848-431d-aee7-f6b572becd0f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 23 17:13:00 crc kubenswrapper[4606]: I0123 17:13:00.329153 4606 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2813a42-8848-431d-aee7-f6b572becd0f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 23 17:13:00 crc kubenswrapper[4606]: I0123 17:13:00.338096 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2813a42-8848-431d-aee7-f6b572becd0f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f2813a42-8848-431d-aee7-f6b572becd0f" (UID: "f2813a42-8848-431d-aee7-f6b572becd0f"). InnerVolumeSpecName "dns-svc".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:13:00 crc kubenswrapper[4606]: I0123 17:13:00.435319 4606 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2813a42-8848-431d-aee7-f6b572becd0f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:00 crc kubenswrapper[4606]: I0123 17:13:00.561308 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-l26v9"] Jan 23 17:13:00 crc kubenswrapper[4606]: I0123 17:13:00.571583 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-l26v9"] Jan 23 17:13:01 crc kubenswrapper[4606]: I0123 17:13:01.123468 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd93c16a-9cd2-4011-a291-40070e3bfe74","Type":"ContainerStarted","Data":"504ea320d660965b8dca144bb49d1bc60214223b4ab27426518376af16315339"} Jan 23 17:13:02 crc kubenswrapper[4606]: I0123 17:13:02.380017 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2813a42-8848-431d-aee7-f6b572becd0f" path="/var/lib/kubelet/pods/f2813a42-8848-431d-aee7-f6b572becd0f/volumes" Jan 23 17:13:03 crc kubenswrapper[4606]: I0123 17:13:03.142961 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd93c16a-9cd2-4011-a291-40070e3bfe74","Type":"ContainerStarted","Data":"9fee9e4ffbc63d557af0741005884e28c618e7c68997aa05c9ecc7f148d3e285"} Jan 23 17:13:03 crc kubenswrapper[4606]: I0123 17:13:03.143493 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 17:13:03 crc kubenswrapper[4606]: I0123 17:13:03.162654 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.022558416 podStartE2EDuration="6.162582939s" podCreationTimestamp="2026-01-23 17:12:57 +0000 UTC" firstStartedPulling="2026-01-23 17:12:58.037938596 +0000 UTC m=+1212.038131002" lastFinishedPulling="2026-01-23 17:13:02.177963119 +0000 UTC m=+1216.178155525" observedRunningTime="2026-01-23 17:13:03.161323319 +0000 UTC m=+1217.161515735" watchObservedRunningTime="2026-01-23 17:13:03.162582939 +0000 UTC m=+1217.162775345" Jan 23 17:13:04 crc kubenswrapper[4606]: I0123 17:13:04.151548 4606 generic.go:334] "Generic (PLEG): container finished" podID="00e344f2-398e-4610-b627-97a98e1de0d7" containerID="111d88bb0611c3f9bc820889266cfd327a010e6efd18e5b74deeccef116c060e" exitCode=0 Jan 23 17:13:04 crc kubenswrapper[4606]: I0123 17:13:04.151960 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8fb92" event={"ID":"00e344f2-398e-4610-b627-97a98e1de0d7","Type":"ContainerDied","Data":"111d88bb0611c3f9bc820889266cfd327a010e6efd18e5b74deeccef116c060e"} Jan 23 17:13:05 crc kubenswrapper[4606]: I0123 17:13:05.197440 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 23 17:13:05 crc kubenswrapper[4606]: I0123 17:13:05.203291 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 23 17:13:05 crc kubenswrapper[4606]: I0123 17:13:05.205497 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 23 17:13:05 crc kubenswrapper[4606]: I0123 17:13:05.584794 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8fb92" Jan 23 17:13:05 crc kubenswrapper[4606]: I0123 17:13:05.736220 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00e344f2-398e-4610-b627-97a98e1de0d7-combined-ca-bundle\") pod \"00e344f2-398e-4610-b627-97a98e1de0d7\" (UID: \"00e344f2-398e-4610-b627-97a98e1de0d7\") " Jan 23 17:13:05 crc kubenswrapper[4606]: I0123 17:13:05.736272 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnhrp\" (UniqueName: \"kubernetes.io/projected/00e344f2-398e-4610-b627-97a98e1de0d7-kube-api-access-gnhrp\") pod \"00e344f2-398e-4610-b627-97a98e1de0d7\" (UID: \"00e344f2-398e-4610-b627-97a98e1de0d7\") " Jan 23 17:13:05 crc kubenswrapper[4606]: I0123 17:13:05.736355 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00e344f2-398e-4610-b627-97a98e1de0d7-scripts\") pod \"00e344f2-398e-4610-b627-97a98e1de0d7\" (UID: \"00e344f2-398e-4610-b627-97a98e1de0d7\") " Jan 23 17:13:05 crc kubenswrapper[4606]: I0123 17:13:05.736374 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00e344f2-398e-4610-b627-97a98e1de0d7-config-data\") pod \"00e344f2-398e-4610-b627-97a98e1de0d7\" (UID: \"00e344f2-398e-4610-b627-97a98e1de0d7\") " Jan 23 17:13:05 crc kubenswrapper[4606]: I0123 17:13:05.752071 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00e344f2-398e-4610-b627-97a98e1de0d7-kube-api-access-gnhrp" (OuterVolumeSpecName: "kube-api-access-gnhrp") pod "00e344f2-398e-4610-b627-97a98e1de0d7" (UID: "00e344f2-398e-4610-b627-97a98e1de0d7"). InnerVolumeSpecName "kube-api-access-gnhrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:13:05 crc kubenswrapper[4606]: I0123 17:13:05.754047 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00e344f2-398e-4610-b627-97a98e1de0d7-scripts" (OuterVolumeSpecName: "scripts") pod "00e344f2-398e-4610-b627-97a98e1de0d7" (UID: "00e344f2-398e-4610-b627-97a98e1de0d7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:13:05 crc kubenswrapper[4606]: I0123 17:13:05.775966 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00e344f2-398e-4610-b627-97a98e1de0d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "00e344f2-398e-4610-b627-97a98e1de0d7" (UID: "00e344f2-398e-4610-b627-97a98e1de0d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:13:05 crc kubenswrapper[4606]: I0123 17:13:05.777722 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00e344f2-398e-4610-b627-97a98e1de0d7-config-data" (OuterVolumeSpecName: "config-data") pod "00e344f2-398e-4610-b627-97a98e1de0d7" (UID: "00e344f2-398e-4610-b627-97a98e1de0d7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:13:05 crc kubenswrapper[4606]: I0123 17:13:05.839705 4606 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00e344f2-398e-4610-b627-97a98e1de0d7-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:05 crc kubenswrapper[4606]: I0123 17:13:05.839743 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00e344f2-398e-4610-b627-97a98e1de0d7-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:05 crc kubenswrapper[4606]: I0123 17:13:05.839779 4606 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00e344f2-398e-4610-b627-97a98e1de0d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:05 crc kubenswrapper[4606]: I0123 17:13:05.839792 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnhrp\" (UniqueName: \"kubernetes.io/projected/00e344f2-398e-4610-b627-97a98e1de0d7-kube-api-access-gnhrp\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:06 crc kubenswrapper[4606]: I0123 17:13:06.178075 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8fb92" Jan 23 17:13:06 crc kubenswrapper[4606]: I0123 17:13:06.178090 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8fb92" event={"ID":"00e344f2-398e-4610-b627-97a98e1de0d7","Type":"ContainerDied","Data":"bbaf935d11ef0711f940fe14a14257cf84892307b7361e97dfa4c8d22afd198e"} Jan 23 17:13:06 crc kubenswrapper[4606]: I0123 17:13:06.178136 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbaf935d11ef0711f940fe14a14257cf84892307b7361e97dfa4c8d22afd198e" Jan 23 17:13:06 crc kubenswrapper[4606]: I0123 17:13:06.186516 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 23 17:13:06 crc kubenswrapper[4606]: I0123 17:13:06.390102 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 17:13:06 crc kubenswrapper[4606]: I0123 17:13:06.390305 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="34b53c46-2301-4f89-b428-aa523134e940" containerName="nova-api-log" containerID="cri-o://9cbbbe2a2c0c58b251be5786df481966898879c54c587caad9b47e0596be8b33" gracePeriod=30 Jan 23 17:13:06 crc kubenswrapper[4606]: I0123 17:13:06.390690 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="34b53c46-2301-4f89-b428-aa523134e940" containerName="nova-api-api" containerID="cri-o://1bf28cc5f4b01a9b85715a3f268ea1969f548080b1a39c0a1f0c7413388ed028" gracePeriod=30 Jan 23 17:13:06 crc kubenswrapper[4606]: I0123 17:13:06.404659 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 17:13:06 crc kubenswrapper[4606]: I0123 17:13:06.405105 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="fa0989ad-8e77-483c-93a2-ce76f542e610" containerName="nova-scheduler-scheduler" containerID="cri-o://50683fe2a869323608f2b96616f87d80e75a7739e0e4772c5568a969ac813c77" gracePeriod=30 Jan 23 17:13:06 crc kubenswrapper[4606]: I0123 17:13:06.457451 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 17:13:06 crc kubenswrapper[4606]: I0123 
17:13:06.985735 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.076082 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34b53c46-2301-4f89-b428-aa523134e940-combined-ca-bundle\") pod \"34b53c46-2301-4f89-b428-aa523134e940\" (UID: \"34b53c46-2301-4f89-b428-aa523134e940\") " Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.076508 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34b53c46-2301-4f89-b428-aa523134e940-logs\") pod \"34b53c46-2301-4f89-b428-aa523134e940\" (UID: \"34b53c46-2301-4f89-b428-aa523134e940\") " Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.076569 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34b53c46-2301-4f89-b428-aa523134e940-config-data\") pod \"34b53c46-2301-4f89-b428-aa523134e940\" (UID: \"34b53c46-2301-4f89-b428-aa523134e940\") " Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.076721 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34b53c46-2301-4f89-b428-aa523134e940-internal-tls-certs\") pod \"34b53c46-2301-4f89-b428-aa523134e940\" (UID: \"34b53c46-2301-4f89-b428-aa523134e940\") " Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.076812 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34b53c46-2301-4f89-b428-aa523134e940-public-tls-certs\") pod \"34b53c46-2301-4f89-b428-aa523134e940\" (UID: \"34b53c46-2301-4f89-b428-aa523134e940\") " Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.076984 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjh9t\" (UniqueName: \"kubernetes.io/projected/34b53c46-2301-4f89-b428-aa523134e940-kube-api-access-zjh9t\") pod \"34b53c46-2301-4f89-b428-aa523134e940\" (UID: \"34b53c46-2301-4f89-b428-aa523134e940\") " Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.077081 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34b53c46-2301-4f89-b428-aa523134e940-logs" (OuterVolumeSpecName: "logs") pod "34b53c46-2301-4f89-b428-aa523134e940" (UID: "34b53c46-2301-4f89-b428-aa523134e940"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.077748 4606 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34b53c46-2301-4f89-b428-aa523134e940-logs\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.093811 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34b53c46-2301-4f89-b428-aa523134e940-kube-api-access-zjh9t" (OuterVolumeSpecName: "kube-api-access-zjh9t") pod "34b53c46-2301-4f89-b428-aa523134e940" (UID: "34b53c46-2301-4f89-b428-aa523134e940"). InnerVolumeSpecName "kube-api-access-zjh9t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.133941 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34b53c46-2301-4f89-b428-aa523134e940-config-data" (OuterVolumeSpecName: "config-data") pod "34b53c46-2301-4f89-b428-aa523134e940" (UID: "34b53c46-2301-4f89-b428-aa523134e940"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.179410 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34b53c46-2301-4f89-b428-aa523134e940-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "34b53c46-2301-4f89-b428-aa523134e940" (UID: "34b53c46-2301-4f89-b428-aa523134e940"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.180022 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34b53c46-2301-4f89-b428-aa523134e940-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34b53c46-2301-4f89-b428-aa523134e940" (UID: "34b53c46-2301-4f89-b428-aa523134e940"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.181460 4606 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34b53c46-2301-4f89-b428-aa523134e940-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.181482 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjh9t\" (UniqueName: \"kubernetes.io/projected/34b53c46-2301-4f89-b428-aa523134e940-kube-api-access-zjh9t\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.181494 4606 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34b53c46-2301-4f89-b428-aa523134e940-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.181504 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34b53c46-2301-4f89-b428-aa523134e940-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.195263 4606 generic.go:334] "Generic (PLEG): container finished" podID="34b53c46-2301-4f89-b428-aa523134e940" containerID="1bf28cc5f4b01a9b85715a3f268ea1969f548080b1a39c0a1f0c7413388ed028" exitCode=0 Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.195333 4606 generic.go:334] "Generic (PLEG): container finished" podID="34b53c46-2301-4f89-b428-aa523134e940" containerID="9cbbbe2a2c0c58b251be5786df481966898879c54c587caad9b47e0596be8b33" exitCode=143 Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.195534 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.195722 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"34b53c46-2301-4f89-b428-aa523134e940","Type":"ContainerDied","Data":"1bf28cc5f4b01a9b85715a3f268ea1969f548080b1a39c0a1f0c7413388ed028"} Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.195825 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"34b53c46-2301-4f89-b428-aa523134e940","Type":"ContainerDied","Data":"9cbbbe2a2c0c58b251be5786df481966898879c54c587caad9b47e0596be8b33"} Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.195885 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"34b53c46-2301-4f89-b428-aa523134e940","Type":"ContainerDied","Data":"9152fa37eced82927a631b7b32792765740132e17214d90404ff363d1bc25b21"} Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.195954 4606 scope.go:117] "RemoveContainer" containerID="1bf28cc5f4b01a9b85715a3f268ea1969f548080b1a39c0a1f0c7413388ed028" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.207781 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34b53c46-2301-4f89-b428-aa523134e940-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "34b53c46-2301-4f89-b428-aa523134e940" (UID: "34b53c46-2301-4f89-b428-aa523134e940"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.222775 4606 scope.go:117] "RemoveContainer" containerID="9cbbbe2a2c0c58b251be5786df481966898879c54c587caad9b47e0596be8b33" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.264988 4606 scope.go:117] "RemoveContainer" containerID="1bf28cc5f4b01a9b85715a3f268ea1969f548080b1a39c0a1f0c7413388ed028" Jan 23 17:13:07 crc kubenswrapper[4606]: E0123 17:13:07.265691 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bf28cc5f4b01a9b85715a3f268ea1969f548080b1a39c0a1f0c7413388ed028\": container with ID starting with 1bf28cc5f4b01a9b85715a3f268ea1969f548080b1a39c0a1f0c7413388ed028 not found: ID does not exist" containerID="1bf28cc5f4b01a9b85715a3f268ea1969f548080b1a39c0a1f0c7413388ed028" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.265727 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bf28cc5f4b01a9b85715a3f268ea1969f548080b1a39c0a1f0c7413388ed028"} err="failed to get container status \"1bf28cc5f4b01a9b85715a3f268ea1969f548080b1a39c0a1f0c7413388ed028\": rpc error: code = NotFound desc = could not find container \"1bf28cc5f4b01a9b85715a3f268ea1969f548080b1a39c0a1f0c7413388ed028\": container with ID starting with 1bf28cc5f4b01a9b85715a3f268ea1969f548080b1a39c0a1f0c7413388ed028 not found: ID does not exist" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.265752 4606 scope.go:117] "RemoveContainer" containerID="9cbbbe2a2c0c58b251be5786df481966898879c54c587caad9b47e0596be8b33" Jan 23 17:13:07 crc kubenswrapper[4606]: E0123 17:13:07.267577 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cbbbe2a2c0c58b251be5786df481966898879c54c587caad9b47e0596be8b33\": container with ID starting with 9cbbbe2a2c0c58b251be5786df481966898879c54c587caad9b47e0596be8b33 not found: ID does not exist" 
containerID="9cbbbe2a2c0c58b251be5786df481966898879c54c587caad9b47e0596be8b33" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.267670 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cbbbe2a2c0c58b251be5786df481966898879c54c587caad9b47e0596be8b33"} err="failed to get container status \"9cbbbe2a2c0c58b251be5786df481966898879c54c587caad9b47e0596be8b33\": rpc error: code = NotFound desc = could not find container \"9cbbbe2a2c0c58b251be5786df481966898879c54c587caad9b47e0596be8b33\": container with ID starting with 9cbbbe2a2c0c58b251be5786df481966898879c54c587caad9b47e0596be8b33 not found: ID does not exist" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.267689 4606 scope.go:117] "RemoveContainer" containerID="1bf28cc5f4b01a9b85715a3f268ea1969f548080b1a39c0a1f0c7413388ed028" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.268000 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bf28cc5f4b01a9b85715a3f268ea1969f548080b1a39c0a1f0c7413388ed028"} err="failed to get container status \"1bf28cc5f4b01a9b85715a3f268ea1969f548080b1a39c0a1f0c7413388ed028\": rpc error: code = NotFound desc = could not find container \"1bf28cc5f4b01a9b85715a3f268ea1969f548080b1a39c0a1f0c7413388ed028\": container with ID starting with 1bf28cc5f4b01a9b85715a3f268ea1969f548080b1a39c0a1f0c7413388ed028 not found: ID does not exist" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.268019 4606 scope.go:117] "RemoveContainer" containerID="9cbbbe2a2c0c58b251be5786df481966898879c54c587caad9b47e0596be8b33" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.271081 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cbbbe2a2c0c58b251be5786df481966898879c54c587caad9b47e0596be8b33"} err="failed to get container status \"9cbbbe2a2c0c58b251be5786df481966898879c54c587caad9b47e0596be8b33\": rpc error: code = NotFound desc = could not find container \"9cbbbe2a2c0c58b251be5786df481966898879c54c587caad9b47e0596be8b33\": container with ID starting with 9cbbbe2a2c0c58b251be5786df481966898879c54c587caad9b47e0596be8b33 not found: ID does not exist" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.282791 4606 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34b53c46-2301-4f89-b428-aa523134e940-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.572288 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.580887 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.602864 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 23 17:13:07 crc kubenswrapper[4606]: E0123 17:13:07.603329 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2813a42-8848-431d-aee7-f6b572becd0f" containerName="dnsmasq-dns" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.603351 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2813a42-8848-431d-aee7-f6b572becd0f" containerName="dnsmasq-dns" Jan 23 17:13:07 crc kubenswrapper[4606]: E0123 17:13:07.603365 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b53c46-2301-4f89-b428-aa523134e940" containerName="nova-api-log" Jan 23 17:13:07 crc 
kubenswrapper[4606]: I0123 17:13:07.603374 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b53c46-2301-4f89-b428-aa523134e940" containerName="nova-api-log" Jan 23 17:13:07 crc kubenswrapper[4606]: E0123 17:13:07.603384 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b53c46-2301-4f89-b428-aa523134e940" containerName="nova-api-api" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.603392 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b53c46-2301-4f89-b428-aa523134e940" containerName="nova-api-api" Jan 23 17:13:07 crc kubenswrapper[4606]: E0123 17:13:07.603426 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00e344f2-398e-4610-b627-97a98e1de0d7" containerName="nova-manage" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.603434 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="00e344f2-398e-4610-b627-97a98e1de0d7" containerName="nova-manage" Jan 23 17:13:07 crc kubenswrapper[4606]: E0123 17:13:07.603450 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2813a42-8848-431d-aee7-f6b572becd0f" containerName="init" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.603459 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2813a42-8848-431d-aee7-f6b572becd0f" containerName="init" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.603710 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2813a42-8848-431d-aee7-f6b572becd0f" containerName="dnsmasq-dns" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.603736 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="34b53c46-2301-4f89-b428-aa523134e940" containerName="nova-api-api" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.603751 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="34b53c46-2301-4f89-b428-aa523134e940" containerName="nova-api-log" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.603765 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="00e344f2-398e-4610-b627-97a98e1de0d7" containerName="nova-manage" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.604962 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.606708 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.607258 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.613698 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.622210 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.693754 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e390ccaf-fe1a-4fd0-b19f-64d5d43b2005-logs\") pod \"nova-api-0\" (UID: \"e390ccaf-fe1a-4fd0-b19f-64d5d43b2005\") " pod="openstack/nova-api-0" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.693828 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5vcf\" (UniqueName: \"kubernetes.io/projected/e390ccaf-fe1a-4fd0-b19f-64d5d43b2005-kube-api-access-n5vcf\") pod \"nova-api-0\" (UID: \"e390ccaf-fe1a-4fd0-b19f-64d5d43b2005\") " pod="openstack/nova-api-0" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.693965 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e390ccaf-fe1a-4fd0-b19f-64d5d43b2005-config-data\") pod \"nova-api-0\" (UID: \"e390ccaf-fe1a-4fd0-b19f-64d5d43b2005\") " pod="openstack/nova-api-0" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.694161 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e390ccaf-fe1a-4fd0-b19f-64d5d43b2005-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e390ccaf-fe1a-4fd0-b19f-64d5d43b2005\") " pod="openstack/nova-api-0" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.694317 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e390ccaf-fe1a-4fd0-b19f-64d5d43b2005-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e390ccaf-fe1a-4fd0-b19f-64d5d43b2005\") " pod="openstack/nova-api-0" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.694361 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e390ccaf-fe1a-4fd0-b19f-64d5d43b2005-public-tls-certs\") pod \"nova-api-0\" (UID: \"e390ccaf-fe1a-4fd0-b19f-64d5d43b2005\") " pod="openstack/nova-api-0" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.795696 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e390ccaf-fe1a-4fd0-b19f-64d5d43b2005-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e390ccaf-fe1a-4fd0-b19f-64d5d43b2005\") " pod="openstack/nova-api-0" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.796012 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e390ccaf-fe1a-4fd0-b19f-64d5d43b2005-public-tls-certs\") pod 
\"nova-api-0\" (UID: \"e390ccaf-fe1a-4fd0-b19f-64d5d43b2005\") " pod="openstack/nova-api-0" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.796088 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e390ccaf-fe1a-4fd0-b19f-64d5d43b2005-logs\") pod \"nova-api-0\" (UID: \"e390ccaf-fe1a-4fd0-b19f-64d5d43b2005\") " pod="openstack/nova-api-0" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.796126 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5vcf\" (UniqueName: \"kubernetes.io/projected/e390ccaf-fe1a-4fd0-b19f-64d5d43b2005-kube-api-access-n5vcf\") pod \"nova-api-0\" (UID: \"e390ccaf-fe1a-4fd0-b19f-64d5d43b2005\") " pod="openstack/nova-api-0" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.796158 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e390ccaf-fe1a-4fd0-b19f-64d5d43b2005-config-data\") pod \"nova-api-0\" (UID: \"e390ccaf-fe1a-4fd0-b19f-64d5d43b2005\") " pod="openstack/nova-api-0" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.796231 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e390ccaf-fe1a-4fd0-b19f-64d5d43b2005-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e390ccaf-fe1a-4fd0-b19f-64d5d43b2005\") " pod="openstack/nova-api-0" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.796687 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e390ccaf-fe1a-4fd0-b19f-64d5d43b2005-logs\") pod \"nova-api-0\" (UID: \"e390ccaf-fe1a-4fd0-b19f-64d5d43b2005\") " pod="openstack/nova-api-0" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.799270 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e390ccaf-fe1a-4fd0-b19f-64d5d43b2005-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e390ccaf-fe1a-4fd0-b19f-64d5d43b2005\") " pod="openstack/nova-api-0" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.800677 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e390ccaf-fe1a-4fd0-b19f-64d5d43b2005-public-tls-certs\") pod \"nova-api-0\" (UID: \"e390ccaf-fe1a-4fd0-b19f-64d5d43b2005\") " pod="openstack/nova-api-0" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.801263 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e390ccaf-fe1a-4fd0-b19f-64d5d43b2005-config-data\") pod \"nova-api-0\" (UID: \"e390ccaf-fe1a-4fd0-b19f-64d5d43b2005\") " pod="openstack/nova-api-0" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.803421 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e390ccaf-fe1a-4fd0-b19f-64d5d43b2005-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e390ccaf-fe1a-4fd0-b19f-64d5d43b2005\") " pod="openstack/nova-api-0" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.836234 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5vcf\" (UniqueName: \"kubernetes.io/projected/e390ccaf-fe1a-4fd0-b19f-64d5d43b2005-kube-api-access-n5vcf\") pod \"nova-api-0\" (UID: \"e390ccaf-fe1a-4fd0-b19f-64d5d43b2005\") " 
pod="openstack/nova-api-0" Jan 23 17:13:07 crc kubenswrapper[4606]: I0123 17:13:07.925124 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 17:13:08 crc kubenswrapper[4606]: I0123 17:13:08.207498 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5a532327-312c-4924-894a-dc9058bb09f8" containerName="nova-metadata-log" containerID="cri-o://b396bd041e87bdebc6aeb6cfde1b7d0814dd7e1b873a758a711512ff91fb3c6d" gracePeriod=30 Jan 23 17:13:08 crc kubenswrapper[4606]: I0123 17:13:08.207642 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5a532327-312c-4924-894a-dc9058bb09f8" containerName="nova-metadata-metadata" containerID="cri-o://7a1adb4b4dcf7833d97f732bf5c325b520b069214015966462c12d29fe2135f6" gracePeriod=30 Jan 23 17:13:08 crc kubenswrapper[4606]: I0123 17:13:08.379945 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34b53c46-2301-4f89-b428-aa523134e940" path="/var/lib/kubelet/pods/34b53c46-2301-4f89-b428-aa523134e940/volumes" Jan 23 17:13:08 crc kubenswrapper[4606]: I0123 17:13:08.389236 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 17:13:09 crc kubenswrapper[4606]: I0123 17:13:09.219762 4606 generic.go:334] "Generic (PLEG): container finished" podID="5a532327-312c-4924-894a-dc9058bb09f8" containerID="b396bd041e87bdebc6aeb6cfde1b7d0814dd7e1b873a758a711512ff91fb3c6d" exitCode=143 Jan 23 17:13:09 crc kubenswrapper[4606]: I0123 17:13:09.219839 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5a532327-312c-4924-894a-dc9058bb09f8","Type":"ContainerDied","Data":"b396bd041e87bdebc6aeb6cfde1b7d0814dd7e1b873a758a711512ff91fb3c6d"} Jan 23 17:13:09 crc kubenswrapper[4606]: I0123 17:13:09.224565 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e390ccaf-fe1a-4fd0-b19f-64d5d43b2005","Type":"ContainerStarted","Data":"2f0e0943bf61c096a2ed91b7ba63db3b0251c674aff9d47f24b670ad280919db"} Jan 23 17:13:09 crc kubenswrapper[4606]: I0123 17:13:09.224692 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e390ccaf-fe1a-4fd0-b19f-64d5d43b2005","Type":"ContainerStarted","Data":"d33a27716de5e77000062df041150b18e8a0291a451cb3110d8fd6bf92ce4f29"} Jan 23 17:13:09 crc kubenswrapper[4606]: I0123 17:13:09.224707 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e390ccaf-fe1a-4fd0-b19f-64d5d43b2005","Type":"ContainerStarted","Data":"b656a9f63b2ccdb616785f063052a1838a995c0073f8d90ae2a30edc0ec3e841"} Jan 23 17:13:10 crc kubenswrapper[4606]: I0123 17:13:10.248051 4606 generic.go:334] "Generic (PLEG): container finished" podID="fa0989ad-8e77-483c-93a2-ce76f542e610" containerID="50683fe2a869323608f2b96616f87d80e75a7739e0e4772c5568a969ac813c77" exitCode=0 Jan 23 17:13:10 crc kubenswrapper[4606]: I0123 17:13:10.250026 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fa0989ad-8e77-483c-93a2-ce76f542e610","Type":"ContainerDied","Data":"50683fe2a869323608f2b96616f87d80e75a7739e0e4772c5568a969ac813c77"} Jan 23 17:13:10 crc kubenswrapper[4606]: I0123 17:13:10.536176 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 17:13:10 crc kubenswrapper[4606]: I0123 17:13:10.552230 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.552211535 podStartE2EDuration="3.552211535s" podCreationTimestamp="2026-01-23 17:13:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:13:09.282306124 +0000 UTC m=+1223.282498540" watchObservedRunningTime="2026-01-23 17:13:10.552211535 +0000 UTC m=+1224.552403941" Jan 23 17:13:10 crc kubenswrapper[4606]: I0123 17:13:10.567306 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa0989ad-8e77-483c-93a2-ce76f542e610-config-data\") pod \"fa0989ad-8e77-483c-93a2-ce76f542e610\" (UID: \"fa0989ad-8e77-483c-93a2-ce76f542e610\") " Jan 23 17:13:10 crc kubenswrapper[4606]: I0123 17:13:10.567478 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa0989ad-8e77-483c-93a2-ce76f542e610-combined-ca-bundle\") pod \"fa0989ad-8e77-483c-93a2-ce76f542e610\" (UID: \"fa0989ad-8e77-483c-93a2-ce76f542e610\") " Jan 23 17:13:10 crc kubenswrapper[4606]: I0123 17:13:10.568110 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zv9bg\" (UniqueName: \"kubernetes.io/projected/fa0989ad-8e77-483c-93a2-ce76f542e610-kube-api-access-zv9bg\") pod \"fa0989ad-8e77-483c-93a2-ce76f542e610\" (UID: \"fa0989ad-8e77-483c-93a2-ce76f542e610\") " Jan 23 17:13:10 crc kubenswrapper[4606]: I0123 17:13:10.573013 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa0989ad-8e77-483c-93a2-ce76f542e610-kube-api-access-zv9bg" (OuterVolumeSpecName: "kube-api-access-zv9bg") pod "fa0989ad-8e77-483c-93a2-ce76f542e610" (UID: "fa0989ad-8e77-483c-93a2-ce76f542e610"). InnerVolumeSpecName "kube-api-access-zv9bg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:13:10 crc kubenswrapper[4606]: I0123 17:13:10.604021 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa0989ad-8e77-483c-93a2-ce76f542e610-config-data" (OuterVolumeSpecName: "config-data") pod "fa0989ad-8e77-483c-93a2-ce76f542e610" (UID: "fa0989ad-8e77-483c-93a2-ce76f542e610"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:13:10 crc kubenswrapper[4606]: I0123 17:13:10.605248 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa0989ad-8e77-483c-93a2-ce76f542e610-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa0989ad-8e77-483c-93a2-ce76f542e610" (UID: "fa0989ad-8e77-483c-93a2-ce76f542e610"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:13:10 crc kubenswrapper[4606]: I0123 17:13:10.670826 4606 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa0989ad-8e77-483c-93a2-ce76f542e610-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:10 crc kubenswrapper[4606]: I0123 17:13:10.670855 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zv9bg\" (UniqueName: \"kubernetes.io/projected/fa0989ad-8e77-483c-93a2-ce76f542e610-kube-api-access-zv9bg\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:10 crc kubenswrapper[4606]: I0123 17:13:10.670866 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa0989ad-8e77-483c-93a2-ce76f542e610-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:11 crc kubenswrapper[4606]: I0123 17:13:11.258532 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fa0989ad-8e77-483c-93a2-ce76f542e610","Type":"ContainerDied","Data":"5db9177e3dc733b78acb18f8411fe55c750f30c03be15b78007ea0f1c7f3dd13"} Jan 23 17:13:11 crc kubenswrapper[4606]: I0123 17:13:11.259949 4606 scope.go:117] "RemoveContainer" containerID="50683fe2a869323608f2b96616f87d80e75a7739e0e4772c5568a969ac813c77" Jan 23 17:13:11 crc kubenswrapper[4606]: I0123 17:13:11.258801 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 17:13:11 crc kubenswrapper[4606]: I0123 17:13:11.312275 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 17:13:11 crc kubenswrapper[4606]: I0123 17:13:11.344458 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 17:13:11 crc kubenswrapper[4606]: I0123 17:13:11.355512 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 17:13:11 crc kubenswrapper[4606]: E0123 17:13:11.356173 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa0989ad-8e77-483c-93a2-ce76f542e610" containerName="nova-scheduler-scheduler" Jan 23 17:13:11 crc kubenswrapper[4606]: I0123 17:13:11.356199 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa0989ad-8e77-483c-93a2-ce76f542e610" containerName="nova-scheduler-scheduler" Jan 23 17:13:11 crc kubenswrapper[4606]: I0123 17:13:11.356410 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa0989ad-8e77-483c-93a2-ce76f542e610" containerName="nova-scheduler-scheduler" Jan 23 17:13:11 crc kubenswrapper[4606]: I0123 17:13:11.363701 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 17:13:11 crc kubenswrapper[4606]: I0123 17:13:11.363938 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 17:13:11 crc kubenswrapper[4606]: I0123 17:13:11.364838 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="5a532327-312c-4924-894a-dc9058bb09f8" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": read tcp 10.217.0.2:37320->10.217.0.197:8775: read: connection reset by peer" Jan 23 17:13:11 crc kubenswrapper[4606]: I0123 17:13:11.364943 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="5a532327-312c-4924-894a-dc9058bb09f8" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": read tcp 10.217.0.2:37314->10.217.0.197:8775: read: connection reset by peer" Jan 23 17:13:11 crc kubenswrapper[4606]: I0123 17:13:11.366515 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 23 17:13:11 crc kubenswrapper[4606]: I0123 17:13:11.383911 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a99d9eb-47df-471a-80e7-ae2b9e5ef75d-config-data\") pod \"nova-scheduler-0\" (UID: \"6a99d9eb-47df-471a-80e7-ae2b9e5ef75d\") " pod="openstack/nova-scheduler-0" Jan 23 17:13:11 crc kubenswrapper[4606]: I0123 17:13:11.384638 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a99d9eb-47df-471a-80e7-ae2b9e5ef75d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6a99d9eb-47df-471a-80e7-ae2b9e5ef75d\") " pod="openstack/nova-scheduler-0" Jan 23 17:13:11 crc kubenswrapper[4606]: I0123 17:13:11.384917 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7fjz\" (UniqueName: \"kubernetes.io/projected/6a99d9eb-47df-471a-80e7-ae2b9e5ef75d-kube-api-access-d7fjz\") pod \"nova-scheduler-0\" (UID: \"6a99d9eb-47df-471a-80e7-ae2b9e5ef75d\") " pod="openstack/nova-scheduler-0" Jan 23 17:13:11 crc kubenswrapper[4606]: I0123 17:13:11.495035 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a99d9eb-47df-471a-80e7-ae2b9e5ef75d-config-data\") pod \"nova-scheduler-0\" (UID: \"6a99d9eb-47df-471a-80e7-ae2b9e5ef75d\") " pod="openstack/nova-scheduler-0" Jan 23 17:13:11 crc kubenswrapper[4606]: I0123 17:13:11.495121 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a99d9eb-47df-471a-80e7-ae2b9e5ef75d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6a99d9eb-47df-471a-80e7-ae2b9e5ef75d\") " pod="openstack/nova-scheduler-0" Jan 23 17:13:11 crc kubenswrapper[4606]: I0123 17:13:11.495240 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7fjz\" (UniqueName: \"kubernetes.io/projected/6a99d9eb-47df-471a-80e7-ae2b9e5ef75d-kube-api-access-d7fjz\") pod \"nova-scheduler-0\" (UID: \"6a99d9eb-47df-471a-80e7-ae2b9e5ef75d\") " pod="openstack/nova-scheduler-0" Jan 23 17:13:11 crc kubenswrapper[4606]: I0123 17:13:11.506605 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a99d9eb-47df-471a-80e7-ae2b9e5ef75d-config-data\") pod \"nova-scheduler-0\" (UID: 
\"6a99d9eb-47df-471a-80e7-ae2b9e5ef75d\") " pod="openstack/nova-scheduler-0" Jan 23 17:13:11 crc kubenswrapper[4606]: I0123 17:13:11.515118 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a99d9eb-47df-471a-80e7-ae2b9e5ef75d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6a99d9eb-47df-471a-80e7-ae2b9e5ef75d\") " pod="openstack/nova-scheduler-0" Jan 23 17:13:11 crc kubenswrapper[4606]: I0123 17:13:11.520006 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7fjz\" (UniqueName: \"kubernetes.io/projected/6a99d9eb-47df-471a-80e7-ae2b9e5ef75d-kube-api-access-d7fjz\") pod \"nova-scheduler-0\" (UID: \"6a99d9eb-47df-471a-80e7-ae2b9e5ef75d\") " pod="openstack/nova-scheduler-0" Jan 23 17:13:11 crc kubenswrapper[4606]: I0123 17:13:11.599075 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 17:13:11 crc kubenswrapper[4606]: I0123 17:13:11.845982 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.009787 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a532327-312c-4924-894a-dc9058bb09f8-nova-metadata-tls-certs\") pod \"5a532327-312c-4924-894a-dc9058bb09f8\" (UID: \"5a532327-312c-4924-894a-dc9058bb09f8\") " Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.009909 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjfgw\" (UniqueName: \"kubernetes.io/projected/5a532327-312c-4924-894a-dc9058bb09f8-kube-api-access-gjfgw\") pod \"5a532327-312c-4924-894a-dc9058bb09f8\" (UID: \"5a532327-312c-4924-894a-dc9058bb09f8\") " Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.009947 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a532327-312c-4924-894a-dc9058bb09f8-logs\") pod \"5a532327-312c-4924-894a-dc9058bb09f8\" (UID: \"5a532327-312c-4924-894a-dc9058bb09f8\") " Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.010107 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a532327-312c-4924-894a-dc9058bb09f8-combined-ca-bundle\") pod \"5a532327-312c-4924-894a-dc9058bb09f8\" (UID: \"5a532327-312c-4924-894a-dc9058bb09f8\") " Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.010151 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a532327-312c-4924-894a-dc9058bb09f8-config-data\") pod \"5a532327-312c-4924-894a-dc9058bb09f8\" (UID: \"5a532327-312c-4924-894a-dc9058bb09f8\") " Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.011885 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a532327-312c-4924-894a-dc9058bb09f8-logs" (OuterVolumeSpecName: "logs") pod "5a532327-312c-4924-894a-dc9058bb09f8" (UID: "5a532327-312c-4924-894a-dc9058bb09f8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.023371 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a532327-312c-4924-894a-dc9058bb09f8-kube-api-access-gjfgw" (OuterVolumeSpecName: "kube-api-access-gjfgw") pod "5a532327-312c-4924-894a-dc9058bb09f8" (UID: "5a532327-312c-4924-894a-dc9058bb09f8"). InnerVolumeSpecName "kube-api-access-gjfgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.046871 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a532327-312c-4924-894a-dc9058bb09f8-config-data" (OuterVolumeSpecName: "config-data") pod "5a532327-312c-4924-894a-dc9058bb09f8" (UID: "5a532327-312c-4924-894a-dc9058bb09f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.047389 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a532327-312c-4924-894a-dc9058bb09f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a532327-312c-4924-894a-dc9058bb09f8" (UID: "5a532327-312c-4924-894a-dc9058bb09f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.075938 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a532327-312c-4924-894a-dc9058bb09f8-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "5a532327-312c-4924-894a-dc9058bb09f8" (UID: "5a532327-312c-4924-894a-dc9058bb09f8"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.113372 4606 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a532327-312c-4924-894a-dc9058bb09f8-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.113427 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjfgw\" (UniqueName: \"kubernetes.io/projected/5a532327-312c-4924-894a-dc9058bb09f8-kube-api-access-gjfgw\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.113438 4606 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a532327-312c-4924-894a-dc9058bb09f8-logs\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.113448 4606 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a532327-312c-4924-894a-dc9058bb09f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.113456 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a532327-312c-4924-894a-dc9058bb09f8-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.140707 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.269309 4606 generic.go:334] "Generic (PLEG): container finished" podID="5a532327-312c-4924-894a-dc9058bb09f8" containerID="7a1adb4b4dcf7833d97f732bf5c325b520b069214015966462c12d29fe2135f6" exitCode=0 Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.269371 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.269386 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5a532327-312c-4924-894a-dc9058bb09f8","Type":"ContainerDied","Data":"7a1adb4b4dcf7833d97f732bf5c325b520b069214015966462c12d29fe2135f6"} Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.269801 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5a532327-312c-4924-894a-dc9058bb09f8","Type":"ContainerDied","Data":"b864b7d9263796390dc6b43ef734029f67fba3decb538d5615bce24a8db79f5e"} Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.269824 4606 scope.go:117] "RemoveContainer" containerID="7a1adb4b4dcf7833d97f732bf5c325b520b069214015966462c12d29fe2135f6" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.270737 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6a99d9eb-47df-471a-80e7-ae2b9e5ef75d","Type":"ContainerStarted","Data":"b67b6a219a4ce6158d49c99730f71396aa253605878e7aee2a807273516104d0"} Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.303425 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.303730 4606 scope.go:117] "RemoveContainer" containerID="b396bd041e87bdebc6aeb6cfde1b7d0814dd7e1b873a758a711512ff91fb3c6d" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.319238 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.329969 4606 scope.go:117] "RemoveContainer" containerID="7a1adb4b4dcf7833d97f732bf5c325b520b069214015966462c12d29fe2135f6" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.336341 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 23 17:13:12 crc kubenswrapper[4606]: E0123 17:13:12.336910 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a532327-312c-4924-894a-dc9058bb09f8" containerName="nova-metadata-metadata" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.336931 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a532327-312c-4924-894a-dc9058bb09f8" containerName="nova-metadata-metadata" Jan 23 17:13:12 crc kubenswrapper[4606]: E0123 17:13:12.336954 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a532327-312c-4924-894a-dc9058bb09f8" containerName="nova-metadata-log" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.336960 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a532327-312c-4924-894a-dc9058bb09f8" containerName="nova-metadata-log" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.337138 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a532327-312c-4924-894a-dc9058bb09f8" containerName="nova-metadata-metadata" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.337163 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a532327-312c-4924-894a-dc9058bb09f8" containerName="nova-metadata-log" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.338121 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.339713 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 23 17:13:12 crc kubenswrapper[4606]: E0123 17:13:12.339921 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a1adb4b4dcf7833d97f732bf5c325b520b069214015966462c12d29fe2135f6\": container with ID starting with 7a1adb4b4dcf7833d97f732bf5c325b520b069214015966462c12d29fe2135f6 not found: ID does not exist" containerID="7a1adb4b4dcf7833d97f732bf5c325b520b069214015966462c12d29fe2135f6" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.339967 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a1adb4b4dcf7833d97f732bf5c325b520b069214015966462c12d29fe2135f6"} err="failed to get container status \"7a1adb4b4dcf7833d97f732bf5c325b520b069214015966462c12d29fe2135f6\": rpc error: code = NotFound desc = could not find container \"7a1adb4b4dcf7833d97f732bf5c325b520b069214015966462c12d29fe2135f6\": container with ID starting with 7a1adb4b4dcf7833d97f732bf5c325b520b069214015966462c12d29fe2135f6 not found: ID does not exist" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.339996 4606 scope.go:117] "RemoveContainer" containerID="b396bd041e87bdebc6aeb6cfde1b7d0814dd7e1b873a758a711512ff91fb3c6d" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.341191 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 23 17:13:12 crc kubenswrapper[4606]: E0123 17:13:12.349729 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b396bd041e87bdebc6aeb6cfde1b7d0814dd7e1b873a758a711512ff91fb3c6d\": container with ID starting with b396bd041e87bdebc6aeb6cfde1b7d0814dd7e1b873a758a711512ff91fb3c6d not found: ID does not exist" containerID="b396bd041e87bdebc6aeb6cfde1b7d0814dd7e1b873a758a711512ff91fb3c6d" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.349792 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b396bd041e87bdebc6aeb6cfde1b7d0814dd7e1b873a758a711512ff91fb3c6d"} err="failed to get container status \"b396bd041e87bdebc6aeb6cfde1b7d0814dd7e1b873a758a711512ff91fb3c6d\": rpc error: code = NotFound desc = could not find container \"b396bd041e87bdebc6aeb6cfde1b7d0814dd7e1b873a758a711512ff91fb3c6d\": container with ID starting with b396bd041e87bdebc6aeb6cfde1b7d0814dd7e1b873a758a711512ff91fb3c6d not found: ID does not exist" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.349824 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.384738 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a532327-312c-4924-894a-dc9058bb09f8" path="/var/lib/kubelet/pods/5a532327-312c-4924-894a-dc9058bb09f8/volumes" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.385692 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa0989ad-8e77-483c-93a2-ce76f542e610" path="/var/lib/kubelet/pods/fa0989ad-8e77-483c-93a2-ce76f542e610/volumes" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.520096 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e591e35b-989c-4122-bd2b-7e732c294b72-logs\") pod \"nova-metadata-0\" (UID: \"e591e35b-989c-4122-bd2b-7e732c294b72\") " pod="openstack/nova-metadata-0" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.520193 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scm7m\" (UniqueName: \"kubernetes.io/projected/e591e35b-989c-4122-bd2b-7e732c294b72-kube-api-access-scm7m\") pod \"nova-metadata-0\" (UID: \"e591e35b-989c-4122-bd2b-7e732c294b72\") " pod="openstack/nova-metadata-0" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.520235 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e591e35b-989c-4122-bd2b-7e732c294b72-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e591e35b-989c-4122-bd2b-7e732c294b72\") " pod="openstack/nova-metadata-0" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.520316 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e591e35b-989c-4122-bd2b-7e732c294b72-config-data\") pod \"nova-metadata-0\" (UID: \"e591e35b-989c-4122-bd2b-7e732c294b72\") " pod="openstack/nova-metadata-0" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.520403 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e591e35b-989c-4122-bd2b-7e732c294b72-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e591e35b-989c-4122-bd2b-7e732c294b72\") " pod="openstack/nova-metadata-0" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.622500 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e591e35b-989c-4122-bd2b-7e732c294b72-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e591e35b-989c-4122-bd2b-7e732c294b72\") " pod="openstack/nova-metadata-0" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.622607 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e591e35b-989c-4122-bd2b-7e732c294b72-logs\") pod \"nova-metadata-0\" (UID: \"e591e35b-989c-4122-bd2b-7e732c294b72\") " pod="openstack/nova-metadata-0" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.622640 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scm7m\" (UniqueName: \"kubernetes.io/projected/e591e35b-989c-4122-bd2b-7e732c294b72-kube-api-access-scm7m\") pod \"nova-metadata-0\" (UID: \"e591e35b-989c-4122-bd2b-7e732c294b72\") " pod="openstack/nova-metadata-0" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.622667 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e591e35b-989c-4122-bd2b-7e732c294b72-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e591e35b-989c-4122-bd2b-7e732c294b72\") " pod="openstack/nova-metadata-0" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.622734 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e591e35b-989c-4122-bd2b-7e732c294b72-config-data\") pod \"nova-metadata-0\" (UID: \"e591e35b-989c-4122-bd2b-7e732c294b72\") " pod="openstack/nova-metadata-0" Jan 
23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.623641 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e591e35b-989c-4122-bd2b-7e732c294b72-logs\") pod \"nova-metadata-0\" (UID: \"e591e35b-989c-4122-bd2b-7e732c294b72\") " pod="openstack/nova-metadata-0" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.638227 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e591e35b-989c-4122-bd2b-7e732c294b72-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e591e35b-989c-4122-bd2b-7e732c294b72\") " pod="openstack/nova-metadata-0" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.639022 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e591e35b-989c-4122-bd2b-7e732c294b72-config-data\") pod \"nova-metadata-0\" (UID: \"e591e35b-989c-4122-bd2b-7e732c294b72\") " pod="openstack/nova-metadata-0" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.641279 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e591e35b-989c-4122-bd2b-7e732c294b72-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e591e35b-989c-4122-bd2b-7e732c294b72\") " pod="openstack/nova-metadata-0" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.653486 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scm7m\" (UniqueName: \"kubernetes.io/projected/e591e35b-989c-4122-bd2b-7e732c294b72-kube-api-access-scm7m\") pod \"nova-metadata-0\" (UID: \"e591e35b-989c-4122-bd2b-7e732c294b72\") " pod="openstack/nova-metadata-0" Jan 23 17:13:12 crc kubenswrapper[4606]: I0123 17:13:12.659525 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 17:13:13 crc kubenswrapper[4606]: I0123 17:13:13.115023 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 17:13:13 crc kubenswrapper[4606]: W0123 17:13:13.120066 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode591e35b_989c_4122_bd2b_7e732c294b72.slice/crio-fe733ec1c8abef6602a8573d40657212f8170940abbd174652eae5874b307836 WatchSource:0}: Error finding container fe733ec1c8abef6602a8573d40657212f8170940abbd174652eae5874b307836: Status 404 returned error can't find the container with id fe733ec1c8abef6602a8573d40657212f8170940abbd174652eae5874b307836 Jan 23 17:13:13 crc kubenswrapper[4606]: I0123 17:13:13.281831 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e591e35b-989c-4122-bd2b-7e732c294b72","Type":"ContainerStarted","Data":"fe733ec1c8abef6602a8573d40657212f8170940abbd174652eae5874b307836"} Jan 23 17:13:13 crc kubenswrapper[4606]: I0123 17:13:13.283332 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6a99d9eb-47df-471a-80e7-ae2b9e5ef75d","Type":"ContainerStarted","Data":"e0caa5f6b7c3035aae9376ca5368baf24375872d303f23137e6cc10dad6e5fda"} Jan 23 17:13:14 crc kubenswrapper[4606]: I0123 17:13:14.295825 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e591e35b-989c-4122-bd2b-7e732c294b72","Type":"ContainerStarted","Data":"a858d44b855024fb65b1155ca28e9151e09532deda728145277db876c982ca61"} Jan 23 17:13:14 crc kubenswrapper[4606]: I0123 17:13:14.296174 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e591e35b-989c-4122-bd2b-7e732c294b72","Type":"ContainerStarted","Data":"ca2b552e1798d89e1ddc4e2b957f4d600eb535a47d61a33eba9f72f656c68355"} Jan 23 17:13:14 crc kubenswrapper[4606]: I0123 17:13:14.318119 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.318095867 podStartE2EDuration="3.318095867s" podCreationTimestamp="2026-01-23 17:13:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:13:13.304695056 +0000 UTC m=+1227.304887462" watchObservedRunningTime="2026-01-23 17:13:14.318095867 +0000 UTC m=+1228.318288273" Jan 23 17:13:14 crc kubenswrapper[4606]: I0123 17:13:14.320372 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.32034957 podStartE2EDuration="2.32034957s" podCreationTimestamp="2026-01-23 17:13:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:13:14.31114465 +0000 UTC m=+1228.311337076" watchObservedRunningTime="2026-01-23 17:13:14.32034957 +0000 UTC m=+1228.320541976" Jan 23 17:13:16 crc kubenswrapper[4606]: I0123 17:13:16.599936 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 23 17:13:17 crc kubenswrapper[4606]: I0123 17:13:17.659899 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 17:13:17 crc kubenswrapper[4606]: I0123 17:13:17.660977 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-metadata-0" Jan 23 17:13:17 crc kubenswrapper[4606]: I0123 17:13:17.925886 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 17:13:17 crc kubenswrapper[4606]: I0123 17:13:17.926200 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 17:13:18 crc kubenswrapper[4606]: I0123 17:13:18.939858 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e390ccaf-fe1a-4fd0-b19f-64d5d43b2005" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:13:18 crc kubenswrapper[4606]: I0123 17:13:18.939963 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e390ccaf-fe1a-4fd0-b19f-64d5d43b2005" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 17:13:21 crc kubenswrapper[4606]: I0123 17:13:21.600278 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 23 17:13:21 crc kubenswrapper[4606]: I0123 17:13:21.637625 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 23 17:13:22 crc kubenswrapper[4606]: I0123 17:13:22.433899 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 23 17:13:22 crc kubenswrapper[4606]: I0123 17:13:22.660322 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 23 17:13:22 crc kubenswrapper[4606]: I0123 17:13:22.660375 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 23 17:13:23 crc kubenswrapper[4606]: I0123 17:13:23.669954 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e591e35b-989c-4122-bd2b-7e732c294b72" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 17:13:23 crc kubenswrapper[4606]: I0123 17:13:23.680361 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e591e35b-989c-4122-bd2b-7e732c294b72" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 17:13:27 crc kubenswrapper[4606]: I0123 17:13:27.542722 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 23 17:13:27 crc kubenswrapper[4606]: I0123 17:13:27.934653 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 23 17:13:27 crc kubenswrapper[4606]: I0123 17:13:27.936331 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 23 17:13:27 crc kubenswrapper[4606]: I0123 17:13:27.936473 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 23 17:13:27 crc kubenswrapper[4606]: I0123 17:13:27.960939 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 23 17:13:28 crc kubenswrapper[4606]: 
I0123 17:13:28.436438 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 23 17:13:28 crc kubenswrapper[4606]: I0123 17:13:28.442703 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 23 17:13:29 crc kubenswrapper[4606]: I0123 17:13:29.934614 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:13:29 crc kubenswrapper[4606]: I0123 17:13:29.934985 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:13:30 crc kubenswrapper[4606]: I0123 17:13:30.520174 4606 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podf2813a42-8848-431d-aee7-f6b572becd0f"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podf2813a42-8848-431d-aee7-f6b572becd0f] : Timed out while waiting for systemd to remove kubepods-besteffort-podf2813a42_8848_431d_aee7_f6b572becd0f.slice" Jan 23 17:13:32 crc kubenswrapper[4606]: I0123 17:13:32.670215 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 23 17:13:32 crc kubenswrapper[4606]: I0123 17:13:32.674885 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 23 17:13:32 crc kubenswrapper[4606]: I0123 17:13:32.676951 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 23 17:13:33 crc kubenswrapper[4606]: I0123 17:13:33.495165 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 23 17:13:42 crc kubenswrapper[4606]: I0123 17:13:42.375420 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 17:13:43 crc kubenswrapper[4606]: I0123 17:13:43.500517 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 17:13:48 crc kubenswrapper[4606]: I0123 17:13:48.142776 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="cb9493ec-ca68-4e7c-be90-eb6974f2f0b8" containerName="rabbitmq" containerID="cri-o://5ebecab66c396098a97b7c32397c23357b5c125f1bd3f66768ca64601ed3d0f5" gracePeriod=604795 Jan 23 17:13:48 crc kubenswrapper[4606]: I0123 17:13:48.845296 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="cf778ec6-6b7b-492e-948c-0314dc6b26ba" containerName="rabbitmq" containerID="cri-o://bc62fe43831d0e14eb16795d495bc0e105a0df2a474b9c5a4715aadb4c7a329f" gracePeriod=604795 Jan 23 17:13:54 crc kubenswrapper[4606]: I0123 17:13:54.702261 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 17:13:54 crc kubenswrapper[4606]: I0123 17:13:54.709047 4606 generic.go:334] "Generic (PLEG): container finished" podID="cb9493ec-ca68-4e7c-be90-eb6974f2f0b8" containerID="5ebecab66c396098a97b7c32397c23357b5c125f1bd3f66768ca64601ed3d0f5" exitCode=0 Jan 23 17:13:54 crc kubenswrapper[4606]: I0123 17:13:54.709091 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8","Type":"ContainerDied","Data":"5ebecab66c396098a97b7c32397c23357b5c125f1bd3f66768ca64601ed3d0f5"} Jan 23 17:13:54 crc kubenswrapper[4606]: I0123 17:13:54.709122 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8","Type":"ContainerDied","Data":"81f46f171c07e571c99c09658d060c6dfd773a605fceac16e0952b0b72680e24"} Jan 23 17:13:54 crc kubenswrapper[4606]: I0123 17:13:54.709122 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 17:13:54 crc kubenswrapper[4606]: I0123 17:13:54.709162 4606 scope.go:117] "RemoveContainer" containerID="5ebecab66c396098a97b7c32397c23357b5c125f1bd3f66768ca64601ed3d0f5" Jan 23 17:13:54 crc kubenswrapper[4606]: I0123 17:13:54.779785 4606 scope.go:117] "RemoveContainer" containerID="6e3c2049c1808305c7cda1d8688a4edcd56265f42ed8244a6aa2ea7f306f3d6b" Jan 23 17:13:54 crc kubenswrapper[4606]: I0123 17:13:54.837328 4606 scope.go:117] "RemoveContainer" containerID="5ebecab66c396098a97b7c32397c23357b5c125f1bd3f66768ca64601ed3d0f5" Jan 23 17:13:54 crc kubenswrapper[4606]: E0123 17:13:54.837825 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ebecab66c396098a97b7c32397c23357b5c125f1bd3f66768ca64601ed3d0f5\": container with ID starting with 5ebecab66c396098a97b7c32397c23357b5c125f1bd3f66768ca64601ed3d0f5 not found: ID does not exist" containerID="5ebecab66c396098a97b7c32397c23357b5c125f1bd3f66768ca64601ed3d0f5" Jan 23 17:13:54 crc kubenswrapper[4606]: I0123 17:13:54.837862 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ebecab66c396098a97b7c32397c23357b5c125f1bd3f66768ca64601ed3d0f5"} err="failed to get container status \"5ebecab66c396098a97b7c32397c23357b5c125f1bd3f66768ca64601ed3d0f5\": rpc error: code = NotFound desc = could not find container \"5ebecab66c396098a97b7c32397c23357b5c125f1bd3f66768ca64601ed3d0f5\": container with ID starting with 5ebecab66c396098a97b7c32397c23357b5c125f1bd3f66768ca64601ed3d0f5 not found: ID does not exist" Jan 23 17:13:54 crc kubenswrapper[4606]: I0123 17:13:54.837888 4606 scope.go:117] "RemoveContainer" containerID="6e3c2049c1808305c7cda1d8688a4edcd56265f42ed8244a6aa2ea7f306f3d6b" Jan 23 17:13:54 crc kubenswrapper[4606]: E0123 17:13:54.838560 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e3c2049c1808305c7cda1d8688a4edcd56265f42ed8244a6aa2ea7f306f3d6b\": container with ID starting with 6e3c2049c1808305c7cda1d8688a4edcd56265f42ed8244a6aa2ea7f306f3d6b not found: ID does not exist" containerID="6e3c2049c1808305c7cda1d8688a4edcd56265f42ed8244a6aa2ea7f306f3d6b" Jan 23 17:13:54 crc kubenswrapper[4606]: I0123 17:13:54.838639 4606 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6e3c2049c1808305c7cda1d8688a4edcd56265f42ed8244a6aa2ea7f306f3d6b"} err="failed to get container status \"6e3c2049c1808305c7cda1d8688a4edcd56265f42ed8244a6aa2ea7f306f3d6b\": rpc error: code = NotFound desc = could not find container \"6e3c2049c1808305c7cda1d8688a4edcd56265f42ed8244a6aa2ea7f306f3d6b\": container with ID starting with 6e3c2049c1808305c7cda1d8688a4edcd56265f42ed8244a6aa2ea7f306f3d6b not found: ID does not exist" Jan 23 17:13:54 crc kubenswrapper[4606]: I0123 17:13:54.845240 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-config-data\") pod \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " Jan 23 17:13:54 crc kubenswrapper[4606]: I0123 17:13:54.845288 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-rabbitmq-plugins\") pod \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " Jan 23 17:13:54 crc kubenswrapper[4606]: I0123 17:13:54.845348 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-erlang-cookie-secret\") pod \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " Jan 23 17:13:54 crc kubenswrapper[4606]: I0123 17:13:54.845377 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-rabbitmq-tls\") pod \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " Jan 23 17:13:54 crc kubenswrapper[4606]: I0123 17:13:54.845406 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " Jan 23 17:13:54 crc kubenswrapper[4606]: I0123 17:13:54.845444 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-rabbitmq-confd\") pod \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " Jan 23 17:13:54 crc kubenswrapper[4606]: I0123 17:13:54.845514 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-rabbitmq-erlang-cookie\") pod \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " Jan 23 17:13:54 crc kubenswrapper[4606]: I0123 17:13:54.845555 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-pod-info\") pod \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " Jan 23 17:13:54 crc kubenswrapper[4606]: I0123 17:13:54.845576 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-server-conf\") pod 
\"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " Jan 23 17:13:54 crc kubenswrapper[4606]: I0123 17:13:54.845666 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzhb2\" (UniqueName: \"kubernetes.io/projected/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-kube-api-access-wzhb2\") pod \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " Jan 23 17:13:54 crc kubenswrapper[4606]: I0123 17:13:54.845684 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-plugins-conf\") pod \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\" (UID: \"cb9493ec-ca68-4e7c-be90-eb6974f2f0b8\") " Jan 23 17:13:54 crc kubenswrapper[4606]: I0123 17:13:54.862525 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "cb9493ec-ca68-4e7c-be90-eb6974f2f0b8" (UID: "cb9493ec-ca68-4e7c-be90-eb6974f2f0b8"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:13:54 crc kubenswrapper[4606]: I0123 17:13:54.867167 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "cb9493ec-ca68-4e7c-be90-eb6974f2f0b8" (UID: "cb9493ec-ca68-4e7c-be90-eb6974f2f0b8"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:13:54 crc kubenswrapper[4606]: I0123 17:13:54.867305 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "cb9493ec-ca68-4e7c-be90-eb6974f2f0b8" (UID: "cb9493ec-ca68-4e7c-be90-eb6974f2f0b8"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 17:13:54 crc kubenswrapper[4606]: I0123 17:13:54.874769 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-kube-api-access-wzhb2" (OuterVolumeSpecName: "kube-api-access-wzhb2") pod "cb9493ec-ca68-4e7c-be90-eb6974f2f0b8" (UID: "cb9493ec-ca68-4e7c-be90-eb6974f2f0b8"). InnerVolumeSpecName "kube-api-access-wzhb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:13:54 crc kubenswrapper[4606]: I0123 17:13:54.876113 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "cb9493ec-ca68-4e7c-be90-eb6974f2f0b8" (UID: "cb9493ec-ca68-4e7c-be90-eb6974f2f0b8"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:13:54 crc kubenswrapper[4606]: I0123 17:13:54.882660 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "cb9493ec-ca68-4e7c-be90-eb6974f2f0b8" (UID: "cb9493ec-ca68-4e7c-be90-eb6974f2f0b8"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:13:54 crc kubenswrapper[4606]: I0123 17:13:54.882749 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-pod-info" (OuterVolumeSpecName: "pod-info") pod "cb9493ec-ca68-4e7c-be90-eb6974f2f0b8" (UID: "cb9493ec-ca68-4e7c-be90-eb6974f2f0b8"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 23 17:13:54 crc kubenswrapper[4606]: I0123 17:13:54.888858 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "cb9493ec-ca68-4e7c-be90-eb6974f2f0b8" (UID: "cb9493ec-ca68-4e7c-be90-eb6974f2f0b8"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:13:54 crc kubenswrapper[4606]: I0123 17:13:54.939890 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-config-data" (OuterVolumeSpecName: "config-data") pod "cb9493ec-ca68-4e7c-be90-eb6974f2f0b8" (UID: "cb9493ec-ca68-4e7c-be90-eb6974f2f0b8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:13:54 crc kubenswrapper[4606]: I0123 17:13:54.953926 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:54 crc kubenswrapper[4606]: I0123 17:13:54.954737 4606 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:54 crc kubenswrapper[4606]: I0123 17:13:54.954872 4606 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:54 crc kubenswrapper[4606]: I0123 17:13:54.954956 4606 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:54 crc kubenswrapper[4606]: I0123 17:13:54.955086 4606 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 23 17:13:54 crc kubenswrapper[4606]: I0123 17:13:54.955185 4606 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:54 crc kubenswrapper[4606]: I0123 17:13:54.955275 4606 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-pod-info\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:54 crc kubenswrapper[4606]: I0123 17:13:54.955361 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzhb2\" (UniqueName: \"kubernetes.io/projected/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-kube-api-access-wzhb2\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:54 
crc kubenswrapper[4606]: I0123 17:13:54.955449 4606 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:54 crc kubenswrapper[4606]: I0123 17:13:54.971935 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-server-conf" (OuterVolumeSpecName: "server-conf") pod "cb9493ec-ca68-4e7c-be90-eb6974f2f0b8" (UID: "cb9493ec-ca68-4e7c-be90-eb6974f2f0b8"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.007693 4606 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.019426 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "cb9493ec-ca68-4e7c-be90-eb6974f2f0b8" (UID: "cb9493ec-ca68-4e7c-be90-eb6974f2f0b8"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.056291 4606 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.056570 4606 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.056671 4606 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8-server-conf\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.304636 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.364232 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8hdn\" (UniqueName: \"kubernetes.io/projected/cf778ec6-6b7b-492e-948c-0314dc6b26ba-kube-api-access-n8hdn\") pod \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.364293 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cf778ec6-6b7b-492e-948c-0314dc6b26ba-rabbitmq-plugins\") pod \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.364332 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cf778ec6-6b7b-492e-948c-0314dc6b26ba-server-conf\") pod \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.364375 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cf778ec6-6b7b-492e-948c-0314dc6b26ba-rabbitmq-confd\") pod \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.364408 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cf778ec6-6b7b-492e-948c-0314dc6b26ba-erlang-cookie-secret\") pod \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.364442 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf778ec6-6b7b-492e-948c-0314dc6b26ba-config-data\") pod \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.364477 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cf778ec6-6b7b-492e-948c-0314dc6b26ba-rabbitmq-tls\") pod \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.364504 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cf778ec6-6b7b-492e-948c-0314dc6b26ba-plugins-conf\") pod \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.364546 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.364608 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cf778ec6-6b7b-492e-948c-0314dc6b26ba-rabbitmq-erlang-cookie\") pod \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\" (UID: 
\"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.364651 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cf778ec6-6b7b-492e-948c-0314dc6b26ba-pod-info\") pod \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\" (UID: \"cf778ec6-6b7b-492e-948c-0314dc6b26ba\") " Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.365131 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf778ec6-6b7b-492e-948c-0314dc6b26ba-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "cf778ec6-6b7b-492e-948c-0314dc6b26ba" (UID: "cf778ec6-6b7b-492e-948c-0314dc6b26ba"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.365302 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf778ec6-6b7b-492e-948c-0314dc6b26ba-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "cf778ec6-6b7b-492e-948c-0314dc6b26ba" (UID: "cf778ec6-6b7b-492e-948c-0314dc6b26ba"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.375642 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf778ec6-6b7b-492e-948c-0314dc6b26ba-kube-api-access-n8hdn" (OuterVolumeSpecName: "kube-api-access-n8hdn") pod "cf778ec6-6b7b-492e-948c-0314dc6b26ba" (UID: "cf778ec6-6b7b-492e-948c-0314dc6b26ba"). InnerVolumeSpecName "kube-api-access-n8hdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.375805 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "cf778ec6-6b7b-492e-948c-0314dc6b26ba" (UID: "cf778ec6-6b7b-492e-948c-0314dc6b26ba"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.376193 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf778ec6-6b7b-492e-948c-0314dc6b26ba-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "cf778ec6-6b7b-492e-948c-0314dc6b26ba" (UID: "cf778ec6-6b7b-492e-948c-0314dc6b26ba"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.376707 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf778ec6-6b7b-492e-948c-0314dc6b26ba-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "cf778ec6-6b7b-492e-948c-0314dc6b26ba" (UID: "cf778ec6-6b7b-492e-948c-0314dc6b26ba"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.387511 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf778ec6-6b7b-492e-948c-0314dc6b26ba-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "cf778ec6-6b7b-492e-948c-0314dc6b26ba" (UID: "cf778ec6-6b7b-492e-948c-0314dc6b26ba"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.388815 4606 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cf778ec6-6b7b-492e-948c-0314dc6b26ba-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.388844 4606 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cf778ec6-6b7b-492e-948c-0314dc6b26ba-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.388855 4606 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cf778ec6-6b7b-492e-948c-0314dc6b26ba-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.388884 4606 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.388896 4606 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cf778ec6-6b7b-492e-948c-0314dc6b26ba-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.388907 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8hdn\" (UniqueName: \"kubernetes.io/projected/cf778ec6-6b7b-492e-948c-0314dc6b26ba-kube-api-access-n8hdn\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.388918 4606 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cf778ec6-6b7b-492e-948c-0314dc6b26ba-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.395929 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/cf778ec6-6b7b-492e-948c-0314dc6b26ba-pod-info" (OuterVolumeSpecName: "pod-info") pod "cf778ec6-6b7b-492e-948c-0314dc6b26ba" (UID: "cf778ec6-6b7b-492e-948c-0314dc6b26ba"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.434836 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf778ec6-6b7b-492e-948c-0314dc6b26ba-config-data" (OuterVolumeSpecName: "config-data") pod "cf778ec6-6b7b-492e-948c-0314dc6b26ba" (UID: "cf778ec6-6b7b-492e-948c-0314dc6b26ba"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.464989 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.482106 4606 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.495972 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf778ec6-6b7b-492e-948c-0314dc6b26ba-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.496204 4606 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.496289 4606 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cf778ec6-6b7b-492e-948c-0314dc6b26ba-pod-info\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.510825 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.560511 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 17:13:55 crc kubenswrapper[4606]: E0123 17:13:55.562841 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb9493ec-ca68-4e7c-be90-eb6974f2f0b8" containerName="rabbitmq" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.562900 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb9493ec-ca68-4e7c-be90-eb6974f2f0b8" containerName="rabbitmq" Jan 23 17:13:55 crc kubenswrapper[4606]: E0123 17:13:55.562949 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb9493ec-ca68-4e7c-be90-eb6974f2f0b8" containerName="setup-container" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.563881 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb9493ec-ca68-4e7c-be90-eb6974f2f0b8" containerName="setup-container" Jan 23 17:13:55 crc kubenswrapper[4606]: E0123 17:13:55.563894 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf778ec6-6b7b-492e-948c-0314dc6b26ba" containerName="rabbitmq" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.563925 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf778ec6-6b7b-492e-948c-0314dc6b26ba" containerName="rabbitmq" Jan 23 17:13:55 crc kubenswrapper[4606]: E0123 17:13:55.563961 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf778ec6-6b7b-492e-948c-0314dc6b26ba" containerName="setup-container" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.563970 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf778ec6-6b7b-492e-948c-0314dc6b26ba" containerName="setup-container" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.564632 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb9493ec-ca68-4e7c-be90-eb6974f2f0b8" containerName="rabbitmq" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.564654 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf778ec6-6b7b-492e-948c-0314dc6b26ba" containerName="rabbitmq" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.568597 4606 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.571954 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-bbn8s" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.572973 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.573472 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.573784 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.574014 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.580299 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.584569 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.634604 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf778ec6-6b7b-492e-948c-0314dc6b26ba-server-conf" (OuterVolumeSpecName: "server-conf") pod "cf778ec6-6b7b-492e-948c-0314dc6b26ba" (UID: "cf778ec6-6b7b-492e-948c-0314dc6b26ba"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.634678 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.663510 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf778ec6-6b7b-492e-948c-0314dc6b26ba-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "cf778ec6-6b7b-492e-948c-0314dc6b26ba" (UID: "cf778ec6-6b7b-492e-948c-0314dc6b26ba"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.721121 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/486d7661-6513-4b14-9367-9d4ddb8e6d2c-config-data\") pod \"rabbitmq-server-0\" (UID: \"486d7661-6513-4b14-9367-9d4ddb8e6d2c\") " pod="openstack/rabbitmq-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.721190 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/486d7661-6513-4b14-9367-9d4ddb8e6d2c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"486d7661-6513-4b14-9367-9d4ddb8e6d2c\") " pod="openstack/rabbitmq-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.721252 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"486d7661-6513-4b14-9367-9d4ddb8e6d2c\") " pod="openstack/rabbitmq-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.721285 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/486d7661-6513-4b14-9367-9d4ddb8e6d2c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"486d7661-6513-4b14-9367-9d4ddb8e6d2c\") " pod="openstack/rabbitmq-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.721309 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c2t5\" (UniqueName: \"kubernetes.io/projected/486d7661-6513-4b14-9367-9d4ddb8e6d2c-kube-api-access-7c2t5\") pod \"rabbitmq-server-0\" (UID: \"486d7661-6513-4b14-9367-9d4ddb8e6d2c\") " pod="openstack/rabbitmq-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.721330 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/486d7661-6513-4b14-9367-9d4ddb8e6d2c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"486d7661-6513-4b14-9367-9d4ddb8e6d2c\") " pod="openstack/rabbitmq-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.721360 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/486d7661-6513-4b14-9367-9d4ddb8e6d2c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"486d7661-6513-4b14-9367-9d4ddb8e6d2c\") " pod="openstack/rabbitmq-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.721378 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/486d7661-6513-4b14-9367-9d4ddb8e6d2c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"486d7661-6513-4b14-9367-9d4ddb8e6d2c\") " pod="openstack/rabbitmq-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.721403 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/486d7661-6513-4b14-9367-9d4ddb8e6d2c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"486d7661-6513-4b14-9367-9d4ddb8e6d2c\") " pod="openstack/rabbitmq-server-0" Jan 23 17:13:55 crc 
kubenswrapper[4606]: I0123 17:13:55.721446 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/486d7661-6513-4b14-9367-9d4ddb8e6d2c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"486d7661-6513-4b14-9367-9d4ddb8e6d2c\") " pod="openstack/rabbitmq-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.721470 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/486d7661-6513-4b14-9367-9d4ddb8e6d2c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"486d7661-6513-4b14-9367-9d4ddb8e6d2c\") " pod="openstack/rabbitmq-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.721559 4606 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cf778ec6-6b7b-492e-948c-0314dc6b26ba-server-conf\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.721719 4606 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cf778ec6-6b7b-492e-948c-0314dc6b26ba-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.725931 4606 generic.go:334] "Generic (PLEG): container finished" podID="cf778ec6-6b7b-492e-948c-0314dc6b26ba" containerID="bc62fe43831d0e14eb16795d495bc0e105a0df2a474b9c5a4715aadb4c7a329f" exitCode=0 Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.726092 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.726194 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cf778ec6-6b7b-492e-948c-0314dc6b26ba","Type":"ContainerDied","Data":"bc62fe43831d0e14eb16795d495bc0e105a0df2a474b9c5a4715aadb4c7a329f"} Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.726327 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cf778ec6-6b7b-492e-948c-0314dc6b26ba","Type":"ContainerDied","Data":"307f6ff974e9b8eb96a291c11302ea1a8cec3a6338c7fbe14ffae2727dc6e9a9"} Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.726447 4606 scope.go:117] "RemoveContainer" containerID="bc62fe43831d0e14eb16795d495bc0e105a0df2a474b9c5a4715aadb4c7a329f" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.751086 4606 scope.go:117] "RemoveContainer" containerID="ca273f8425919426be81643ca0ad0cf561569e62c6c2eed8c0ce72dfda52d72c" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.768637 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.776102 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.797215 4606 scope.go:117] "RemoveContainer" containerID="bc62fe43831d0e14eb16795d495bc0e105a0df2a474b9c5a4715aadb4c7a329f" Jan 23 17:13:55 crc kubenswrapper[4606]: E0123 17:13:55.797666 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc62fe43831d0e14eb16795d495bc0e105a0df2a474b9c5a4715aadb4c7a329f\": container with ID starting with bc62fe43831d0e14eb16795d495bc0e105a0df2a474b9c5a4715aadb4c7a329f not 
found: ID does not exist" containerID="bc62fe43831d0e14eb16795d495bc0e105a0df2a474b9c5a4715aadb4c7a329f" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.797692 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc62fe43831d0e14eb16795d495bc0e105a0df2a474b9c5a4715aadb4c7a329f"} err="failed to get container status \"bc62fe43831d0e14eb16795d495bc0e105a0df2a474b9c5a4715aadb4c7a329f\": rpc error: code = NotFound desc = could not find container \"bc62fe43831d0e14eb16795d495bc0e105a0df2a474b9c5a4715aadb4c7a329f\": container with ID starting with bc62fe43831d0e14eb16795d495bc0e105a0df2a474b9c5a4715aadb4c7a329f not found: ID does not exist" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.797713 4606 scope.go:117] "RemoveContainer" containerID="ca273f8425919426be81643ca0ad0cf561569e62c6c2eed8c0ce72dfda52d72c" Jan 23 17:13:55 crc kubenswrapper[4606]: E0123 17:13:55.802795 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca273f8425919426be81643ca0ad0cf561569e62c6c2eed8c0ce72dfda52d72c\": container with ID starting with ca273f8425919426be81643ca0ad0cf561569e62c6c2eed8c0ce72dfda52d72c not found: ID does not exist" containerID="ca273f8425919426be81643ca0ad0cf561569e62c6c2eed8c0ce72dfda52d72c" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.802852 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca273f8425919426be81643ca0ad0cf561569e62c6c2eed8c0ce72dfda52d72c"} err="failed to get container status \"ca273f8425919426be81643ca0ad0cf561569e62c6c2eed8c0ce72dfda52d72c\": rpc error: code = NotFound desc = could not find container \"ca273f8425919426be81643ca0ad0cf561569e62c6c2eed8c0ce72dfda52d72c\": container with ID starting with ca273f8425919426be81643ca0ad0cf561569e62c6c2eed8c0ce72dfda52d72c not found: ID does not exist" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.806038 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.807881 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.810502 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.812017 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.812205 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.812324 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.812547 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.812746 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.813000 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-qpkp2" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.824154 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/486d7661-6513-4b14-9367-9d4ddb8e6d2c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"486d7661-6513-4b14-9367-9d4ddb8e6d2c\") " pod="openstack/rabbitmq-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.824805 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/486d7661-6513-4b14-9367-9d4ddb8e6d2c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"486d7661-6513-4b14-9367-9d4ddb8e6d2c\") " pod="openstack/rabbitmq-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.824884 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/486d7661-6513-4b14-9367-9d4ddb8e6d2c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"486d7661-6513-4b14-9367-9d4ddb8e6d2c\") " pod="openstack/rabbitmq-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.824917 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/486d7661-6513-4b14-9367-9d4ddb8e6d2c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"486d7661-6513-4b14-9367-9d4ddb8e6d2c\") " pod="openstack/rabbitmq-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.825038 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/486d7661-6513-4b14-9367-9d4ddb8e6d2c-config-data\") pod \"rabbitmq-server-0\" (UID: \"486d7661-6513-4b14-9367-9d4ddb8e6d2c\") " pod="openstack/rabbitmq-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.825084 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/486d7661-6513-4b14-9367-9d4ddb8e6d2c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"486d7661-6513-4b14-9367-9d4ddb8e6d2c\") " pod="openstack/rabbitmq-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.825162 
4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"486d7661-6513-4b14-9367-9d4ddb8e6d2c\") " pod="openstack/rabbitmq-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.825187 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/486d7661-6513-4b14-9367-9d4ddb8e6d2c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"486d7661-6513-4b14-9367-9d4ddb8e6d2c\") " pod="openstack/rabbitmq-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.825214 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c2t5\" (UniqueName: \"kubernetes.io/projected/486d7661-6513-4b14-9367-9d4ddb8e6d2c-kube-api-access-7c2t5\") pod \"rabbitmq-server-0\" (UID: \"486d7661-6513-4b14-9367-9d4ddb8e6d2c\") " pod="openstack/rabbitmq-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.825246 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/486d7661-6513-4b14-9367-9d4ddb8e6d2c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"486d7661-6513-4b14-9367-9d4ddb8e6d2c\") " pod="openstack/rabbitmq-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.825285 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/486d7661-6513-4b14-9367-9d4ddb8e6d2c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"486d7661-6513-4b14-9367-9d4ddb8e6d2c\") " pod="openstack/rabbitmq-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.825713 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/486d7661-6513-4b14-9367-9d4ddb8e6d2c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"486d7661-6513-4b14-9367-9d4ddb8e6d2c\") " pod="openstack/rabbitmq-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.825988 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/486d7661-6513-4b14-9367-9d4ddb8e6d2c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"486d7661-6513-4b14-9367-9d4ddb8e6d2c\") " pod="openstack/rabbitmq-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.826525 4606 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"486d7661-6513-4b14-9367-9d4ddb8e6d2c\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.830877 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/486d7661-6513-4b14-9367-9d4ddb8e6d2c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"486d7661-6513-4b14-9367-9d4ddb8e6d2c\") " pod="openstack/rabbitmq-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.831366 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/486d7661-6513-4b14-9367-9d4ddb8e6d2c-config-data\") pod \"rabbitmq-server-0\" (UID: \"486d7661-6513-4b14-9367-9d4ddb8e6d2c\") 
" pod="openstack/rabbitmq-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.832104 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/486d7661-6513-4b14-9367-9d4ddb8e6d2c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"486d7661-6513-4b14-9367-9d4ddb8e6d2c\") " pod="openstack/rabbitmq-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.836501 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/486d7661-6513-4b14-9367-9d4ddb8e6d2c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"486d7661-6513-4b14-9367-9d4ddb8e6d2c\") " pod="openstack/rabbitmq-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.836764 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/486d7661-6513-4b14-9367-9d4ddb8e6d2c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"486d7661-6513-4b14-9367-9d4ddb8e6d2c\") " pod="openstack/rabbitmq-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.847472 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.864954 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/486d7661-6513-4b14-9367-9d4ddb8e6d2c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"486d7661-6513-4b14-9367-9d4ddb8e6d2c\") " pod="openstack/rabbitmq-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.867187 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/486d7661-6513-4b14-9367-9d4ddb8e6d2c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"486d7661-6513-4b14-9367-9d4ddb8e6d2c\") " pod="openstack/rabbitmq-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.873481 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c2t5\" (UniqueName: \"kubernetes.io/projected/486d7661-6513-4b14-9367-9d4ddb8e6d2c-kube-api-access-7c2t5\") pod \"rabbitmq-server-0\" (UID: \"486d7661-6513-4b14-9367-9d4ddb8e6d2c\") " pod="openstack/rabbitmq-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.898974 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"486d7661-6513-4b14-9367-9d4ddb8e6d2c\") " pod="openstack/rabbitmq-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.926688 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/008d9c33-291f-4081-b30f-eb32178efcd0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"008d9c33-291f-4081-b30f-eb32178efcd0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.926756 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/008d9c33-291f-4081-b30f-eb32178efcd0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"008d9c33-291f-4081-b30f-eb32178efcd0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.926838 4606 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkczm\" (UniqueName: \"kubernetes.io/projected/008d9c33-291f-4081-b30f-eb32178efcd0-kube-api-access-wkczm\") pod \"rabbitmq-cell1-server-0\" (UID: \"008d9c33-291f-4081-b30f-eb32178efcd0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.926995 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"008d9c33-291f-4081-b30f-eb32178efcd0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.927072 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/008d9c33-291f-4081-b30f-eb32178efcd0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"008d9c33-291f-4081-b30f-eb32178efcd0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.927119 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/008d9c33-291f-4081-b30f-eb32178efcd0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"008d9c33-291f-4081-b30f-eb32178efcd0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.927171 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/008d9c33-291f-4081-b30f-eb32178efcd0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"008d9c33-291f-4081-b30f-eb32178efcd0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.927220 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/008d9c33-291f-4081-b30f-eb32178efcd0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"008d9c33-291f-4081-b30f-eb32178efcd0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.927249 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/008d9c33-291f-4081-b30f-eb32178efcd0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"008d9c33-291f-4081-b30f-eb32178efcd0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.927307 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/008d9c33-291f-4081-b30f-eb32178efcd0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"008d9c33-291f-4081-b30f-eb32178efcd0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 17:13:55.927361 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/008d9c33-291f-4081-b30f-eb32178efcd0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"008d9c33-291f-4081-b30f-eb32178efcd0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:13:55 crc kubenswrapper[4606]: I0123 
17:13:55.979490 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.030323 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/008d9c33-291f-4081-b30f-eb32178efcd0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"008d9c33-291f-4081-b30f-eb32178efcd0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.030381 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/008d9c33-291f-4081-b30f-eb32178efcd0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"008d9c33-291f-4081-b30f-eb32178efcd0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.030412 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/008d9c33-291f-4081-b30f-eb32178efcd0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"008d9c33-291f-4081-b30f-eb32178efcd0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.030442 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/008d9c33-291f-4081-b30f-eb32178efcd0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"008d9c33-291f-4081-b30f-eb32178efcd0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.030502 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/008d9c33-291f-4081-b30f-eb32178efcd0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"008d9c33-291f-4081-b30f-eb32178efcd0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.030524 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/008d9c33-291f-4081-b30f-eb32178efcd0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"008d9c33-291f-4081-b30f-eb32178efcd0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.030579 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkczm\" (UniqueName: \"kubernetes.io/projected/008d9c33-291f-4081-b30f-eb32178efcd0-kube-api-access-wkczm\") pod \"rabbitmq-cell1-server-0\" (UID: \"008d9c33-291f-4081-b30f-eb32178efcd0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.030852 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"008d9c33-291f-4081-b30f-eb32178efcd0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.030874 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/008d9c33-291f-4081-b30f-eb32178efcd0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"008d9c33-291f-4081-b30f-eb32178efcd0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:13:56 crc 
kubenswrapper[4606]: I0123 17:13:56.030904 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/008d9c33-291f-4081-b30f-eb32178efcd0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"008d9c33-291f-4081-b30f-eb32178efcd0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.030953 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/008d9c33-291f-4081-b30f-eb32178efcd0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"008d9c33-291f-4081-b30f-eb32178efcd0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.031215 4606 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"008d9c33-291f-4081-b30f-eb32178efcd0\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.031603 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/008d9c33-291f-4081-b30f-eb32178efcd0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"008d9c33-291f-4081-b30f-eb32178efcd0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.031681 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/008d9c33-291f-4081-b30f-eb32178efcd0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"008d9c33-291f-4081-b30f-eb32178efcd0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.032574 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/008d9c33-291f-4081-b30f-eb32178efcd0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"008d9c33-291f-4081-b30f-eb32178efcd0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.032808 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/008d9c33-291f-4081-b30f-eb32178efcd0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"008d9c33-291f-4081-b30f-eb32178efcd0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.033060 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/008d9c33-291f-4081-b30f-eb32178efcd0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"008d9c33-291f-4081-b30f-eb32178efcd0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.035249 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/008d9c33-291f-4081-b30f-eb32178efcd0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"008d9c33-291f-4081-b30f-eb32178efcd0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.039275 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/008d9c33-291f-4081-b30f-eb32178efcd0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"008d9c33-291f-4081-b30f-eb32178efcd0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.039433 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/008d9c33-291f-4081-b30f-eb32178efcd0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"008d9c33-291f-4081-b30f-eb32178efcd0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.040559 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/008d9c33-291f-4081-b30f-eb32178efcd0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"008d9c33-291f-4081-b30f-eb32178efcd0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.051063 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkczm\" (UniqueName: \"kubernetes.io/projected/008d9c33-291f-4081-b30f-eb32178efcd0-kube-api-access-wkczm\") pod \"rabbitmq-cell1-server-0\" (UID: \"008d9c33-291f-4081-b30f-eb32178efcd0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.068050 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"008d9c33-291f-4081-b30f-eb32178efcd0\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.166599 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.277335 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-nxsb5"] Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.284779 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-nxsb5" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.290897 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.315301 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-nxsb5"] Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.336925 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jdpk\" (UniqueName: \"kubernetes.io/projected/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-kube-api-access-7jdpk\") pod \"dnsmasq-dns-79bd4cc8c9-nxsb5\" (UID: \"4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nxsb5" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.337239 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-nxsb5\" (UID: \"4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nxsb5" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.337383 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-nxsb5\" (UID: \"4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nxsb5" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.337521 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-config\") pod \"dnsmasq-dns-79bd4cc8c9-nxsb5\" (UID: \"4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nxsb5" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.337652 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-nxsb5\" (UID: \"4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nxsb5" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.337788 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-nxsb5\" (UID: \"4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nxsb5" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.338028 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-nxsb5\" (UID: \"4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nxsb5" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.401839 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb9493ec-ca68-4e7c-be90-eb6974f2f0b8" path="/var/lib/kubelet/pods/cb9493ec-ca68-4e7c-be90-eb6974f2f0b8/volumes" Jan 23 17:13:56 crc kubenswrapper[4606]: 
I0123 17:13:56.407889 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf778ec6-6b7b-492e-948c-0314dc6b26ba" path="/var/lib/kubelet/pods/cf778ec6-6b7b-492e-948c-0314dc6b26ba/volumes" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.439781 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-nxsb5\" (UID: \"4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nxsb5" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.439857 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-nxsb5\" (UID: \"4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nxsb5" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.439904 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-config\") pod \"dnsmasq-dns-79bd4cc8c9-nxsb5\" (UID: \"4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nxsb5" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.439923 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-nxsb5\" (UID: \"4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nxsb5" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.439971 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-nxsb5\" (UID: \"4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nxsb5" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.440092 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-nxsb5\" (UID: \"4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nxsb5" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.440154 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jdpk\" (UniqueName: \"kubernetes.io/projected/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-kube-api-access-7jdpk\") pod \"dnsmasq-dns-79bd4cc8c9-nxsb5\" (UID: \"4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nxsb5" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.441377 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-nxsb5\" (UID: \"4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nxsb5" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.442113 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-nxsb5\" (UID: \"4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nxsb5" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.443447 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-nxsb5\" (UID: \"4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nxsb5" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.444023 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-nxsb5\" (UID: \"4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nxsb5" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.444907 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-nxsb5\" (UID: \"4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nxsb5" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.446080 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-config\") pod \"dnsmasq-dns-79bd4cc8c9-nxsb5\" (UID: \"4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nxsb5" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.497494 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.501507 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jdpk\" (UniqueName: \"kubernetes.io/projected/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-kube-api-access-7jdpk\") pod \"dnsmasq-dns-79bd4cc8c9-nxsb5\" (UID: \"4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nxsb5" Jan 23 17:13:56 crc kubenswrapper[4606]: W0123 17:13:56.524845 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod486d7661_6513_4b14_9367_9d4ddb8e6d2c.slice/crio-c7685bd67e5745ea4d63e7b5ef54573f6854814a3c9933a7f6ab674f0afb4f82 WatchSource:0}: Error finding container c7685bd67e5745ea4d63e7b5ef54573f6854814a3c9933a7f6ab674f0afb4f82: Status 404 returned error can't find the container with id c7685bd67e5745ea4d63e7b5ef54573f6854814a3c9933a7f6ab674f0afb4f82 Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.621218 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-nxsb5" Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.757768 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"486d7661-6513-4b14-9367-9d4ddb8e6d2c","Type":"ContainerStarted","Data":"c7685bd67e5745ea4d63e7b5ef54573f6854814a3c9933a7f6ab674f0afb4f82"} Jan 23 17:13:56 crc kubenswrapper[4606]: I0123 17:13:56.798091 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 17:13:56 crc kubenswrapper[4606]: W0123 17:13:56.805937 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod008d9c33_291f_4081_b30f_eb32178efcd0.slice/crio-0db901798c8aca321519177bbe39506f44fb9fbe6959f21198b070beb141267a WatchSource:0}: Error finding container 0db901798c8aca321519177bbe39506f44fb9fbe6959f21198b070beb141267a: Status 404 returned error can't find the container with id 0db901798c8aca321519177bbe39506f44fb9fbe6959f21198b070beb141267a Jan 23 17:13:57 crc kubenswrapper[4606]: I0123 17:13:57.105653 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-nxsb5"] Jan 23 17:13:57 crc kubenswrapper[4606]: W0123 17:13:57.106071 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b9bb4e0_6228_4bc3_b549_9f5d0ebd5617.slice/crio-d69bce6371cd946941b1f9fc08bf5481bd06d45e83ead7358d211e61c41083d4 WatchSource:0}: Error finding container d69bce6371cd946941b1f9fc08bf5481bd06d45e83ead7358d211e61c41083d4: Status 404 returned error can't find the container with id d69bce6371cd946941b1f9fc08bf5481bd06d45e83ead7358d211e61c41083d4 Jan 23 17:13:57 crc kubenswrapper[4606]: I0123 17:13:57.785512 4606 generic.go:334] "Generic (PLEG): container finished" podID="4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617" containerID="96c7c3a7f4fe5755f537bb229ece5906c5ea4f3132b45ad456d41e1ab66796f6" exitCode=0 Jan 23 17:13:57 crc kubenswrapper[4606]: I0123 17:13:57.785566 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-nxsb5" event={"ID":"4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617","Type":"ContainerDied","Data":"96c7c3a7f4fe5755f537bb229ece5906c5ea4f3132b45ad456d41e1ab66796f6"} Jan 23 17:13:57 crc kubenswrapper[4606]: I0123 17:13:57.785972 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-nxsb5" event={"ID":"4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617","Type":"ContainerStarted","Data":"d69bce6371cd946941b1f9fc08bf5481bd06d45e83ead7358d211e61c41083d4"} Jan 23 17:13:57 crc kubenswrapper[4606]: I0123 17:13:57.787886 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"008d9c33-291f-4081-b30f-eb32178efcd0","Type":"ContainerStarted","Data":"0db901798c8aca321519177bbe39506f44fb9fbe6959f21198b070beb141267a"} Jan 23 17:13:58 crc kubenswrapper[4606]: I0123 17:13:58.798912 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"486d7661-6513-4b14-9367-9d4ddb8e6d2c","Type":"ContainerStarted","Data":"f384d100a73fd64c345d5c2b4eb5bf92abab5dc24e0fa63e4ae28930ca597617"} Jan 23 17:13:58 crc kubenswrapper[4606]: I0123 17:13:58.802624 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-nxsb5" 
event={"ID":"4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617","Type":"ContainerStarted","Data":"43643f673cc8a221514419aba17339957729629ae244f917730d08b3293c1b7f"} Jan 23 17:13:58 crc kubenswrapper[4606]: I0123 17:13:58.803024 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-nxsb5" Jan 23 17:13:58 crc kubenswrapper[4606]: I0123 17:13:58.804427 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"008d9c33-291f-4081-b30f-eb32178efcd0","Type":"ContainerStarted","Data":"15fc3d3922b3e2fba4d67c5976dff42875a130630e10f071af1f45cfe446311a"} Jan 23 17:13:58 crc kubenswrapper[4606]: I0123 17:13:58.854192 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-nxsb5" podStartSLOduration=2.8541665739999997 podStartE2EDuration="2.854166574s" podCreationTimestamp="2026-01-23 17:13:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:13:58.849616846 +0000 UTC m=+1272.849809262" watchObservedRunningTime="2026-01-23 17:13:58.854166574 +0000 UTC m=+1272.854359010" Jan 23 17:13:59 crc kubenswrapper[4606]: I0123 17:13:59.933637 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:13:59 crc kubenswrapper[4606]: I0123 17:13:59.933970 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:13:59 crc kubenswrapper[4606]: I0123 17:13:59.934013 4606 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" Jan 23 17:13:59 crc kubenswrapper[4606]: I0123 17:13:59.934884 4606 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c0aff6847a831032f0ede7590ffa543c8a1f938be08add052c2862cc2b6c92af"} pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 17:13:59 crc kubenswrapper[4606]: I0123 17:13:59.934938 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" containerID="cri-o://c0aff6847a831032f0ede7590ffa543c8a1f938be08add052c2862cc2b6c92af" gracePeriod=600 Jan 23 17:14:00 crc kubenswrapper[4606]: I0123 17:14:00.824158 4606 generic.go:334] "Generic (PLEG): container finished" podID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerID="c0aff6847a831032f0ede7590ffa543c8a1f938be08add052c2862cc2b6c92af" exitCode=0 Jan 23 17:14:00 crc kubenswrapper[4606]: I0123 17:14:00.824315 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" 
event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerDied","Data":"c0aff6847a831032f0ede7590ffa543c8a1f938be08add052c2862cc2b6c92af"} Jan 23 17:14:00 crc kubenswrapper[4606]: I0123 17:14:00.824481 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerStarted","Data":"ebadbc612659691d0364633624880b271d3a98bbea2b5aa4b6e90c782af998b4"} Jan 23 17:14:00 crc kubenswrapper[4606]: I0123 17:14:00.824507 4606 scope.go:117] "RemoveContainer" containerID="3cef2b80453103e1f146f2cbf852884e416a5490d931dcd7771b99f87019e13d" Jan 23 17:14:06 crc kubenswrapper[4606]: I0123 17:14:06.623855 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-nxsb5" Jan 23 17:14:06 crc kubenswrapper[4606]: I0123 17:14:06.698694 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-tx5pf"] Jan 23 17:14:06 crc kubenswrapper[4606]: I0123 17:14:06.698942 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-tx5pf" podUID="9088f411-fc76-423e-861f-9a6f26c44b25" containerName="dnsmasq-dns" containerID="cri-o://8fbb7b6aa4a1ea6772ee078b3ace3df06610acb82e2f87756376e904a93cd8ce" gracePeriod=10 Jan 23 17:14:06 crc kubenswrapper[4606]: I0123 17:14:06.918202 4606 generic.go:334] "Generic (PLEG): container finished" podID="9088f411-fc76-423e-861f-9a6f26c44b25" containerID="8fbb7b6aa4a1ea6772ee078b3ace3df06610acb82e2f87756376e904a93cd8ce" exitCode=0 Jan 23 17:14:06 crc kubenswrapper[4606]: I0123 17:14:06.918252 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-tx5pf" event={"ID":"9088f411-fc76-423e-861f-9a6f26c44b25","Type":"ContainerDied","Data":"8fbb7b6aa4a1ea6772ee078b3ace3df06610acb82e2f87756376e904a93cd8ce"} Jan 23 17:14:06 crc kubenswrapper[4606]: I0123 17:14:06.951281 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d949ffffc-w6rtg"] Jan 23 17:14:06 crc kubenswrapper[4606]: I0123 17:14:06.957341 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d949ffffc-w6rtg" Jan 23 17:14:06 crc kubenswrapper[4606]: I0123 17:14:06.972921 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d949ffffc-w6rtg"] Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.055469 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/83c23281-64ab-4274-9278-bdebed7cba7d-ovsdbserver-sb\") pod \"dnsmasq-dns-6d949ffffc-w6rtg\" (UID: \"83c23281-64ab-4274-9278-bdebed7cba7d\") " pod="openstack/dnsmasq-dns-6d949ffffc-w6rtg" Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.055755 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83c23281-64ab-4274-9278-bdebed7cba7d-ovsdbserver-nb\") pod \"dnsmasq-dns-6d949ffffc-w6rtg\" (UID: \"83c23281-64ab-4274-9278-bdebed7cba7d\") " pod="openstack/dnsmasq-dns-6d949ffffc-w6rtg" Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.055797 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnpm2\" (UniqueName: \"kubernetes.io/projected/83c23281-64ab-4274-9278-bdebed7cba7d-kube-api-access-pnpm2\") pod \"dnsmasq-dns-6d949ffffc-w6rtg\" (UID: \"83c23281-64ab-4274-9278-bdebed7cba7d\") " pod="openstack/dnsmasq-dns-6d949ffffc-w6rtg" Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.055870 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/83c23281-64ab-4274-9278-bdebed7cba7d-dns-swift-storage-0\") pod \"dnsmasq-dns-6d949ffffc-w6rtg\" (UID: \"83c23281-64ab-4274-9278-bdebed7cba7d\") " pod="openstack/dnsmasq-dns-6d949ffffc-w6rtg" Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.055940 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83c23281-64ab-4274-9278-bdebed7cba7d-dns-svc\") pod \"dnsmasq-dns-6d949ffffc-w6rtg\" (UID: \"83c23281-64ab-4274-9278-bdebed7cba7d\") " pod="openstack/dnsmasq-dns-6d949ffffc-w6rtg" Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.055978 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83c23281-64ab-4274-9278-bdebed7cba7d-config\") pod \"dnsmasq-dns-6d949ffffc-w6rtg\" (UID: \"83c23281-64ab-4274-9278-bdebed7cba7d\") " pod="openstack/dnsmasq-dns-6d949ffffc-w6rtg" Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.056049 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/83c23281-64ab-4274-9278-bdebed7cba7d-openstack-edpm-ipam\") pod \"dnsmasq-dns-6d949ffffc-w6rtg\" (UID: \"83c23281-64ab-4274-9278-bdebed7cba7d\") " pod="openstack/dnsmasq-dns-6d949ffffc-w6rtg" Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.157768 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/83c23281-64ab-4274-9278-bdebed7cba7d-ovsdbserver-sb\") pod \"dnsmasq-dns-6d949ffffc-w6rtg\" (UID: \"83c23281-64ab-4274-9278-bdebed7cba7d\") " pod="openstack/dnsmasq-dns-6d949ffffc-w6rtg" Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.158109 4606 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83c23281-64ab-4274-9278-bdebed7cba7d-ovsdbserver-nb\") pod \"dnsmasq-dns-6d949ffffc-w6rtg\" (UID: \"83c23281-64ab-4274-9278-bdebed7cba7d\") " pod="openstack/dnsmasq-dns-6d949ffffc-w6rtg"
Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.158145 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnpm2\" (UniqueName: \"kubernetes.io/projected/83c23281-64ab-4274-9278-bdebed7cba7d-kube-api-access-pnpm2\") pod \"dnsmasq-dns-6d949ffffc-w6rtg\" (UID: \"83c23281-64ab-4274-9278-bdebed7cba7d\") " pod="openstack/dnsmasq-dns-6d949ffffc-w6rtg"
Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.158189 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/83c23281-64ab-4274-9278-bdebed7cba7d-dns-swift-storage-0\") pod \"dnsmasq-dns-6d949ffffc-w6rtg\" (UID: \"83c23281-64ab-4274-9278-bdebed7cba7d\") " pod="openstack/dnsmasq-dns-6d949ffffc-w6rtg"
Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.158262 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83c23281-64ab-4274-9278-bdebed7cba7d-dns-svc\") pod \"dnsmasq-dns-6d949ffffc-w6rtg\" (UID: \"83c23281-64ab-4274-9278-bdebed7cba7d\") " pod="openstack/dnsmasq-dns-6d949ffffc-w6rtg"
Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.158306 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83c23281-64ab-4274-9278-bdebed7cba7d-config\") pod \"dnsmasq-dns-6d949ffffc-w6rtg\" (UID: \"83c23281-64ab-4274-9278-bdebed7cba7d\") " pod="openstack/dnsmasq-dns-6d949ffffc-w6rtg"
Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.158391 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/83c23281-64ab-4274-9278-bdebed7cba7d-openstack-edpm-ipam\") pod \"dnsmasq-dns-6d949ffffc-w6rtg\" (UID: \"83c23281-64ab-4274-9278-bdebed7cba7d\") " pod="openstack/dnsmasq-dns-6d949ffffc-w6rtg"
Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.159407 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/83c23281-64ab-4274-9278-bdebed7cba7d-openstack-edpm-ipam\") pod \"dnsmasq-dns-6d949ffffc-w6rtg\" (UID: \"83c23281-64ab-4274-9278-bdebed7cba7d\") " pod="openstack/dnsmasq-dns-6d949ffffc-w6rtg"
Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.160123 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/83c23281-64ab-4274-9278-bdebed7cba7d-ovsdbserver-sb\") pod \"dnsmasq-dns-6d949ffffc-w6rtg\" (UID: \"83c23281-64ab-4274-9278-bdebed7cba7d\") " pod="openstack/dnsmasq-dns-6d949ffffc-w6rtg"
Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.160616 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83c23281-64ab-4274-9278-bdebed7cba7d-dns-svc\") pod \"dnsmasq-dns-6d949ffffc-w6rtg\" (UID: \"83c23281-64ab-4274-9278-bdebed7cba7d\") " pod="openstack/dnsmasq-dns-6d949ffffc-w6rtg"
Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.160665 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/83c23281-64ab-4274-9278-bdebed7cba7d-dns-swift-storage-0\") pod \"dnsmasq-dns-6d949ffffc-w6rtg\" (UID: \"83c23281-64ab-4274-9278-bdebed7cba7d\") " pod="openstack/dnsmasq-dns-6d949ffffc-w6rtg"
Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.161350 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83c23281-64ab-4274-9278-bdebed7cba7d-config\") pod \"dnsmasq-dns-6d949ffffc-w6rtg\" (UID: \"83c23281-64ab-4274-9278-bdebed7cba7d\") " pod="openstack/dnsmasq-dns-6d949ffffc-w6rtg"
Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.165096 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83c23281-64ab-4274-9278-bdebed7cba7d-ovsdbserver-nb\") pod \"dnsmasq-dns-6d949ffffc-w6rtg\" (UID: \"83c23281-64ab-4274-9278-bdebed7cba7d\") " pod="openstack/dnsmasq-dns-6d949ffffc-w6rtg"
Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.187457 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnpm2\" (UniqueName: \"kubernetes.io/projected/83c23281-64ab-4274-9278-bdebed7cba7d-kube-api-access-pnpm2\") pod \"dnsmasq-dns-6d949ffffc-w6rtg\" (UID: \"83c23281-64ab-4274-9278-bdebed7cba7d\") " pod="openstack/dnsmasq-dns-6d949ffffc-w6rtg"
Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.277004 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d949ffffc-w6rtg"
Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.445274 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-tx5pf"
Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.565295 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9088f411-fc76-423e-861f-9a6f26c44b25-dns-svc\") pod \"9088f411-fc76-423e-861f-9a6f26c44b25\" (UID: \"9088f411-fc76-423e-861f-9a6f26c44b25\") "
Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.565382 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9088f411-fc76-423e-861f-9a6f26c44b25-ovsdbserver-nb\") pod \"9088f411-fc76-423e-861f-9a6f26c44b25\" (UID: \"9088f411-fc76-423e-861f-9a6f26c44b25\") "
Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.565407 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h294q\" (UniqueName: \"kubernetes.io/projected/9088f411-fc76-423e-861f-9a6f26c44b25-kube-api-access-h294q\") pod \"9088f411-fc76-423e-861f-9a6f26c44b25\" (UID: \"9088f411-fc76-423e-861f-9a6f26c44b25\") "
Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.565517 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9088f411-fc76-423e-861f-9a6f26c44b25-config\") pod \"9088f411-fc76-423e-861f-9a6f26c44b25\" (UID: \"9088f411-fc76-423e-861f-9a6f26c44b25\") "
Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.566673 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9088f411-fc76-423e-861f-9a6f26c44b25-dns-swift-storage-0\") pod \"9088f411-fc76-423e-861f-9a6f26c44b25\" (UID: \"9088f411-fc76-423e-861f-9a6f26c44b25\") "
Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.566766 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9088f411-fc76-423e-861f-9a6f26c44b25-ovsdbserver-sb\") pod \"9088f411-fc76-423e-861f-9a6f26c44b25\" (UID: \"9088f411-fc76-423e-861f-9a6f26c44b25\") "
Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.589380 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9088f411-fc76-423e-861f-9a6f26c44b25-kube-api-access-h294q" (OuterVolumeSpecName: "kube-api-access-h294q") pod "9088f411-fc76-423e-861f-9a6f26c44b25" (UID: "9088f411-fc76-423e-861f-9a6f26c44b25"). InnerVolumeSpecName "kube-api-access-h294q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.647109 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9088f411-fc76-423e-861f-9a6f26c44b25-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9088f411-fc76-423e-861f-9a6f26c44b25" (UID: "9088f411-fc76-423e-861f-9a6f26c44b25"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.649886 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9088f411-fc76-423e-861f-9a6f26c44b25-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9088f411-fc76-423e-861f-9a6f26c44b25" (UID: "9088f411-fc76-423e-861f-9a6f26c44b25"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.654628 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9088f411-fc76-423e-861f-9a6f26c44b25-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9088f411-fc76-423e-861f-9a6f26c44b25" (UID: "9088f411-fc76-423e-861f-9a6f26c44b25"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.665092 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9088f411-fc76-423e-861f-9a6f26c44b25-config" (OuterVolumeSpecName: "config") pod "9088f411-fc76-423e-861f-9a6f26c44b25" (UID: "9088f411-fc76-423e-861f-9a6f26c44b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.669305 4606 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9088f411-fc76-423e-861f-9a6f26c44b25-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.669325 4606 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9088f411-fc76-423e-861f-9a6f26c44b25-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.669335 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h294q\" (UniqueName: \"kubernetes.io/projected/9088f411-fc76-423e-861f-9a6f26c44b25-kube-api-access-h294q\") on node \"crc\" DevicePath \"\""
Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.669344 4606 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9088f411-fc76-423e-861f-9a6f26c44b25-config\") on node \"crc\" DevicePath \"\""
Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.669353 4606 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9088f411-fc76-423e-861f-9a6f26c44b25-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.679751 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9088f411-fc76-423e-861f-9a6f26c44b25-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9088f411-fc76-423e-861f-9a6f26c44b25" (UID: "9088f411-fc76-423e-861f-9a6f26c44b25"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.771018 4606 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9088f411-fc76-423e-861f-9a6f26c44b25-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.797793 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d949ffffc-w6rtg"]
Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.955828 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d949ffffc-w6rtg" event={"ID":"83c23281-64ab-4274-9278-bdebed7cba7d","Type":"ContainerStarted","Data":"16c6b95c0a6d664f1eca8dce46eb3de8129da21906b2053240c1569101fe7591"}
Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.963218 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-tx5pf" event={"ID":"9088f411-fc76-423e-861f-9a6f26c44b25","Type":"ContainerDied","Data":"2751c39eae4b67e93bcc4b96cbc3047838ed56a022da221b0c8a57f8c10248b6"}
Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.963272 4606 scope.go:117] "RemoveContainer" containerID="8fbb7b6aa4a1ea6772ee078b3ace3df06610acb82e2f87756376e904a93cd8ce"
Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.963448 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-tx5pf"
Jan 23 17:14:08 crc kubenswrapper[4606]: I0123 17:14:08.002374 4606 scope.go:117] "RemoveContainer" containerID="db4a98296702ec980b202b54b623154dd0230d7883838250d8118498f9eaa2b4"
Jan 23 17:14:08 crc kubenswrapper[4606]: I0123 17:14:08.031862 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-tx5pf"]
Jan 23 17:14:08 crc kubenswrapper[4606]: I0123 17:14:08.048309 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-tx5pf"]
Jan 23 17:14:08 crc kubenswrapper[4606]: I0123 17:14:08.386627 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9088f411-fc76-423e-861f-9a6f26c44b25" path="/var/lib/kubelet/pods/9088f411-fc76-423e-861f-9a6f26c44b25/volumes"
Jan 23 17:14:08 crc kubenswrapper[4606]: I0123 17:14:08.979427 4606 generic.go:334] "Generic (PLEG): container finished" podID="83c23281-64ab-4274-9278-bdebed7cba7d" containerID="daec06157f814dfa0fd8e1200bfb7863042e4225dca30f1f1d2504d821ac6c35" exitCode=0
Jan 23 17:14:08 crc kubenswrapper[4606]: I0123 17:14:08.979497 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d949ffffc-w6rtg" event={"ID":"83c23281-64ab-4274-9278-bdebed7cba7d","Type":"ContainerDied","Data":"daec06157f814dfa0fd8e1200bfb7863042e4225dca30f1f1d2504d821ac6c35"}
Jan 23 17:14:10 crc kubenswrapper[4606]: I0123 17:14:10.000232 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d949ffffc-w6rtg" event={"ID":"83c23281-64ab-4274-9278-bdebed7cba7d","Type":"ContainerStarted","Data":"e9b7be931a6cb9788668a6c7fe903f3ab553190eea5ced1b367c6dd1b9d0a355"}
Jan 23 17:14:10 crc kubenswrapper[4606]: I0123 17:14:10.000703 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d949ffffc-w6rtg"
Jan 23 17:14:10 crc kubenswrapper[4606]: I0123 17:14:10.031436 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d949ffffc-w6rtg" podStartSLOduration=4.031416522 podStartE2EDuration="4.031416522s" podCreationTimestamp="2026-01-23 17:14:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:14:10.018298991 +0000 UTC m=+1284.018491387" watchObservedRunningTime="2026-01-23 17:14:10.031416522 +0000 UTC m=+1284.031608928"
Jan 23 17:14:17 crc kubenswrapper[4606]: I0123 17:14:17.279375 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d949ffffc-w6rtg"
Jan 23 17:14:17 crc kubenswrapper[4606]: I0123 17:14:17.353428 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-nxsb5"]
Jan 23 17:14:17 crc kubenswrapper[4606]: I0123 17:14:17.358565 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-nxsb5" podUID="4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617" containerName="dnsmasq-dns" containerID="cri-o://43643f673cc8a221514419aba17339957729629ae244f917730d08b3293c1b7f" gracePeriod=10
Jan 23 17:14:17 crc kubenswrapper[4606]: I0123 17:14:17.846320 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-nxsb5"
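Every entry above carries two nested headers: a journald prefix ("Jan 23 17:14:07 crc kubenswrapper[4606]: ") and a klog header ("I0123 17:14:07.158145 4606 reconciler_common.go:218]") holding severity (I/W/E), mmdd date, wall-clock time, PID, and source file:line. A minimal Go sketch that splits the two apart; the field layout is inferred from the lines above, not taken from an official parser:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // klogLine captures: journald timestamp, host, bracketed PID, klog
    // severity, mmdd, time, PID, source file, line number, and message,
    // in the shape the kubenswrapper entries above use.
    var klogLine = regexp.MustCompile(
    	`^(\w{3} \d+ [\d:]+) (\S+) kubenswrapper\[(\d+)\]: ([IWE])(\d{4}) ([\d:.]+) +(\d+) (\S+?):(\d+)\] (.*)$`)

    func main() {
    	line := `Jan 23 17:14:07 crc kubenswrapper[4606]: I0123 17:14:07.158145 4606 reconciler_common.go:218] "operationExecutor.MountVolume started ..."`
    	m := klogLine.FindStringSubmatch(line)
    	if m == nil {
    		fmt.Println("no match")
    		return
    	}
    	fmt.Printf("severity=%s time=%s source=%s:%s msg=%s\n", m[4], m[6], m[8], m[9], m[10])
    }

Filtering on the severity capture (E and W only) is a quick way to skim a capture like this one for the handful of error and warning entries buried in the informational noise.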
Jan 23 17:14:17 crc kubenswrapper[4606]: I0123 17:14:17.898819 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-dns-svc\") pod \"4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617\" (UID: \"4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617\") "
Jan 23 17:14:17 crc kubenswrapper[4606]: I0123 17:14:17.899242 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-config\") pod \"4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617\" (UID: \"4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617\") "
Jan 23 17:14:17 crc kubenswrapper[4606]: I0123 17:14:17.899509 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jdpk\" (UniqueName: \"kubernetes.io/projected/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-kube-api-access-7jdpk\") pod \"4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617\" (UID: \"4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617\") "
Jan 23 17:14:17 crc kubenswrapper[4606]: I0123 17:14:17.899763 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-ovsdbserver-nb\") pod \"4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617\" (UID: \"4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617\") "
Jan 23 17:14:17 crc kubenswrapper[4606]: I0123 17:14:17.900413 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-dns-swift-storage-0\") pod \"4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617\" (UID: \"4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617\") "
Jan 23 17:14:17 crc kubenswrapper[4606]: I0123 17:14:17.900714 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-ovsdbserver-sb\") pod \"4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617\" (UID: \"4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617\") "
Jan 23 17:14:17 crc kubenswrapper[4606]: I0123 17:14:17.900952 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-openstack-edpm-ipam\") pod \"4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617\" (UID: \"4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617\") "
Jan 23 17:14:17 crc kubenswrapper[4606]: I0123 17:14:17.930871 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-kube-api-access-7jdpk" (OuterVolumeSpecName: "kube-api-access-7jdpk") pod "4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617" (UID: "4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617"). InnerVolumeSpecName "kube-api-access-7jdpk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 17:14:17 crc kubenswrapper[4606]: I0123 17:14:17.972656 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617" (UID: "4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 17:14:17 crc kubenswrapper[4606]: I0123 17:14:17.987732 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617" (UID: "4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 17:14:18 crc kubenswrapper[4606]: I0123 17:14:18.004001 4606 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 23 17:14:18 crc kubenswrapper[4606]: I0123 17:14:18.004054 4606 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 23 17:14:18 crc kubenswrapper[4606]: I0123 17:14:18.004068 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jdpk\" (UniqueName: \"kubernetes.io/projected/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-kube-api-access-7jdpk\") on node \"crc\" DevicePath \"\""
Jan 23 17:14:18 crc kubenswrapper[4606]: I0123 17:14:18.007248 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617" (UID: "4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 17:14:18 crc kubenswrapper[4606]: I0123 17:14:18.017618 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-config" (OuterVolumeSpecName: "config") pod "4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617" (UID: "4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 17:14:18 crc kubenswrapper[4606]: I0123 17:14:18.029304 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617" (UID: "4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 17:14:18 crc kubenswrapper[4606]: I0123 17:14:18.037452 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617" (UID: "4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 17:14:18 crc kubenswrapper[4606]: I0123 17:14:18.074363 4606 generic.go:334] "Generic (PLEG): container finished" podID="4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617" containerID="43643f673cc8a221514419aba17339957729629ae244f917730d08b3293c1b7f" exitCode=0
Jan 23 17:14:18 crc kubenswrapper[4606]: I0123 17:14:18.074413 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-nxsb5" event={"ID":"4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617","Type":"ContainerDied","Data":"43643f673cc8a221514419aba17339957729629ae244f917730d08b3293c1b7f"}
Jan 23 17:14:18 crc kubenswrapper[4606]: I0123 17:14:18.074445 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-nxsb5" event={"ID":"4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617","Type":"ContainerDied","Data":"d69bce6371cd946941b1f9fc08bf5481bd06d45e83ead7358d211e61c41083d4"}
Jan 23 17:14:18 crc kubenswrapper[4606]: I0123 17:14:18.074465 4606 scope.go:117] "RemoveContainer" containerID="43643f673cc8a221514419aba17339957729629ae244f917730d08b3293c1b7f"
Jan 23 17:14:18 crc kubenswrapper[4606]: I0123 17:14:18.074636 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-nxsb5"
Jan 23 17:14:18 crc kubenswrapper[4606]: I0123 17:14:18.099571 4606 scope.go:117] "RemoveContainer" containerID="96c7c3a7f4fe5755f537bb229ece5906c5ea4f3132b45ad456d41e1ab66796f6"
Jan 23 17:14:18 crc kubenswrapper[4606]: I0123 17:14:18.115854 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-nxsb5"]
Jan 23 17:14:18 crc kubenswrapper[4606]: I0123 17:14:18.123360 4606 scope.go:117] "RemoveContainer" containerID="43643f673cc8a221514419aba17339957729629ae244f917730d08b3293c1b7f"
Jan 23 17:14:18 crc kubenswrapper[4606]: E0123 17:14:18.127065 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43643f673cc8a221514419aba17339957729629ae244f917730d08b3293c1b7f\": container with ID starting with 43643f673cc8a221514419aba17339957729629ae244f917730d08b3293c1b7f not found: ID does not exist" containerID="43643f673cc8a221514419aba17339957729629ae244f917730d08b3293c1b7f"
Jan 23 17:14:18 crc kubenswrapper[4606]: I0123 17:14:18.127110 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43643f673cc8a221514419aba17339957729629ae244f917730d08b3293c1b7f"} err="failed to get container status \"43643f673cc8a221514419aba17339957729629ae244f917730d08b3293c1b7f\": rpc error: code = NotFound desc = could not find container \"43643f673cc8a221514419aba17339957729629ae244f917730d08b3293c1b7f\": container with ID starting with 43643f673cc8a221514419aba17339957729629ae244f917730d08b3293c1b7f not found: ID does not exist"
Jan 23 17:14:18 crc kubenswrapper[4606]: I0123 17:14:18.127133 4606 scope.go:117] "RemoveContainer" containerID="96c7c3a7f4fe5755f537bb229ece5906c5ea4f3132b45ad456d41e1ab66796f6"
Jan 23 17:14:18 crc kubenswrapper[4606]: E0123 17:14:18.132185 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96c7c3a7f4fe5755f537bb229ece5906c5ea4f3132b45ad456d41e1ab66796f6\": container with ID starting with 96c7c3a7f4fe5755f537bb229ece5906c5ea4f3132b45ad456d41e1ab66796f6 not found: ID does not exist" containerID="96c7c3a7f4fe5755f537bb229ece5906c5ea4f3132b45ad456d41e1ab66796f6"
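The two E0123 entries just above are benign: the containers were already gone when the kubelet re-issued ContainerStatus during removal, and the runtime answered with gRPC NotFound, which the deletion path tolerates so the cleanup stays idempotent. A sketch of that pattern; removeContainer here is a hypothetical stand-in for a CRI call, not kubelet's actual code:

    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // removeContainer is a hypothetical runtime call used only to
    // illustrate the pattern; it always reports the container as gone,
    // like the 43643f67... and 96c7c3a7... lookups above.
    func removeContainer(id string) error {
    	return status.Error(codes.NotFound, "could not find container "+id)
    }

    // cleanupContainer treats NotFound as success: if the container is
    // already absent, the desired end state is already true, so a retry
    // of the same cleanup never turns into a hard failure.
    func cleanupContainer(id string) error {
    	if err := removeContainer(id); err != nil && status.Code(err) != codes.NotFound {
    		return fmt.Errorf("removing container %s: %w", id, err)
    	}
    	return nil
    }

    func main() {
    	if err := cleanupContainer("43643f673cc8"); err != nil {
    		fmt.Println("cleanup failed:", err)
    	} else {
    		fmt.Println("container gone (removed now or already absent)")
    	}
    }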
Jan 23 17:14:18 crc kubenswrapper[4606]: I0123 17:14:18.132218 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96c7c3a7f4fe5755f537bb229ece5906c5ea4f3132b45ad456d41e1ab66796f6"} err="failed to get container status \"96c7c3a7f4fe5755f537bb229ece5906c5ea4f3132b45ad456d41e1ab66796f6\": rpc error: code = NotFound desc = could not find container \"96c7c3a7f4fe5755f537bb229ece5906c5ea4f3132b45ad456d41e1ab66796f6\": container with ID starting with 96c7c3a7f4fe5755f537bb229ece5906c5ea4f3132b45ad456d41e1ab66796f6 not found: ID does not exist"
Jan 23 17:14:18 crc kubenswrapper[4606]: I0123 17:14:18.134394 4606 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 23 17:14:18 crc kubenswrapper[4606]: I0123 17:14:18.134430 4606 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-config\") on node \"crc\" DevicePath \"\""
Jan 23 17:14:18 crc kubenswrapper[4606]: I0123 17:14:18.134440 4606 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 23 17:14:18 crc kubenswrapper[4606]: I0123 17:14:18.134452 4606 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 23 17:14:18 crc kubenswrapper[4606]: I0123 17:14:18.136662 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-nxsb5"]
Jan 23 17:14:18 crc kubenswrapper[4606]: I0123 17:14:18.387153 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617" path="/var/lib/kubelet/pods/4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617/volumes"
Jan 23 17:14:30 crc kubenswrapper[4606]: I0123 17:14:30.181396 4606 generic.go:334] "Generic (PLEG): container finished" podID="486d7661-6513-4b14-9367-9d4ddb8e6d2c" containerID="f384d100a73fd64c345d5c2b4eb5bf92abab5dc24e0fa63e4ae28930ca597617" exitCode=0
Jan 23 17:14:30 crc kubenswrapper[4606]: I0123 17:14:30.181463 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"486d7661-6513-4b14-9367-9d4ddb8e6d2c","Type":"ContainerDied","Data":"f384d100a73fd64c345d5c2b4eb5bf92abab5dc24e0fa63e4ae28930ca597617"}
Jan 23 17:14:31 crc kubenswrapper[4606]: I0123 17:14:31.193505 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"486d7661-6513-4b14-9367-9d4ddb8e6d2c","Type":"ContainerStarted","Data":"782036d3b42f7328ebed25aa50c590a5bf925608928f0c1d31137d189ce6e5a1"}
Jan 23 17:14:31 crc kubenswrapper[4606]: I0123 17:14:31.194082 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 23 17:14:31 crc kubenswrapper[4606]: I0123 17:14:31.194950 4606 generic.go:334] "Generic (PLEG): container finished" podID="008d9c33-291f-4081-b30f-eb32178efcd0" containerID="15fc3d3922b3e2fba4d67c5976dff42875a130630e10f071af1f45cfe446311a" exitCode=0
Jan 23 17:14:31 crc kubenswrapper[4606]: I0123 17:14:31.194997 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"008d9c33-291f-4081-b30f-eb32178efcd0","Type":"ContainerDied","Data":"15fc3d3922b3e2fba4d67c5976dff42875a130630e10f071af1f45cfe446311a"}
Jan 23 17:14:31 crc kubenswrapper[4606]: I0123 17:14:31.231609 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.231576898 podStartE2EDuration="36.231576898s" podCreationTimestamp="2026-01-23 17:13:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:14:31.220270789 +0000 UTC m=+1305.220463205" watchObservedRunningTime="2026-01-23 17:14:31.231576898 +0000 UTC m=+1305.231769304"
Jan 23 17:14:32 crc kubenswrapper[4606]: I0123 17:14:32.204790 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"008d9c33-291f-4081-b30f-eb32178efcd0","Type":"ContainerStarted","Data":"707a24c993aad4e2128507c6f76b609a9bc22c1371a0ec9129d31b871e01316d"}
Jan 23 17:14:32 crc kubenswrapper[4606]: I0123 17:14:32.205731 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Jan 23 17:14:32 crc kubenswrapper[4606]: I0123 17:14:32.236289 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.236265434 podStartE2EDuration="37.236265434s" podCreationTimestamp="2026-01-23 17:13:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:14:32.227967117 +0000 UTC m=+1306.228159543" watchObservedRunningTime="2026-01-23 17:14:32.236265434 +0000 UTC m=+1306.236457840"
Jan 23 17:14:35 crc kubenswrapper[4606]: I0123 17:14:35.897651 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6vn9f"]
Jan 23 17:14:35 crc kubenswrapper[4606]: E0123 17:14:35.898478 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617" containerName="init"
Jan 23 17:14:35 crc kubenswrapper[4606]: I0123 17:14:35.898496 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617" containerName="init"
Jan 23 17:14:35 crc kubenswrapper[4606]: E0123 17:14:35.898510 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9088f411-fc76-423e-861f-9a6f26c44b25" containerName="dnsmasq-dns"
Jan 23 17:14:35 crc kubenswrapper[4606]: I0123 17:14:35.898518 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="9088f411-fc76-423e-861f-9a6f26c44b25" containerName="dnsmasq-dns"
Jan 23 17:14:35 crc kubenswrapper[4606]: E0123 17:14:35.898537 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617" containerName="dnsmasq-dns"
Jan 23 17:14:35 crc kubenswrapper[4606]: I0123 17:14:35.898545 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617" containerName="dnsmasq-dns"
Jan 23 17:14:35 crc kubenswrapper[4606]: E0123 17:14:35.898579 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9088f411-fc76-423e-861f-9a6f26c44b25" containerName="init"
Jan 23 17:14:35 crc kubenswrapper[4606]: I0123 17:14:35.898606 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="9088f411-fc76-423e-861f-9a6f26c44b25" containerName="init"
Jan 23 17:14:35 crc kubenswrapper[4606]: I0123 17:14:35.898834 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="9088f411-fc76-423e-861f-9a6f26c44b25" containerName="dnsmasq-dns"
Jan 23 17:14:35 crc kubenswrapper[4606]: I0123 17:14:35.898878 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b9bb4e0-6228-4bc3-b549-9f5d0ebd5617" containerName="dnsmasq-dns"
Jan 23 17:14:35 crc kubenswrapper[4606]: I0123 17:14:35.899641 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6vn9f"
Jan 23 17:14:35 crc kubenswrapper[4606]: I0123 17:14:35.904975 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 23 17:14:35 crc kubenswrapper[4606]: I0123 17:14:35.905513 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 23 17:14:35 crc kubenswrapper[4606]: I0123 17:14:35.906221 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8tp68"
Jan 23 17:14:35 crc kubenswrapper[4606]: I0123 17:14:35.913091 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6vn9f"]
Jan 23 17:14:35 crc kubenswrapper[4606]: I0123 17:14:35.918980 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 23 17:14:35 crc kubenswrapper[4606]: I0123 17:14:35.974053 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldz42\" (UniqueName: \"kubernetes.io/projected/d8804eac-7ae1-4c53-898e-858547a88346-kube-api-access-ldz42\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6vn9f\" (UID: \"d8804eac-7ae1-4c53-898e-858547a88346\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6vn9f"
Jan 23 17:14:35 crc kubenswrapper[4606]: I0123 17:14:35.974093 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d8804eac-7ae1-4c53-898e-858547a88346-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6vn9f\" (UID: \"d8804eac-7ae1-4c53-898e-858547a88346\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6vn9f"
Jan 23 17:14:35 crc kubenswrapper[4606]: I0123 17:14:35.974122 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8804eac-7ae1-4c53-898e-858547a88346-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6vn9f\" (UID: \"d8804eac-7ae1-4c53-898e-858547a88346\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6vn9f"
Jan 23 17:14:35 crc kubenswrapper[4606]: I0123 17:14:35.974173 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8804eac-7ae1-4c53-898e-858547a88346-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6vn9f\" (UID: \"d8804eac-7ae1-4c53-898e-858547a88346\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6vn9f"
Jan 23 17:14:36 crc kubenswrapper[4606]: I0123 17:14:36.075268 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8804eac-7ae1-4c53-898e-858547a88346-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6vn9f\" (UID: \"d8804eac-7ae1-4c53-898e-858547a88346\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6vn9f"
Jan 23 17:14:36 crc kubenswrapper[4606]: I0123 17:14:36.075699 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldz42\" (UniqueName: \"kubernetes.io/projected/d8804eac-7ae1-4c53-898e-858547a88346-kube-api-access-ldz42\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6vn9f\" (UID: \"d8804eac-7ae1-4c53-898e-858547a88346\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6vn9f"
Jan 23 17:14:36 crc kubenswrapper[4606]: I0123 17:14:36.075719 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d8804eac-7ae1-4c53-898e-858547a88346-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6vn9f\" (UID: \"d8804eac-7ae1-4c53-898e-858547a88346\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6vn9f"
Jan 23 17:14:36 crc kubenswrapper[4606]: I0123 17:14:36.075748 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8804eac-7ae1-4c53-898e-858547a88346-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6vn9f\" (UID: \"d8804eac-7ae1-4c53-898e-858547a88346\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6vn9f"
Jan 23 17:14:36 crc kubenswrapper[4606]: I0123 17:14:36.084719 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8804eac-7ae1-4c53-898e-858547a88346-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6vn9f\" (UID: \"d8804eac-7ae1-4c53-898e-858547a88346\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6vn9f"
Jan 23 17:14:36 crc kubenswrapper[4606]: I0123 17:14:36.086440 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8804eac-7ae1-4c53-898e-858547a88346-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6vn9f\" (UID: \"d8804eac-7ae1-4c53-898e-858547a88346\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6vn9f"
Jan 23 17:14:36 crc kubenswrapper[4606]: I0123 17:14:36.086961 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d8804eac-7ae1-4c53-898e-858547a88346-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6vn9f\" (UID: \"d8804eac-7ae1-4c53-898e-858547a88346\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6vn9f"
Jan 23 17:14:36 crc kubenswrapper[4606]: I0123 17:14:36.098212 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldz42\" (UniqueName: \"kubernetes.io/projected/d8804eac-7ae1-4c53-898e-858547a88346-kube-api-access-ldz42\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6vn9f\" (UID: \"d8804eac-7ae1-4c53-898e-858547a88346\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6vn9f"
Jan 23 17:14:36 crc kubenswrapper[4606]: I0123 17:14:36.222407 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6vn9f"
Jan 23 17:14:36 crc kubenswrapper[4606]: I0123 17:14:36.975640 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6vn9f"]
Jan 23 17:14:37 crc kubenswrapper[4606]: I0123 17:14:37.261235 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6vn9f" event={"ID":"d8804eac-7ae1-4c53-898e-858547a88346","Type":"ContainerStarted","Data":"49800e74fe330eab3f0cb359dd293dfff9ecc5a60c7829a0c343b9c77ac430af"}
Jan 23 17:14:45 crc kubenswrapper[4606]: I0123 17:14:45.982856 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 23 17:14:46 crc kubenswrapper[4606]: I0123 17:14:46.169768 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 23 17:14:49 crc kubenswrapper[4606]: I0123 17:14:49.376120 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 23 17:14:50 crc kubenswrapper[4606]: I0123 17:14:50.435776 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6vn9f" event={"ID":"d8804eac-7ae1-4c53-898e-858547a88346","Type":"ContainerStarted","Data":"df25bbc0898eeff473059bb7db73e87de0f4fb0b7c08407ff39785ef1d834750"}
Jan 23 17:14:50 crc kubenswrapper[4606]: I0123 17:14:50.458409 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6vn9f" podStartSLOduration=3.05101186 podStartE2EDuration="15.458388419s" podCreationTimestamp="2026-01-23 17:14:35 +0000 UTC" firstStartedPulling="2026-01-23 17:14:36.965194389 +0000 UTC m=+1310.965386795" lastFinishedPulling="2026-01-23 17:14:49.372570948 +0000 UTC m=+1323.372763354" observedRunningTime="2026-01-23 17:14:50.451722602 +0000 UTC m=+1324.451915008" watchObservedRunningTime="2026-01-23 17:14:50.458388419 +0000 UTC m=+1324.458580825"
Jan 23 17:14:59 crc kubenswrapper[4606]: I0123 17:14:59.905182 4606 scope.go:117] "RemoveContainer" containerID="736cdaa555a5cc9faa3cb9b660727e9466e9f91be9a63c1da22acb0883ab9610"
Jan 23 17:14:59 crc kubenswrapper[4606]: I0123 17:14:59.949614 4606 scope.go:117] "RemoveContainer" containerID="c8329878e1a94e0509704b2984f0fcf495c9f76aee2d2dae8a49da98fe654c1a"
Jan 23 17:15:00 crc kubenswrapper[4606]: I0123 17:15:00.143507 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486475-859tv"]
Jan 23 17:15:00 crc kubenswrapper[4606]: I0123 17:15:00.144990 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486475-859tv"
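The startup-latency entry for repo-setup-edpm-deployment above encodes a relation you can check directly from its own fields: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). For the earlier dnsmasq and rabbitmq pods the two durations coincide because both pull timestamps are the zero time. A small Go check against the logged timestamps (stdlib only; the layout string is chosen to match the "+0000 UTC" form printed above):

    package main

    import (
    	"fmt"
    	"time"
    )

    func mustParse(s string) time.Time {
    	// Matches the "2026-01-23 17:14:35 +0000 UTC" form in the log;
    	// time.Parse accepts the fractional seconds when present.
    	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	created := mustParse("2026-01-23 17:14:35 +0000 UTC")
    	firstPull := mustParse("2026-01-23 17:14:36.965194389 +0000 UTC")
    	lastPull := mustParse("2026-01-23 17:14:49.372570948 +0000 UTC")
    	running := mustParse("2026-01-23 17:14:50.458388419 +0000 UTC")

    	e2e := running.Sub(created)          // podStartE2EDuration
    	slo := e2e - lastPull.Sub(firstPull) // minus the image-pull window

    	fmt.Println(e2e) // 15.458388419s, matching podStartE2EDuration
    	fmt.Println(slo) // 3.05101186s, matching podStartSLOduration
    }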
Jan 23 17:15:00 crc kubenswrapper[4606]: I0123 17:15:00.147265 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 23 17:15:00 crc kubenswrapper[4606]: I0123 17:15:00.148097 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 23 17:15:00 crc kubenswrapper[4606]: I0123 17:15:00.155090 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486475-859tv"]
Jan 23 17:15:00 crc kubenswrapper[4606]: I0123 17:15:00.239011 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2fd7093e-6417-481c-886a-aeaa72d6e538-secret-volume\") pod \"collect-profiles-29486475-859tv\" (UID: \"2fd7093e-6417-481c-886a-aeaa72d6e538\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486475-859tv"
Jan 23 17:15:00 crc kubenswrapper[4606]: I0123 17:15:00.239108 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2fd7093e-6417-481c-886a-aeaa72d6e538-config-volume\") pod \"collect-profiles-29486475-859tv\" (UID: \"2fd7093e-6417-481c-886a-aeaa72d6e538\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486475-859tv"
Jan 23 17:15:00 crc kubenswrapper[4606]: I0123 17:15:00.239171 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q57b6\" (UniqueName: \"kubernetes.io/projected/2fd7093e-6417-481c-886a-aeaa72d6e538-kube-api-access-q57b6\") pod \"collect-profiles-29486475-859tv\" (UID: \"2fd7093e-6417-481c-886a-aeaa72d6e538\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486475-859tv"
Jan 23 17:15:00 crc kubenswrapper[4606]: I0123 17:15:00.340726 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2fd7093e-6417-481c-886a-aeaa72d6e538-secret-volume\") pod \"collect-profiles-29486475-859tv\" (UID: \"2fd7093e-6417-481c-886a-aeaa72d6e538\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486475-859tv"
Jan 23 17:15:00 crc kubenswrapper[4606]: I0123 17:15:00.340825 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2fd7093e-6417-481c-886a-aeaa72d6e538-config-volume\") pod \"collect-profiles-29486475-859tv\" (UID: \"2fd7093e-6417-481c-886a-aeaa72d6e538\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486475-859tv"
Jan 23 17:15:00 crc kubenswrapper[4606]: I0123 17:15:00.340851 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q57b6\" (UniqueName: \"kubernetes.io/projected/2fd7093e-6417-481c-886a-aeaa72d6e538-kube-api-access-q57b6\") pod \"collect-profiles-29486475-859tv\" (UID: \"2fd7093e-6417-481c-886a-aeaa72d6e538\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486475-859tv"
Jan 23 17:15:00 crc kubenswrapper[4606]: I0123 17:15:00.341863 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2fd7093e-6417-481c-886a-aeaa72d6e538-config-volume\") pod \"collect-profiles-29486475-859tv\" (UID: \"2fd7093e-6417-481c-886a-aeaa72d6e538\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486475-859tv"
Jan 23 17:15:00 crc kubenswrapper[4606]: I0123 17:15:00.348365 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2fd7093e-6417-481c-886a-aeaa72d6e538-secret-volume\") pod \"collect-profiles-29486475-859tv\" (UID: \"2fd7093e-6417-481c-886a-aeaa72d6e538\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486475-859tv"
Jan 23 17:15:00 crc kubenswrapper[4606]: I0123 17:15:00.357407 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q57b6\" (UniqueName: \"kubernetes.io/projected/2fd7093e-6417-481c-886a-aeaa72d6e538-kube-api-access-q57b6\") pod \"collect-profiles-29486475-859tv\" (UID: \"2fd7093e-6417-481c-886a-aeaa72d6e538\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486475-859tv"
Jan 23 17:15:00 crc kubenswrapper[4606]: I0123 17:15:00.472370 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486475-859tv"
Jan 23 17:15:00 crc kubenswrapper[4606]: I0123 17:15:00.946778 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486475-859tv"]
Jan 23 17:15:01 crc kubenswrapper[4606]: I0123 17:15:01.539082 4606 generic.go:334] "Generic (PLEG): container finished" podID="2fd7093e-6417-481c-886a-aeaa72d6e538" containerID="116d4aa9596f3d7e135b4a0104ce8c336f5cf63bbabd2126789e258b69d58f12" exitCode=0
Jan 23 17:15:01 crc kubenswrapper[4606]: I0123 17:15:01.539689 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486475-859tv" event={"ID":"2fd7093e-6417-481c-886a-aeaa72d6e538","Type":"ContainerDied","Data":"116d4aa9596f3d7e135b4a0104ce8c336f5cf63bbabd2126789e258b69d58f12"}
Jan 23 17:15:01 crc kubenswrapper[4606]: I0123 17:15:01.540171 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486475-859tv" event={"ID":"2fd7093e-6417-481c-886a-aeaa72d6e538","Type":"ContainerStarted","Data":"1480c4a4e16de8669a453c5f3f2549869f0a7139accd89ee0d960791e06d9a37"}
Jan 23 17:15:01 crc kubenswrapper[4606]: I0123 17:15:01.542346 4606 generic.go:334] "Generic (PLEG): container finished" podID="d8804eac-7ae1-4c53-898e-858547a88346" containerID="df25bbc0898eeff473059bb7db73e87de0f4fb0b7c08407ff39785ef1d834750" exitCode=0
Jan 23 17:15:01 crc kubenswrapper[4606]: I0123 17:15:01.542385 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6vn9f" event={"ID":"d8804eac-7ae1-4c53-898e-858547a88346","Type":"ContainerDied","Data":"df25bbc0898eeff473059bb7db73e87de0f4fb0b7c08407ff39785ef1d834750"}
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.105346 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486475-859tv"
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.200915 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2fd7093e-6417-481c-886a-aeaa72d6e538-secret-volume\") pod \"2fd7093e-6417-481c-886a-aeaa72d6e538\" (UID: \"2fd7093e-6417-481c-886a-aeaa72d6e538\") "
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.201296 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2fd7093e-6417-481c-886a-aeaa72d6e538-config-volume\") pod \"2fd7093e-6417-481c-886a-aeaa72d6e538\" (UID: \"2fd7093e-6417-481c-886a-aeaa72d6e538\") "
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.201417 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q57b6\" (UniqueName: \"kubernetes.io/projected/2fd7093e-6417-481c-886a-aeaa72d6e538-kube-api-access-q57b6\") pod \"2fd7093e-6417-481c-886a-aeaa72d6e538\" (UID: \"2fd7093e-6417-481c-886a-aeaa72d6e538\") "
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.202931 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fd7093e-6417-481c-886a-aeaa72d6e538-config-volume" (OuterVolumeSpecName: "config-volume") pod "2fd7093e-6417-481c-886a-aeaa72d6e538" (UID: "2fd7093e-6417-481c-886a-aeaa72d6e538"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.208996 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fd7093e-6417-481c-886a-aeaa72d6e538-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2fd7093e-6417-481c-886a-aeaa72d6e538" (UID: "2fd7093e-6417-481c-886a-aeaa72d6e538"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.209249 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fd7093e-6417-481c-886a-aeaa72d6e538-kube-api-access-q57b6" (OuterVolumeSpecName: "kube-api-access-q57b6") pod "2fd7093e-6417-481c-886a-aeaa72d6e538" (UID: "2fd7093e-6417-481c-886a-aeaa72d6e538"). InnerVolumeSpecName "kube-api-access-q57b6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.274115 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6vn9f"
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.303355 4606 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2fd7093e-6417-481c-886a-aeaa72d6e538-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.303395 4606 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2fd7093e-6417-481c-886a-aeaa72d6e538-config-volume\") on node \"crc\" DevicePath \"\""
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.303407 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q57b6\" (UniqueName: \"kubernetes.io/projected/2fd7093e-6417-481c-886a-aeaa72d6e538-kube-api-access-q57b6\") on node \"crc\" DevicePath \"\""
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.404498 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8804eac-7ae1-4c53-898e-858547a88346-inventory\") pod \"d8804eac-7ae1-4c53-898e-858547a88346\" (UID: \"d8804eac-7ae1-4c53-898e-858547a88346\") "
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.404629 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldz42\" (UniqueName: \"kubernetes.io/projected/d8804eac-7ae1-4c53-898e-858547a88346-kube-api-access-ldz42\") pod \"d8804eac-7ae1-4c53-898e-858547a88346\" (UID: \"d8804eac-7ae1-4c53-898e-858547a88346\") "
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.404663 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d8804eac-7ae1-4c53-898e-858547a88346-ssh-key-openstack-edpm-ipam\") pod \"d8804eac-7ae1-4c53-898e-858547a88346\" (UID: \"d8804eac-7ae1-4c53-898e-858547a88346\") "
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.404796 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8804eac-7ae1-4c53-898e-858547a88346-repo-setup-combined-ca-bundle\") pod \"d8804eac-7ae1-4c53-898e-858547a88346\" (UID: \"d8804eac-7ae1-4c53-898e-858547a88346\") "
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.409499 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8804eac-7ae1-4c53-898e-858547a88346-kube-api-access-ldz42" (OuterVolumeSpecName: "kube-api-access-ldz42") pod "d8804eac-7ae1-4c53-898e-858547a88346" (UID: "d8804eac-7ae1-4c53-898e-858547a88346"). InnerVolumeSpecName "kube-api-access-ldz42". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.410276 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8804eac-7ae1-4c53-898e-858547a88346-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "d8804eac-7ae1-4c53-898e-858547a88346" (UID: "d8804eac-7ae1-4c53-898e-858547a88346"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.436792 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8804eac-7ae1-4c53-898e-858547a88346-inventory" (OuterVolumeSpecName: "inventory") pod "d8804eac-7ae1-4c53-898e-858547a88346" (UID: "d8804eac-7ae1-4c53-898e-858547a88346"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.440327 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8804eac-7ae1-4c53-898e-858547a88346-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d8804eac-7ae1-4c53-898e-858547a88346" (UID: "d8804eac-7ae1-4c53-898e-858547a88346"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.507483 4606 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8804eac-7ae1-4c53-898e-858547a88346-inventory\") on node \"crc\" DevicePath \"\""
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.507510 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldz42\" (UniqueName: \"kubernetes.io/projected/d8804eac-7ae1-4c53-898e-858547a88346-kube-api-access-ldz42\") on node \"crc\" DevicePath \"\""
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.507521 4606 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d8804eac-7ae1-4c53-898e-858547a88346-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.507529 4606 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8804eac-7ae1-4c53-898e-858547a88346-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.565370 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6vn9f" event={"ID":"d8804eac-7ae1-4c53-898e-858547a88346","Type":"ContainerDied","Data":"49800e74fe330eab3f0cb359dd293dfff9ecc5a60c7829a0c343b9c77ac430af"}
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.565408 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49800e74fe330eab3f0cb359dd293dfff9ecc5a60c7829a0c343b9c77ac430af"
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.565459 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6vn9f"
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.575510 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486475-859tv" event={"ID":"2fd7093e-6417-481c-886a-aeaa72d6e538","Type":"ContainerDied","Data":"1480c4a4e16de8669a453c5f3f2549869f0a7139accd89ee0d960791e06d9a37"}
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.575555 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1480c4a4e16de8669a453c5f3f2549869f0a7139accd89ee0d960791e06d9a37"
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.575644 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486475-859tv"
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.680894 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-mv75m"]
Jan 23 17:15:03 crc kubenswrapper[4606]: E0123 17:15:03.681387 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fd7093e-6417-481c-886a-aeaa72d6e538" containerName="collect-profiles"
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.681412 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fd7093e-6417-481c-886a-aeaa72d6e538" containerName="collect-profiles"
Jan 23 17:15:03 crc kubenswrapper[4606]: E0123 17:15:03.681455 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8804eac-7ae1-4c53-898e-858547a88346" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.681467 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8804eac-7ae1-4c53-898e-858547a88346" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.681721 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fd7093e-6417-481c-886a-aeaa72d6e538" containerName="collect-profiles"
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.681745 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8804eac-7ae1-4c53-898e-858547a88346" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.682350 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mv75m"
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.684357 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.684577 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.684980 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.685203 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8tp68"
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.737905 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-mv75m"]
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.814284 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f98f049-04dc-4e56-9351-da168167af83-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mv75m\" (UID: \"7f98f049-04dc-4e56-9351-da168167af83\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mv75m"
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.814420 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f98f049-04dc-4e56-9351-da168167af83-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mv75m\" (UID: \"7f98f049-04dc-4e56-9351-da168167af83\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mv75m"
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.814519 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln4x9\" (UniqueName: \"kubernetes.io/projected/7f98f049-04dc-4e56-9351-da168167af83-kube-api-access-ln4x9\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mv75m\" (UID: \"7f98f049-04dc-4e56-9351-da168167af83\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mv75m"
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.916026 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f98f049-04dc-4e56-9351-da168167af83-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mv75m\" (UID: \"7f98f049-04dc-4e56-9351-da168167af83\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mv75m"
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.916402 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln4x9\" (UniqueName: \"kubernetes.io/projected/7f98f049-04dc-4e56-9351-da168167af83-kube-api-access-ln4x9\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mv75m\" (UID: \"7f98f049-04dc-4e56-9351-da168167af83\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mv75m"
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.916662 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f98f049-04dc-4e56-9351-da168167af83-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mv75m\" (UID: \"7f98f049-04dc-4e56-9351-da168167af83\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mv75m"
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.920655 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f98f049-04dc-4e56-9351-da168167af83-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mv75m\" (UID: \"7f98f049-04dc-4e56-9351-da168167af83\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mv75m"
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.922489 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f98f049-04dc-4e56-9351-da168167af83-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mv75m\" (UID: \"7f98f049-04dc-4e56-9351-da168167af83\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mv75m"
Jan 23 17:15:03 crc kubenswrapper[4606]: I0123 17:15:03.937153 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln4x9\" (UniqueName: \"kubernetes.io/projected/7f98f049-04dc-4e56-9351-da168167af83-kube-api-access-ln4x9\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mv75m\" (UID: \"7f98f049-04dc-4e56-9351-da168167af83\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mv75m"
Jan 23 17:15:04 crc kubenswrapper[4606]: I0123 17:15:04.005172 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mv75m"
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mv75m" Jan 23 17:15:04 crc kubenswrapper[4606]: I0123 17:15:04.562631 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-mv75m"] Jan 23 17:15:04 crc kubenswrapper[4606]: W0123 17:15:04.564308 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f98f049_04dc_4e56_9351_da168167af83.slice/crio-2f9433111f004cc01b8ca97b528b71f9e97a740e6cff79929adfbd70a4c67985 WatchSource:0}: Error finding container 2f9433111f004cc01b8ca97b528b71f9e97a740e6cff79929adfbd70a4c67985: Status 404 returned error can't find the container with id 2f9433111f004cc01b8ca97b528b71f9e97a740e6cff79929adfbd70a4c67985 Jan 23 17:15:04 crc kubenswrapper[4606]: I0123 17:15:04.584389 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mv75m" event={"ID":"7f98f049-04dc-4e56-9351-da168167af83","Type":"ContainerStarted","Data":"2f9433111f004cc01b8ca97b528b71f9e97a740e6cff79929adfbd70a4c67985"} Jan 23 17:15:05 crc kubenswrapper[4606]: I0123 17:15:05.593748 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mv75m" event={"ID":"7f98f049-04dc-4e56-9351-da168167af83","Type":"ContainerStarted","Data":"9f817a816e928b176353e8912c9be7722f0adb0cea809a8a727658863465f698"} Jan 23 17:15:05 crc kubenswrapper[4606]: I0123 17:15:05.610277 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mv75m" podStartSLOduration=2.111184182 podStartE2EDuration="2.610262733s" podCreationTimestamp="2026-01-23 17:15:03 +0000 UTC" firstStartedPulling="2026-01-23 17:15:04.565859364 +0000 UTC m=+1338.566051770" lastFinishedPulling="2026-01-23 17:15:05.064937915 +0000 UTC m=+1339.065130321" observedRunningTime="2026-01-23 17:15:05.607085958 +0000 UTC m=+1339.607278364" watchObservedRunningTime="2026-01-23 17:15:05.610262733 +0000 UTC m=+1339.610455139" Jan 23 17:15:08 crc kubenswrapper[4606]: I0123 17:15:08.621244 4606 generic.go:334] "Generic (PLEG): container finished" podID="7f98f049-04dc-4e56-9351-da168167af83" containerID="9f817a816e928b176353e8912c9be7722f0adb0cea809a8a727658863465f698" exitCode=0 Jan 23 17:15:08 crc kubenswrapper[4606]: I0123 17:15:08.621529 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mv75m" event={"ID":"7f98f049-04dc-4e56-9351-da168167af83","Type":"ContainerDied","Data":"9f817a816e928b176353e8912c9be7722f0adb0cea809a8a727658863465f698"} Jan 23 17:15:10 crc kubenswrapper[4606]: I0123 17:15:10.093509 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mv75m" Jan 23 17:15:10 crc kubenswrapper[4606]: I0123 17:15:10.246179 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln4x9\" (UniqueName: \"kubernetes.io/projected/7f98f049-04dc-4e56-9351-da168167af83-kube-api-access-ln4x9\") pod \"7f98f049-04dc-4e56-9351-da168167af83\" (UID: \"7f98f049-04dc-4e56-9351-da168167af83\") " Jan 23 17:15:10 crc kubenswrapper[4606]: I0123 17:15:10.246328 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f98f049-04dc-4e56-9351-da168167af83-inventory\") pod \"7f98f049-04dc-4e56-9351-da168167af83\" (UID: \"7f98f049-04dc-4e56-9351-da168167af83\") " Jan 23 17:15:10 crc kubenswrapper[4606]: I0123 17:15:10.246372 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f98f049-04dc-4e56-9351-da168167af83-ssh-key-openstack-edpm-ipam\") pod \"7f98f049-04dc-4e56-9351-da168167af83\" (UID: \"7f98f049-04dc-4e56-9351-da168167af83\") " Jan 23 17:15:10 crc kubenswrapper[4606]: I0123 17:15:10.252080 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f98f049-04dc-4e56-9351-da168167af83-kube-api-access-ln4x9" (OuterVolumeSpecName: "kube-api-access-ln4x9") pod "7f98f049-04dc-4e56-9351-da168167af83" (UID: "7f98f049-04dc-4e56-9351-da168167af83"). InnerVolumeSpecName "kube-api-access-ln4x9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:15:10 crc kubenswrapper[4606]: I0123 17:15:10.292686 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f98f049-04dc-4e56-9351-da168167af83-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7f98f049-04dc-4e56-9351-da168167af83" (UID: "7f98f049-04dc-4e56-9351-da168167af83"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:15:10 crc kubenswrapper[4606]: I0123 17:15:10.322876 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f98f049-04dc-4e56-9351-da168167af83-inventory" (OuterVolumeSpecName: "inventory") pod "7f98f049-04dc-4e56-9351-da168167af83" (UID: "7f98f049-04dc-4e56-9351-da168167af83"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:15:10 crc kubenswrapper[4606]: I0123 17:15:10.348634 4606 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f98f049-04dc-4e56-9351-da168167af83-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 17:15:10 crc kubenswrapper[4606]: I0123 17:15:10.348675 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln4x9\" (UniqueName: \"kubernetes.io/projected/7f98f049-04dc-4e56-9351-da168167af83-kube-api-access-ln4x9\") on node \"crc\" DevicePath \"\"" Jan 23 17:15:10 crc kubenswrapper[4606]: I0123 17:15:10.348692 4606 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f98f049-04dc-4e56-9351-da168167af83-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 17:15:10 crc kubenswrapper[4606]: I0123 17:15:10.644887 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mv75m" event={"ID":"7f98f049-04dc-4e56-9351-da168167af83","Type":"ContainerDied","Data":"2f9433111f004cc01b8ca97b528b71f9e97a740e6cff79929adfbd70a4c67985"} Jan 23 17:15:10 crc kubenswrapper[4606]: I0123 17:15:10.644928 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f9433111f004cc01b8ca97b528b71f9e97a740e6cff79929adfbd70a4c67985" Jan 23 17:15:10 crc kubenswrapper[4606]: I0123 17:15:10.644983 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mv75m" Jan 23 17:15:10 crc kubenswrapper[4606]: I0123 17:15:10.748853 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ftsjh"] Jan 23 17:15:10 crc kubenswrapper[4606]: E0123 17:15:10.749265 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f98f049-04dc-4e56-9351-da168167af83" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 23 17:15:10 crc kubenswrapper[4606]: I0123 17:15:10.749279 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f98f049-04dc-4e56-9351-da168167af83" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 23 17:15:10 crc kubenswrapper[4606]: I0123 17:15:10.749506 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f98f049-04dc-4e56-9351-da168167af83" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 23 17:15:10 crc kubenswrapper[4606]: I0123 17:15:10.750235 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ftsjh" Jan 23 17:15:10 crc kubenswrapper[4606]: I0123 17:15:10.759304 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8tp68" Jan 23 17:15:10 crc kubenswrapper[4606]: I0123 17:15:10.759553 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 17:15:10 crc kubenswrapper[4606]: I0123 17:15:10.759743 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 17:15:10 crc kubenswrapper[4606]: I0123 17:15:10.759764 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 17:15:10 crc kubenswrapper[4606]: I0123 17:15:10.781863 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ftsjh"] Jan 23 17:15:10 crc kubenswrapper[4606]: I0123 17:15:10.858119 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cdg7\" (UniqueName: \"kubernetes.io/projected/128354f4-3952-483b-b36b-0e21d1462038-kube-api-access-5cdg7\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ftsjh\" (UID: \"128354f4-3952-483b-b36b-0e21d1462038\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ftsjh" Jan 23 17:15:10 crc kubenswrapper[4606]: I0123 17:15:10.858467 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/128354f4-3952-483b-b36b-0e21d1462038-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ftsjh\" (UID: \"128354f4-3952-483b-b36b-0e21d1462038\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ftsjh" Jan 23 17:15:10 crc kubenswrapper[4606]: I0123 17:15:10.858500 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/128354f4-3952-483b-b36b-0e21d1462038-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ftsjh\" (UID: \"128354f4-3952-483b-b36b-0e21d1462038\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ftsjh" Jan 23 17:15:10 crc kubenswrapper[4606]: I0123 17:15:10.858536 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/128354f4-3952-483b-b36b-0e21d1462038-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ftsjh\" (UID: \"128354f4-3952-483b-b36b-0e21d1462038\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ftsjh" Jan 23 17:15:10 crc kubenswrapper[4606]: I0123 17:15:10.960725 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cdg7\" (UniqueName: \"kubernetes.io/projected/128354f4-3952-483b-b36b-0e21d1462038-kube-api-access-5cdg7\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ftsjh\" (UID: \"128354f4-3952-483b-b36b-0e21d1462038\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ftsjh" Jan 23 17:15:10 crc kubenswrapper[4606]: I0123 17:15:10.960849 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/128354f4-3952-483b-b36b-0e21d1462038-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ftsjh\" (UID: \"128354f4-3952-483b-b36b-0e21d1462038\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ftsjh" Jan 23 17:15:10 crc kubenswrapper[4606]: I0123 17:15:10.960888 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/128354f4-3952-483b-b36b-0e21d1462038-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ftsjh\" (UID: \"128354f4-3952-483b-b36b-0e21d1462038\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ftsjh" Jan 23 17:15:10 crc kubenswrapper[4606]: I0123 17:15:10.960921 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/128354f4-3952-483b-b36b-0e21d1462038-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ftsjh\" (UID: \"128354f4-3952-483b-b36b-0e21d1462038\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ftsjh" Jan 23 17:15:10 crc kubenswrapper[4606]: I0123 17:15:10.965703 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/128354f4-3952-483b-b36b-0e21d1462038-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ftsjh\" (UID: \"128354f4-3952-483b-b36b-0e21d1462038\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ftsjh" Jan 23 17:15:10 crc kubenswrapper[4606]: I0123 17:15:10.966335 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/128354f4-3952-483b-b36b-0e21d1462038-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ftsjh\" (UID: \"128354f4-3952-483b-b36b-0e21d1462038\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ftsjh" Jan 23 17:15:10 crc kubenswrapper[4606]: I0123 17:15:10.972716 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/128354f4-3952-483b-b36b-0e21d1462038-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ftsjh\" (UID: \"128354f4-3952-483b-b36b-0e21d1462038\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ftsjh" Jan 23 17:15:10 crc kubenswrapper[4606]: I0123 17:15:10.978787 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cdg7\" (UniqueName: \"kubernetes.io/projected/128354f4-3952-483b-b36b-0e21d1462038-kube-api-access-5cdg7\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ftsjh\" (UID: \"128354f4-3952-483b-b36b-0e21d1462038\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ftsjh" Jan 23 17:15:11 crc kubenswrapper[4606]: I0123 17:15:11.086214 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ftsjh" Jan 23 17:15:11 crc kubenswrapper[4606]: I0123 17:15:11.423798 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ftsjh"] Jan 23 17:15:11 crc kubenswrapper[4606]: I0123 17:15:11.656635 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ftsjh" event={"ID":"128354f4-3952-483b-b36b-0e21d1462038","Type":"ContainerStarted","Data":"ea1099c635126984743bc8aa50d6f90d31ddccf5214e19a318a82a1a48bacd33"} Jan 23 17:15:12 crc kubenswrapper[4606]: I0123 17:15:12.666020 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ftsjh" event={"ID":"128354f4-3952-483b-b36b-0e21d1462038","Type":"ContainerStarted","Data":"74a87ec17fabd4b1e53fe6f275e6b3c50f2dbead57d6516a1f44e86f53ecc925"} Jan 23 17:15:12 crc kubenswrapper[4606]: I0123 17:15:12.684152 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ftsjh" podStartSLOduration=2.250585714 podStartE2EDuration="2.684136568s" podCreationTimestamp="2026-01-23 17:15:10 +0000 UTC" firstStartedPulling="2026-01-23 17:15:11.429129389 +0000 UTC m=+1345.429321785" lastFinishedPulling="2026-01-23 17:15:11.862680233 +0000 UTC m=+1345.862872639" observedRunningTime="2026-01-23 17:15:12.681508075 +0000 UTC m=+1346.681700481" watchObservedRunningTime="2026-01-23 17:15:12.684136568 +0000 UTC m=+1346.684328974" Jan 23 17:16:00 crc kubenswrapper[4606]: I0123 17:16:00.078267 4606 scope.go:117] "RemoveContainer" containerID="cce42ffa99cfa6746f3fe53d9751dd176d42f3b1f3eb90e8bb8736ceda609654" Jan 23 17:16:00 crc kubenswrapper[4606]: I0123 17:16:00.114479 4606 scope.go:117] "RemoveContainer" containerID="a7070e36597fcdf8aa3834affbe73a3753b346f31476a5736d178e186a804eb1" Jan 23 17:16:00 crc kubenswrapper[4606]: I0123 17:16:00.158299 4606 scope.go:117] "RemoveContainer" containerID="fabbf875f263e69f9bbbb036f5aa4cebc8984f405c2b1b881e914d40fcc02082" Jan 23 17:16:29 crc kubenswrapper[4606]: I0123 17:16:29.934181 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:16:29 crc kubenswrapper[4606]: I0123 17:16:29.935974 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:16:59 crc kubenswrapper[4606]: I0123 17:16:59.934169 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:16:59 crc kubenswrapper[4606]: I0123 17:16:59.934735 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:17:00 crc kubenswrapper[4606]: I0123 17:17:00.254251 4606 scope.go:117] "RemoveContainer" containerID="e90a8d5e00adaf8d666cda264fef91b8943b952d0cc9298acd9c7b164f5b85c2" Jan 23 17:17:29 crc kubenswrapper[4606]: I0123 17:17:29.933433 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:17:29 crc kubenswrapper[4606]: I0123 17:17:29.933935 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:17:29 crc kubenswrapper[4606]: I0123 17:17:29.933977 4606 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" Jan 23 17:17:29 crc kubenswrapper[4606]: I0123 17:17:29.934803 4606 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ebadbc612659691d0364633624880b271d3a98bbea2b5aa4b6e90c782af998b4"} pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 17:17:29 crc kubenswrapper[4606]: I0123 17:17:29.934873 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" containerID="cri-o://ebadbc612659691d0364633624880b271d3a98bbea2b5aa4b6e90c782af998b4" gracePeriod=600 Jan 23 17:17:30 crc kubenswrapper[4606]: I0123 17:17:30.954521 4606 generic.go:334] "Generic (PLEG): container finished" podID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerID="ebadbc612659691d0364633624880b271d3a98bbea2b5aa4b6e90c782af998b4" exitCode=0 Jan 23 17:17:30 crc kubenswrapper[4606]: I0123 17:17:30.954555 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerDied","Data":"ebadbc612659691d0364633624880b271d3a98bbea2b5aa4b6e90c782af998b4"} Jan 23 17:17:30 crc kubenswrapper[4606]: I0123 17:17:30.954834 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerStarted","Data":"7b51d444863ebb98e0cb740e7f913e5f2b808074bf1adeb1ef35356e37050c12"} Jan 23 17:17:30 crc kubenswrapper[4606]: I0123 17:17:30.954854 4606 scope.go:117] "RemoveContainer" containerID="c0aff6847a831032f0ede7590ffa543c8a1f938be08add052c2862cc2b6c92af" Jan 23 17:18:38 crc kubenswrapper[4606]: I0123 17:18:38.048985 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-4427-account-create-update-qjx9w"] Jan 23 17:18:38 crc kubenswrapper[4606]: I0123 17:18:38.061942 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-qqlz8"] Jan 23 17:18:38 crc kubenswrapper[4606]: I0123 17:18:38.078705 
Jan 23 17:18:38 crc kubenswrapper[4606]: I0123 17:18:38.078705 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-btl5s"]
Jan 23 17:18:38 crc kubenswrapper[4606]: I0123 17:18:38.086955 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-qqlz8"]
Jan 23 17:18:38 crc kubenswrapper[4606]: I0123 17:18:38.095866 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-btl5s"]
Jan 23 17:18:38 crc kubenswrapper[4606]: I0123 17:18:38.103687 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-4427-account-create-update-qjx9w"]
Jan 23 17:18:38 crc kubenswrapper[4606]: I0123 17:18:38.111953 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5106-account-create-update-5hqcd"]
Jan 23 17:18:38 crc kubenswrapper[4606]: I0123 17:18:38.120112 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5106-account-create-update-5hqcd"]
Jan 23 17:18:38 crc kubenswrapper[4606]: I0123 17:18:38.377722 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fa4f942-7d08-468f-b85d-da2c3e2f9538" path="/var/lib/kubelet/pods/4fa4f942-7d08-468f-b85d-da2c3e2f9538/volumes"
Jan 23 17:18:38 crc kubenswrapper[4606]: I0123 17:18:38.378615 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f83912b-7bd4-49dd-9584-4ea66a2405e2" path="/var/lib/kubelet/pods/7f83912b-7bd4-49dd-9584-4ea66a2405e2/volumes"
Jan 23 17:18:38 crc kubenswrapper[4606]: I0123 17:18:38.379778 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc9bdbca-64c9-4e24-9ca0-79a79ee4d9bf" path="/var/lib/kubelet/pods/dc9bdbca-64c9-4e24-9ca0-79a79ee4d9bf/volumes"
Jan 23 17:18:38 crc kubenswrapper[4606]: I0123 17:18:38.380465 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f37c097b-95e1-44fc-970a-a12f009ff813" path="/var/lib/kubelet/pods/f37c097b-95e1-44fc-970a-a12f009ff813/volumes"
Jan 23 17:18:39 crc kubenswrapper[4606]: I0123 17:18:39.036487 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-457lj"]
Jan 23 17:18:39 crc kubenswrapper[4606]: I0123 17:18:39.050105 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-9e2b-account-create-update-fqlvx"]
Jan 23 17:18:39 crc kubenswrapper[4606]: I0123 17:18:39.061968 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-9e2b-account-create-update-fqlvx"]
Jan 23 17:18:39 crc kubenswrapper[4606]: I0123 17:18:39.069388 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-457lj"]
Jan 23 17:18:40 crc kubenswrapper[4606]: I0123 17:18:40.381741 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18804b6a-aa2a-48d6-a27d-16e4e55772c8" path="/var/lib/kubelet/pods/18804b6a-aa2a-48d6-a27d-16e4e55772c8/volumes"
Jan 23 17:18:40 crc kubenswrapper[4606]: I0123 17:18:40.383190 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d5e8555-29bb-44bc-a6ea-57cb81e8df4d" path="/var/lib/kubelet/pods/9d5e8555-29bb-44bc-a6ea-57cb81e8df4d/volumes"
Jan 23 17:18:43 crc kubenswrapper[4606]: I0123 17:18:43.969948 4606 generic.go:334] "Generic (PLEG): container finished" podID="128354f4-3952-483b-b36b-0e21d1462038" containerID="74a87ec17fabd4b1e53fe6f275e6b3c50f2dbead57d6516a1f44e86f53ecc925" exitCode=0
Jan 23 17:18:43 crc kubenswrapper[4606]: I0123 17:18:43.970017 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ftsjh" event={"ID":"128354f4-3952-483b-b36b-0e21d1462038","Type":"ContainerDied","Data":"74a87ec17fabd4b1e53fe6f275e6b3c50f2dbead57d6516a1f44e86f53ecc925"}
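Note: the kubelet_volumes.go "Cleaned up orphaned pod volumes dir" lines above are housekeeping for pods already removed from the API: once a pod UID is gone and nothing remains mounted under it, the per-pod volumes directory can be deleted. A simplified version of that sweep (illustrative sketch; the real kubelet also verifies no volumes are still mounted before removing anything):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cleanupOrphanedPodDirs removes the volumes dir of every pod UID under root
// that is no longer in the active set, echoing the log lines above.
func cleanupOrphanedPodDirs(root string, active map[string]bool) error {
	entries, err := os.ReadDir(root)
	if err != nil {
		return err
	}
	for _, e := range entries {
		if !e.IsDir() || active[e.Name()] {
			continue
		}
		dir := filepath.Join(root, e.Name(), "volumes")
		if err := os.RemoveAll(dir); err != nil {
			return err
		}
		fmt.Printf("Cleaned up orphaned pod volumes dir podUID=%q path=%q\n", e.Name(), dir)
	}
	return nil
}

func main() {
	// Root matches the paths in the log; point it at a scratch dir when experimenting.
	_ = cleanupOrphanedPodDirs("/var/lib/kubelet/pods", map[string]bool{})
}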
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ftsjh" event={"ID":"128354f4-3952-483b-b36b-0e21d1462038","Type":"ContainerDied","Data":"74a87ec17fabd4b1e53fe6f275e6b3c50f2dbead57d6516a1f44e86f53ecc925"} Jan 23 17:18:44 crc kubenswrapper[4606]: I0123 17:18:44.033105 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ldlc4"] Jan 23 17:18:44 crc kubenswrapper[4606]: I0123 17:18:44.035039 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ldlc4" Jan 23 17:18:44 crc kubenswrapper[4606]: I0123 17:18:44.048733 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ldlc4"] Jan 23 17:18:44 crc kubenswrapper[4606]: I0123 17:18:44.138061 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a11ad634-8ab5-421a-bed0-2e9d4676209b-utilities\") pod \"certified-operators-ldlc4\" (UID: \"a11ad634-8ab5-421a-bed0-2e9d4676209b\") " pod="openshift-marketplace/certified-operators-ldlc4" Jan 23 17:18:44 crc kubenswrapper[4606]: I0123 17:18:44.138357 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a11ad634-8ab5-421a-bed0-2e9d4676209b-catalog-content\") pod \"certified-operators-ldlc4\" (UID: \"a11ad634-8ab5-421a-bed0-2e9d4676209b\") " pod="openshift-marketplace/certified-operators-ldlc4" Jan 23 17:18:44 crc kubenswrapper[4606]: I0123 17:18:44.138446 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7std4\" (UniqueName: \"kubernetes.io/projected/a11ad634-8ab5-421a-bed0-2e9d4676209b-kube-api-access-7std4\") pod \"certified-operators-ldlc4\" (UID: \"a11ad634-8ab5-421a-bed0-2e9d4676209b\") " pod="openshift-marketplace/certified-operators-ldlc4" Jan 23 17:18:44 crc kubenswrapper[4606]: I0123 17:18:44.240164 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a11ad634-8ab5-421a-bed0-2e9d4676209b-catalog-content\") pod \"certified-operators-ldlc4\" (UID: \"a11ad634-8ab5-421a-bed0-2e9d4676209b\") " pod="openshift-marketplace/certified-operators-ldlc4" Jan 23 17:18:44 crc kubenswrapper[4606]: I0123 17:18:44.240504 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7std4\" (UniqueName: \"kubernetes.io/projected/a11ad634-8ab5-421a-bed0-2e9d4676209b-kube-api-access-7std4\") pod \"certified-operators-ldlc4\" (UID: \"a11ad634-8ab5-421a-bed0-2e9d4676209b\") " pod="openshift-marketplace/certified-operators-ldlc4" Jan 23 17:18:44 crc kubenswrapper[4606]: I0123 17:18:44.240838 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a11ad634-8ab5-421a-bed0-2e9d4676209b-utilities\") pod \"certified-operators-ldlc4\" (UID: \"a11ad634-8ab5-421a-bed0-2e9d4676209b\") " pod="openshift-marketplace/certified-operators-ldlc4" Jan 23 17:18:44 crc kubenswrapper[4606]: I0123 17:18:44.241281 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a11ad634-8ab5-421a-bed0-2e9d4676209b-catalog-content\") pod \"certified-operators-ldlc4\" (UID: \"a11ad634-8ab5-421a-bed0-2e9d4676209b\") " 
pod="openshift-marketplace/certified-operators-ldlc4" Jan 23 17:18:44 crc kubenswrapper[4606]: I0123 17:18:44.241534 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a11ad634-8ab5-421a-bed0-2e9d4676209b-utilities\") pod \"certified-operators-ldlc4\" (UID: \"a11ad634-8ab5-421a-bed0-2e9d4676209b\") " pod="openshift-marketplace/certified-operators-ldlc4" Jan 23 17:18:44 crc kubenswrapper[4606]: I0123 17:18:44.262340 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7std4\" (UniqueName: \"kubernetes.io/projected/a11ad634-8ab5-421a-bed0-2e9d4676209b-kube-api-access-7std4\") pod \"certified-operators-ldlc4\" (UID: \"a11ad634-8ab5-421a-bed0-2e9d4676209b\") " pod="openshift-marketplace/certified-operators-ldlc4" Jan 23 17:18:44 crc kubenswrapper[4606]: I0123 17:18:44.375706 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ldlc4" Jan 23 17:18:44 crc kubenswrapper[4606]: I0123 17:18:44.951113 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ldlc4"] Jan 23 17:18:44 crc kubenswrapper[4606]: I0123 17:18:44.994313 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ldlc4" event={"ID":"a11ad634-8ab5-421a-bed0-2e9d4676209b","Type":"ContainerStarted","Data":"9b44853fedd3a8910e48e744739fd66649abcd3d3d3bef94064a61ee20c845ed"} Jan 23 17:18:45 crc kubenswrapper[4606]: I0123 17:18:45.701879 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ftsjh" Jan 23 17:18:45 crc kubenswrapper[4606]: I0123 17:18:45.780331 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cdg7\" (UniqueName: \"kubernetes.io/projected/128354f4-3952-483b-b36b-0e21d1462038-kube-api-access-5cdg7\") pod \"128354f4-3952-483b-b36b-0e21d1462038\" (UID: \"128354f4-3952-483b-b36b-0e21d1462038\") " Jan 23 17:18:45 crc kubenswrapper[4606]: I0123 17:18:45.780747 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/128354f4-3952-483b-b36b-0e21d1462038-bootstrap-combined-ca-bundle\") pod \"128354f4-3952-483b-b36b-0e21d1462038\" (UID: \"128354f4-3952-483b-b36b-0e21d1462038\") " Jan 23 17:18:45 crc kubenswrapper[4606]: I0123 17:18:45.780921 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/128354f4-3952-483b-b36b-0e21d1462038-inventory\") pod \"128354f4-3952-483b-b36b-0e21d1462038\" (UID: \"128354f4-3952-483b-b36b-0e21d1462038\") " Jan 23 17:18:45 crc kubenswrapper[4606]: I0123 17:18:45.781060 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/128354f4-3952-483b-b36b-0e21d1462038-ssh-key-openstack-edpm-ipam\") pod \"128354f4-3952-483b-b36b-0e21d1462038\" (UID: \"128354f4-3952-483b-b36b-0e21d1462038\") " Jan 23 17:18:45 crc kubenswrapper[4606]: I0123 17:18:45.787050 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/128354f4-3952-483b-b36b-0e21d1462038-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "128354f4-3952-483b-b36b-0e21d1462038" (UID: 
"128354f4-3952-483b-b36b-0e21d1462038"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:18:45 crc kubenswrapper[4606]: I0123 17:18:45.819289 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/128354f4-3952-483b-b36b-0e21d1462038-kube-api-access-5cdg7" (OuterVolumeSpecName: "kube-api-access-5cdg7") pod "128354f4-3952-483b-b36b-0e21d1462038" (UID: "128354f4-3952-483b-b36b-0e21d1462038"). InnerVolumeSpecName "kube-api-access-5cdg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:18:45 crc kubenswrapper[4606]: I0123 17:18:45.836504 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/128354f4-3952-483b-b36b-0e21d1462038-inventory" (OuterVolumeSpecName: "inventory") pod "128354f4-3952-483b-b36b-0e21d1462038" (UID: "128354f4-3952-483b-b36b-0e21d1462038"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:18:45 crc kubenswrapper[4606]: I0123 17:18:45.863329 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/128354f4-3952-483b-b36b-0e21d1462038-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "128354f4-3952-483b-b36b-0e21d1462038" (UID: "128354f4-3952-483b-b36b-0e21d1462038"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:18:45 crc kubenswrapper[4606]: I0123 17:18:45.883774 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cdg7\" (UniqueName: \"kubernetes.io/projected/128354f4-3952-483b-b36b-0e21d1462038-kube-api-access-5cdg7\") on node \"crc\" DevicePath \"\"" Jan 23 17:18:45 crc kubenswrapper[4606]: I0123 17:18:45.883822 4606 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/128354f4-3952-483b-b36b-0e21d1462038-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:18:45 crc kubenswrapper[4606]: I0123 17:18:45.883834 4606 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/128354f4-3952-483b-b36b-0e21d1462038-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 17:18:45 crc kubenswrapper[4606]: I0123 17:18:45.883844 4606 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/128354f4-3952-483b-b36b-0e21d1462038-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 17:18:46 crc kubenswrapper[4606]: I0123 17:18:46.004390 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ftsjh" event={"ID":"128354f4-3952-483b-b36b-0e21d1462038","Type":"ContainerDied","Data":"ea1099c635126984743bc8aa50d6f90d31ddccf5214e19a318a82a1a48bacd33"} Jan 23 17:18:46 crc kubenswrapper[4606]: I0123 17:18:46.004417 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ftsjh" Jan 23 17:18:46 crc kubenswrapper[4606]: I0123 17:18:46.004428 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea1099c635126984743bc8aa50d6f90d31ddccf5214e19a318a82a1a48bacd33" Jan 23 17:18:46 crc kubenswrapper[4606]: I0123 17:18:46.007336 4606 generic.go:334] "Generic (PLEG): container finished" podID="a11ad634-8ab5-421a-bed0-2e9d4676209b" containerID="485eefa8c6fe96bcc31303d0c1e92c185edd1089c5c4198f4d546d1b9de27ec5" exitCode=0 Jan 23 17:18:46 crc kubenswrapper[4606]: I0123 17:18:46.007372 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ldlc4" event={"ID":"a11ad634-8ab5-421a-bed0-2e9d4676209b","Type":"ContainerDied","Data":"485eefa8c6fe96bcc31303d0c1e92c185edd1089c5c4198f4d546d1b9de27ec5"} Jan 23 17:18:46 crc kubenswrapper[4606]: I0123 17:18:46.009407 4606 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 17:18:46 crc kubenswrapper[4606]: I0123 17:18:46.103500 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wpbtf"] Jan 23 17:18:46 crc kubenswrapper[4606]: E0123 17:18:46.104017 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="128354f4-3952-483b-b36b-0e21d1462038" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 23 17:18:46 crc kubenswrapper[4606]: I0123 17:18:46.104049 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="128354f4-3952-483b-b36b-0e21d1462038" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 23 17:18:46 crc kubenswrapper[4606]: I0123 17:18:46.104316 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="128354f4-3952-483b-b36b-0e21d1462038" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 23 17:18:46 crc kubenswrapper[4606]: I0123 17:18:46.105029 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wpbtf" Jan 23 17:18:46 crc kubenswrapper[4606]: I0123 17:18:46.117908 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 17:18:46 crc kubenswrapper[4606]: I0123 17:18:46.118404 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 17:18:46 crc kubenswrapper[4606]: I0123 17:18:46.119704 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8tp68" Jan 23 17:18:46 crc kubenswrapper[4606]: I0123 17:18:46.124287 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 17:18:46 crc kubenswrapper[4606]: I0123 17:18:46.130135 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wpbtf"] Jan 23 17:18:46 crc kubenswrapper[4606]: I0123 17:18:46.195300 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c268f2de-72d2-457d-b7e4-192af4f43fb8-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wpbtf\" (UID: \"c268f2de-72d2-457d-b7e4-192af4f43fb8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wpbtf" Jan 23 17:18:46 crc kubenswrapper[4606]: I0123 17:18:46.195770 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpgm9\" (UniqueName: \"kubernetes.io/projected/c268f2de-72d2-457d-b7e4-192af4f43fb8-kube-api-access-vpgm9\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wpbtf\" (UID: \"c268f2de-72d2-457d-b7e4-192af4f43fb8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wpbtf" Jan 23 17:18:46 crc kubenswrapper[4606]: I0123 17:18:46.196090 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c268f2de-72d2-457d-b7e4-192af4f43fb8-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wpbtf\" (UID: \"c268f2de-72d2-457d-b7e4-192af4f43fb8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wpbtf" Jan 23 17:18:46 crc kubenswrapper[4606]: I0123 17:18:46.298393 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c268f2de-72d2-457d-b7e4-192af4f43fb8-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wpbtf\" (UID: \"c268f2de-72d2-457d-b7e4-192af4f43fb8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wpbtf" Jan 23 17:18:46 crc kubenswrapper[4606]: I0123 17:18:46.298497 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c268f2de-72d2-457d-b7e4-192af4f43fb8-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wpbtf\" (UID: \"c268f2de-72d2-457d-b7e4-192af4f43fb8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wpbtf" Jan 23 17:18:46 crc kubenswrapper[4606]: I0123 17:18:46.298556 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpgm9\" (UniqueName: 
\"kubernetes.io/projected/c268f2de-72d2-457d-b7e4-192af4f43fb8-kube-api-access-vpgm9\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wpbtf\" (UID: \"c268f2de-72d2-457d-b7e4-192af4f43fb8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wpbtf" Jan 23 17:18:46 crc kubenswrapper[4606]: I0123 17:18:46.307695 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c268f2de-72d2-457d-b7e4-192af4f43fb8-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wpbtf\" (UID: \"c268f2de-72d2-457d-b7e4-192af4f43fb8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wpbtf" Jan 23 17:18:46 crc kubenswrapper[4606]: I0123 17:18:46.307763 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c268f2de-72d2-457d-b7e4-192af4f43fb8-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wpbtf\" (UID: \"c268f2de-72d2-457d-b7e4-192af4f43fb8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wpbtf" Jan 23 17:18:46 crc kubenswrapper[4606]: I0123 17:18:46.322343 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpgm9\" (UniqueName: \"kubernetes.io/projected/c268f2de-72d2-457d-b7e4-192af4f43fb8-kube-api-access-vpgm9\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wpbtf\" (UID: \"c268f2de-72d2-457d-b7e4-192af4f43fb8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wpbtf" Jan 23 17:18:46 crc kubenswrapper[4606]: I0123 17:18:46.426981 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8tp68" Jan 23 17:18:46 crc kubenswrapper[4606]: I0123 17:18:46.438667 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wpbtf" Jan 23 17:18:47 crc kubenswrapper[4606]: I0123 17:18:47.927285 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wpbtf"] Jan 23 17:18:48 crc kubenswrapper[4606]: I0123 17:18:48.025087 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wpbtf" event={"ID":"c268f2de-72d2-457d-b7e4-192af4f43fb8","Type":"ContainerStarted","Data":"d67a07aaec04851472010218a2e65ffc3242afb82ba0b416e7d3756e003daf1f"} Jan 23 17:18:48 crc kubenswrapper[4606]: I0123 17:18:48.026722 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ldlc4" event={"ID":"a11ad634-8ab5-421a-bed0-2e9d4676209b","Type":"ContainerStarted","Data":"ad6defd3d7d0726e5a80110ad5acb9ce0c3a2894da769d90e73d9cc79bd5b18b"} Jan 23 17:18:48 crc kubenswrapper[4606]: I0123 17:18:48.404957 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 17:18:49 crc kubenswrapper[4606]: I0123 17:18:49.036267 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wpbtf" event={"ID":"c268f2de-72d2-457d-b7e4-192af4f43fb8","Type":"ContainerStarted","Data":"3f7de9c60950726787720465719a63855bee5a254c49c63555390d4d48b27172"} Jan 23 17:18:49 crc kubenswrapper[4606]: I0123 17:18:49.038624 4606 generic.go:334] "Generic (PLEG): container finished" podID="a11ad634-8ab5-421a-bed0-2e9d4676209b" containerID="ad6defd3d7d0726e5a80110ad5acb9ce0c3a2894da769d90e73d9cc79bd5b18b" exitCode=0 Jan 23 17:18:49 crc kubenswrapper[4606]: I0123 17:18:49.038673 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ldlc4" event={"ID":"a11ad634-8ab5-421a-bed0-2e9d4676209b","Type":"ContainerDied","Data":"ad6defd3d7d0726e5a80110ad5acb9ce0c3a2894da769d90e73d9cc79bd5b18b"} Jan 23 17:18:49 crc kubenswrapper[4606]: I0123 17:18:49.053476 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wpbtf" podStartSLOduration=2.5887044599999998 podStartE2EDuration="3.053459342s" podCreationTimestamp="2026-01-23 17:18:46 +0000 UTC" firstStartedPulling="2026-01-23 17:18:47.937873791 +0000 UTC m=+1561.938066197" lastFinishedPulling="2026-01-23 17:18:48.402628673 +0000 UTC m=+1562.402821079" observedRunningTime="2026-01-23 17:18:49.052028579 +0000 UTC m=+1563.052220985" watchObservedRunningTime="2026-01-23 17:18:49.053459342 +0000 UTC m=+1563.053651748" Jan 23 17:18:50 crc kubenswrapper[4606]: I0123 17:18:50.049442 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ldlc4" event={"ID":"a11ad634-8ab5-421a-bed0-2e9d4676209b","Type":"ContainerStarted","Data":"46628fd098e5b46818a1335db8f68471179d84db5db481369063438d405366bd"} Jan 23 17:18:50 crc kubenswrapper[4606]: I0123 17:18:50.087375 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ldlc4" podStartSLOduration=2.634224115 podStartE2EDuration="6.087356128s" podCreationTimestamp="2026-01-23 17:18:44 +0000 UTC" firstStartedPulling="2026-01-23 17:18:46.009194041 +0000 UTC m=+1560.009386447" lastFinishedPulling="2026-01-23 17:18:49.462326064 +0000 UTC m=+1563.462518460" observedRunningTime="2026-01-23 
17:18:50.075035034 +0000 UTC m=+1564.075227450" watchObservedRunningTime="2026-01-23 17:18:50.087356128 +0000 UTC m=+1564.087548544" Jan 23 17:18:52 crc kubenswrapper[4606]: I0123 17:18:52.583230 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nmsgh"] Jan 23 17:18:52 crc kubenswrapper[4606]: I0123 17:18:52.585943 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nmsgh" Jan 23 17:18:52 crc kubenswrapper[4606]: I0123 17:18:52.619174 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nmsgh"] Jan 23 17:18:52 crc kubenswrapper[4606]: I0123 17:18:52.654369 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw7q5\" (UniqueName: \"kubernetes.io/projected/a4ba3e7d-36a2-448f-95b1-1884b0e34d6d-kube-api-access-sw7q5\") pod \"redhat-operators-nmsgh\" (UID: \"a4ba3e7d-36a2-448f-95b1-1884b0e34d6d\") " pod="openshift-marketplace/redhat-operators-nmsgh" Jan 23 17:18:52 crc kubenswrapper[4606]: I0123 17:18:52.654412 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4ba3e7d-36a2-448f-95b1-1884b0e34d6d-catalog-content\") pod \"redhat-operators-nmsgh\" (UID: \"a4ba3e7d-36a2-448f-95b1-1884b0e34d6d\") " pod="openshift-marketplace/redhat-operators-nmsgh" Jan 23 17:18:52 crc kubenswrapper[4606]: I0123 17:18:52.654492 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4ba3e7d-36a2-448f-95b1-1884b0e34d6d-utilities\") pod \"redhat-operators-nmsgh\" (UID: \"a4ba3e7d-36a2-448f-95b1-1884b0e34d6d\") " pod="openshift-marketplace/redhat-operators-nmsgh" Jan 23 17:18:52 crc kubenswrapper[4606]: I0123 17:18:52.756093 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw7q5\" (UniqueName: \"kubernetes.io/projected/a4ba3e7d-36a2-448f-95b1-1884b0e34d6d-kube-api-access-sw7q5\") pod \"redhat-operators-nmsgh\" (UID: \"a4ba3e7d-36a2-448f-95b1-1884b0e34d6d\") " pod="openshift-marketplace/redhat-operators-nmsgh" Jan 23 17:18:52 crc kubenswrapper[4606]: I0123 17:18:52.756389 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4ba3e7d-36a2-448f-95b1-1884b0e34d6d-catalog-content\") pod \"redhat-operators-nmsgh\" (UID: \"a4ba3e7d-36a2-448f-95b1-1884b0e34d6d\") " pod="openshift-marketplace/redhat-operators-nmsgh" Jan 23 17:18:52 crc kubenswrapper[4606]: I0123 17:18:52.756447 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4ba3e7d-36a2-448f-95b1-1884b0e34d6d-utilities\") pod \"redhat-operators-nmsgh\" (UID: \"a4ba3e7d-36a2-448f-95b1-1884b0e34d6d\") " pod="openshift-marketplace/redhat-operators-nmsgh" Jan 23 17:18:52 crc kubenswrapper[4606]: I0123 17:18:52.757248 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4ba3e7d-36a2-448f-95b1-1884b0e34d6d-catalog-content\") pod \"redhat-operators-nmsgh\" (UID: \"a4ba3e7d-36a2-448f-95b1-1884b0e34d6d\") " pod="openshift-marketplace/redhat-operators-nmsgh" Jan 23 17:18:52 crc kubenswrapper[4606]: I0123 17:18:52.757294 4606 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4ba3e7d-36a2-448f-95b1-1884b0e34d6d-utilities\") pod \"redhat-operators-nmsgh\" (UID: \"a4ba3e7d-36a2-448f-95b1-1884b0e34d6d\") " pod="openshift-marketplace/redhat-operators-nmsgh" Jan 23 17:18:52 crc kubenswrapper[4606]: I0123 17:18:52.786469 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw7q5\" (UniqueName: \"kubernetes.io/projected/a4ba3e7d-36a2-448f-95b1-1884b0e34d6d-kube-api-access-sw7q5\") pod \"redhat-operators-nmsgh\" (UID: \"a4ba3e7d-36a2-448f-95b1-1884b0e34d6d\") " pod="openshift-marketplace/redhat-operators-nmsgh" Jan 23 17:18:52 crc kubenswrapper[4606]: I0123 17:18:52.908554 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nmsgh" Jan 23 17:18:53 crc kubenswrapper[4606]: I0123 17:18:53.497458 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nmsgh"] Jan 23 17:18:54 crc kubenswrapper[4606]: I0123 17:18:54.115317 4606 generic.go:334] "Generic (PLEG): container finished" podID="a4ba3e7d-36a2-448f-95b1-1884b0e34d6d" containerID="6024e0301d2cc171b02c6a308e2e178fa17268b3e03919b4e434539dc02d693b" exitCode=0 Jan 23 17:18:54 crc kubenswrapper[4606]: I0123 17:18:54.115562 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmsgh" event={"ID":"a4ba3e7d-36a2-448f-95b1-1884b0e34d6d","Type":"ContainerDied","Data":"6024e0301d2cc171b02c6a308e2e178fa17268b3e03919b4e434539dc02d693b"} Jan 23 17:18:54 crc kubenswrapper[4606]: I0123 17:18:54.115868 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmsgh" event={"ID":"a4ba3e7d-36a2-448f-95b1-1884b0e34d6d","Type":"ContainerStarted","Data":"9e1dc7a201c6fbf771414bba9f990c91101c03238f206209fcf721eedeb4cd45"} Jan 23 17:18:54 crc kubenswrapper[4606]: I0123 17:18:54.376126 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ldlc4" Jan 23 17:18:54 crc kubenswrapper[4606]: I0123 17:18:54.376164 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ldlc4" Jan 23 17:18:54 crc kubenswrapper[4606]: I0123 17:18:54.442019 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ldlc4" Jan 23 17:18:55 crc kubenswrapper[4606]: I0123 17:18:55.125859 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmsgh" event={"ID":"a4ba3e7d-36a2-448f-95b1-1884b0e34d6d","Type":"ContainerStarted","Data":"ed64aec43a9f8b512c81fb696142f857606127f03a2d6cadc85b0ceda6eef90b"} Jan 23 17:18:55 crc kubenswrapper[4606]: I0123 17:18:55.194080 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ldlc4" Jan 23 17:18:56 crc kubenswrapper[4606]: I0123 17:18:56.759750 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ldlc4"] Jan 23 17:18:57 crc kubenswrapper[4606]: I0123 17:18:57.141621 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ldlc4" podUID="a11ad634-8ab5-421a-bed0-2e9d4676209b" containerName="registry-server" containerID="cri-o://46628fd098e5b46818a1335db8f68471179d84db5db481369063438d405366bd" gracePeriod=2 Jan 
Jan 23 17:18:58 crc kubenswrapper[4606]: I0123 17:18:58.155991 4606 generic.go:334] "Generic (PLEG): container finished" podID="a11ad634-8ab5-421a-bed0-2e9d4676209b" containerID="46628fd098e5b46818a1335db8f68471179d84db5db481369063438d405366bd" exitCode=0
Jan 23 17:18:58 crc kubenswrapper[4606]: I0123 17:18:58.156181 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ldlc4" event={"ID":"a11ad634-8ab5-421a-bed0-2e9d4676209b","Type":"ContainerDied","Data":"46628fd098e5b46818a1335db8f68471179d84db5db481369063438d405366bd"}
Jan 23 17:18:58 crc kubenswrapper[4606]: I0123 17:18:58.435022 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ldlc4"
Jan 23 17:18:58 crc kubenswrapper[4606]: I0123 17:18:58.601932 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a11ad634-8ab5-421a-bed0-2e9d4676209b-utilities\") pod \"a11ad634-8ab5-421a-bed0-2e9d4676209b\" (UID: \"a11ad634-8ab5-421a-bed0-2e9d4676209b\") "
Jan 23 17:18:58 crc kubenswrapper[4606]: I0123 17:18:58.601988 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7std4\" (UniqueName: \"kubernetes.io/projected/a11ad634-8ab5-421a-bed0-2e9d4676209b-kube-api-access-7std4\") pod \"a11ad634-8ab5-421a-bed0-2e9d4676209b\" (UID: \"a11ad634-8ab5-421a-bed0-2e9d4676209b\") "
Jan 23 17:18:58 crc kubenswrapper[4606]: I0123 17:18:58.602133 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a11ad634-8ab5-421a-bed0-2e9d4676209b-catalog-content\") pod \"a11ad634-8ab5-421a-bed0-2e9d4676209b\" (UID: \"a11ad634-8ab5-421a-bed0-2e9d4676209b\") "
Jan 23 17:18:58 crc kubenswrapper[4606]: I0123 17:18:58.602778 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a11ad634-8ab5-421a-bed0-2e9d4676209b-utilities" (OuterVolumeSpecName: "utilities") pod "a11ad634-8ab5-421a-bed0-2e9d4676209b" (UID: "a11ad634-8ab5-421a-bed0-2e9d4676209b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 17:18:58 crc kubenswrapper[4606]: I0123 17:18:58.610153 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a11ad634-8ab5-421a-bed0-2e9d4676209b-kube-api-access-7std4" (OuterVolumeSpecName: "kube-api-access-7std4") pod "a11ad634-8ab5-421a-bed0-2e9d4676209b" (UID: "a11ad634-8ab5-421a-bed0-2e9d4676209b"). InnerVolumeSpecName "kube-api-access-7std4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 17:18:58 crc kubenswrapper[4606]: I0123 17:18:58.648412 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a11ad634-8ab5-421a-bed0-2e9d4676209b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a11ad634-8ab5-421a-bed0-2e9d4676209b" (UID: "a11ad634-8ab5-421a-bed0-2e9d4676209b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:18:58 crc kubenswrapper[4606]: I0123 17:18:58.705133 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a11ad634-8ab5-421a-bed0-2e9d4676209b-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 17:18:58 crc kubenswrapper[4606]: I0123 17:18:58.705190 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7std4\" (UniqueName: \"kubernetes.io/projected/a11ad634-8ab5-421a-bed0-2e9d4676209b-kube-api-access-7std4\") on node \"crc\" DevicePath \"\"" Jan 23 17:18:58 crc kubenswrapper[4606]: I0123 17:18:58.705212 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a11ad634-8ab5-421a-bed0-2e9d4676209b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 17:18:59 crc kubenswrapper[4606]: I0123 17:18:59.168965 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ldlc4" event={"ID":"a11ad634-8ab5-421a-bed0-2e9d4676209b","Type":"ContainerDied","Data":"9b44853fedd3a8910e48e744739fd66649abcd3d3d3bef94064a61ee20c845ed"} Jan 23 17:18:59 crc kubenswrapper[4606]: I0123 17:18:59.169028 4606 scope.go:117] "RemoveContainer" containerID="46628fd098e5b46818a1335db8f68471179d84db5db481369063438d405366bd" Jan 23 17:18:59 crc kubenswrapper[4606]: I0123 17:18:59.169041 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ldlc4" Jan 23 17:18:59 crc kubenswrapper[4606]: I0123 17:18:59.171060 4606 generic.go:334] "Generic (PLEG): container finished" podID="a4ba3e7d-36a2-448f-95b1-1884b0e34d6d" containerID="ed64aec43a9f8b512c81fb696142f857606127f03a2d6cadc85b0ceda6eef90b" exitCode=0 Jan 23 17:18:59 crc kubenswrapper[4606]: I0123 17:18:59.171103 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmsgh" event={"ID":"a4ba3e7d-36a2-448f-95b1-1884b0e34d6d","Type":"ContainerDied","Data":"ed64aec43a9f8b512c81fb696142f857606127f03a2d6cadc85b0ceda6eef90b"} Jan 23 17:18:59 crc kubenswrapper[4606]: I0123 17:18:59.193249 4606 scope.go:117] "RemoveContainer" containerID="ad6defd3d7d0726e5a80110ad5acb9ce0c3a2894da769d90e73d9cc79bd5b18b" Jan 23 17:18:59 crc kubenswrapper[4606]: I0123 17:18:59.223499 4606 scope.go:117] "RemoveContainer" containerID="485eefa8c6fe96bcc31303d0c1e92c185edd1089c5c4198f4d546d1b9de27ec5" Jan 23 17:18:59 crc kubenswrapper[4606]: I0123 17:18:59.234467 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ldlc4"] Jan 23 17:18:59 crc kubenswrapper[4606]: I0123 17:18:59.243305 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ldlc4"] Jan 23 17:19:00 crc kubenswrapper[4606]: I0123 17:19:00.184566 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmsgh" event={"ID":"a4ba3e7d-36a2-448f-95b1-1884b0e34d6d","Type":"ContainerStarted","Data":"655b1e9fc79058eb7100b2ce906a2baec31b45a6b31ac4b286ccf967813ced9f"} Jan 23 17:19:00 crc kubenswrapper[4606]: I0123 17:19:00.204827 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nmsgh" podStartSLOduration=2.611766599 podStartE2EDuration="8.20481081s" podCreationTimestamp="2026-01-23 17:18:52 +0000 UTC" firstStartedPulling="2026-01-23 17:18:54.11723307 +0000 UTC m=+1568.117425476" 
lastFinishedPulling="2026-01-23 17:18:59.710277281 +0000 UTC m=+1573.710469687" observedRunningTime="2026-01-23 17:19:00.199948097 +0000 UTC m=+1574.200140513" watchObservedRunningTime="2026-01-23 17:19:00.20481081 +0000 UTC m=+1574.205003216" Jan 23 17:19:00 crc kubenswrapper[4606]: I0123 17:19:00.376047 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a11ad634-8ab5-421a-bed0-2e9d4676209b" path="/var/lib/kubelet/pods/a11ad634-8ab5-421a-bed0-2e9d4676209b/volumes" Jan 23 17:19:00 crc kubenswrapper[4606]: I0123 17:19:00.501906 4606 scope.go:117] "RemoveContainer" containerID="e0cf457c58f3eb3910d1d31809dfc8d57ff9d3352906e28f3c4812f48b094a81" Jan 23 17:19:00 crc kubenswrapper[4606]: I0123 17:19:00.524761 4606 scope.go:117] "RemoveContainer" containerID="bb7f06b01e7e5e09d8e4e53028c4353da5e2342cb9885ad66f52c65d0d270548" Jan 23 17:19:00 crc kubenswrapper[4606]: I0123 17:19:00.579733 4606 scope.go:117] "RemoveContainer" containerID="df7eff4fb32434271a3179b69c570023bde89eabcf4fb741c3602715a1973513" Jan 23 17:19:00 crc kubenswrapper[4606]: I0123 17:19:00.630868 4606 scope.go:117] "RemoveContainer" containerID="3a2507c4212bf20231472ac983f6a351db9035bc564b6fdb674d5cea519a7d84" Jan 23 17:19:00 crc kubenswrapper[4606]: I0123 17:19:00.696301 4606 scope.go:117] "RemoveContainer" containerID="c23acda3913c6695422060f03a2bab72b64a63f464400a07869227cdf6a3812c" Jan 23 17:19:00 crc kubenswrapper[4606]: I0123 17:19:00.783042 4606 scope.go:117] "RemoveContainer" containerID="b053a1cca7853d500963d30d9d52ba513f27dc44de30d7766f3e887879dc16d3" Jan 23 17:19:00 crc kubenswrapper[4606]: I0123 17:19:00.830403 4606 scope.go:117] "RemoveContainer" containerID="b9deb28f85c4331045a754c0dc7ac2295114339c3e68181ecb457655349478af" Jan 23 17:19:00 crc kubenswrapper[4606]: I0123 17:19:00.857233 4606 scope.go:117] "RemoveContainer" containerID="cb7489f0a801d1f56f3056dd0bc4a1d84bdf237032db8c20e01d7e6d36da210c" Jan 23 17:19:02 crc kubenswrapper[4606]: I0123 17:19:02.908745 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nmsgh" Jan 23 17:19:02 crc kubenswrapper[4606]: I0123 17:19:02.909507 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nmsgh" Jan 23 17:19:03 crc kubenswrapper[4606]: I0123 17:19:03.957794 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nmsgh" podUID="a4ba3e7d-36a2-448f-95b1-1884b0e34d6d" containerName="registry-server" probeResult="failure" output=< Jan 23 17:19:03 crc kubenswrapper[4606]: timeout: failed to connect service ":50051" within 1s Jan 23 17:19:03 crc kubenswrapper[4606]: > Jan 23 17:19:07 crc kubenswrapper[4606]: I0123 17:19:07.040515 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-72r74"] Jan 23 17:19:07 crc kubenswrapper[4606]: I0123 17:19:07.049533 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-72r74"] Jan 23 17:19:08 crc kubenswrapper[4606]: I0123 17:19:08.039909 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-mp29k"] Jan 23 17:19:08 crc kubenswrapper[4606]: I0123 17:19:08.046633 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-2hhhx"] Jan 23 17:19:08 crc kubenswrapper[4606]: I0123 17:19:08.052871 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-mp29k"] Jan 23 17:19:08 crc kubenswrapper[4606]: 
Jan 23 17:19:08 crc kubenswrapper[4606]: I0123 17:19:08.067651 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-a05b-account-create-update-s2mk9"]
Jan 23 17:19:08 crc kubenswrapper[4606]: I0123 17:19:08.077949 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-fa10-account-create-update-xmwcv"]
Jan 23 17:19:08 crc kubenswrapper[4606]: I0123 17:19:08.084813 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-a05b-account-create-update-s2mk9"]
Jan 23 17:19:08 crc kubenswrapper[4606]: I0123 17:19:08.091184 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-fa10-account-create-update-xmwcv"]
Jan 23 17:19:08 crc kubenswrapper[4606]: I0123 17:19:08.097668 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-2hhhx"]
Jan 23 17:19:08 crc kubenswrapper[4606]: I0123 17:19:08.104177 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-979e-account-create-update-xlx74"]
Jan 23 17:19:08 crc kubenswrapper[4606]: I0123 17:19:08.112074 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-zhbb7"]
Jan 23 17:19:08 crc kubenswrapper[4606]: I0123 17:19:08.120654 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-zhbb7"]
Jan 23 17:19:08 crc kubenswrapper[4606]: I0123 17:19:08.379926 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="224fe191-a024-4e50-8a8a-8b46aef66847" path="/var/lib/kubelet/pods/224fe191-a024-4e50-8a8a-8b46aef66847/volumes"
Jan 23 17:19:08 crc kubenswrapper[4606]: I0123 17:19:08.381349 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="414e99fb-9e21-431e-8296-e4f89723f459" path="/var/lib/kubelet/pods/414e99fb-9e21-431e-8296-e4f89723f459/volumes"
Jan 23 17:19:08 crc kubenswrapper[4606]: I0123 17:19:08.382522 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e3ad36a-2eab-4985-beea-6483c3faae7d" path="/var/lib/kubelet/pods/4e3ad36a-2eab-4985-beea-6483c3faae7d/volumes"
Jan 23 17:19:08 crc kubenswrapper[4606]: I0123 17:19:08.383631 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b67c3811-645f-4d09-b577-56a82395ec4c" path="/var/lib/kubelet/pods/b67c3811-645f-4d09-b577-56a82395ec4c/volumes"
Jan 23 17:19:08 crc kubenswrapper[4606]: I0123 17:19:08.385064 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6039461-4f70-4ddd-992f-778a1e7e6fb8" path="/var/lib/kubelet/pods/c6039461-4f70-4ddd-992f-778a1e7e6fb8/volumes"
Jan 23 17:19:08 crc kubenswrapper[4606]: I0123 17:19:08.386269 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d613fd23-da43-482f-945b-e72c58d9b691" path="/var/lib/kubelet/pods/d613fd23-da43-482f-945b-e72c58d9b691/volumes"
Jan 23 17:19:08 crc kubenswrapper[4606]: I0123 17:19:08.387141 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f79e7e45-faf1-4f97-a135-ce69d2bb7529" path="/var/lib/kubelet/pods/f79e7e45-faf1-4f97-a135-ce69d2bb7529/volumes"
Jan 23 17:19:12 crc kubenswrapper[4606]: I0123 17:19:12.955437 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nmsgh"
Jan 23 17:19:13 crc kubenswrapper[4606]: I0123 17:19:13.020053 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nmsgh"
Jan 23 17:19:13 crc kubenswrapper[4606]: I0123 17:19:13.197820 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nmsgh"]
Jan 23 17:19:14 crc kubenswrapper[4606]: I0123 17:19:14.300505 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nmsgh" podUID="a4ba3e7d-36a2-448f-95b1-1884b0e34d6d" containerName="registry-server" containerID="cri-o://655b1e9fc79058eb7100b2ce906a2baec31b45a6b31ac4b286ccf967813ced9f" gracePeriod=2
Jan 23 17:19:14 crc kubenswrapper[4606]: I0123 17:19:14.765382 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nmsgh"
Jan 23 17:19:14 crc kubenswrapper[4606]: I0123 17:19:14.920677 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4ba3e7d-36a2-448f-95b1-1884b0e34d6d-utilities\") pod \"a4ba3e7d-36a2-448f-95b1-1884b0e34d6d\" (UID: \"a4ba3e7d-36a2-448f-95b1-1884b0e34d6d\") "
Jan 23 17:19:14 crc kubenswrapper[4606]: I0123 17:19:14.920785 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sw7q5\" (UniqueName: \"kubernetes.io/projected/a4ba3e7d-36a2-448f-95b1-1884b0e34d6d-kube-api-access-sw7q5\") pod \"a4ba3e7d-36a2-448f-95b1-1884b0e34d6d\" (UID: \"a4ba3e7d-36a2-448f-95b1-1884b0e34d6d\") "
Jan 23 17:19:14 crc kubenswrapper[4606]: I0123 17:19:14.920975 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4ba3e7d-36a2-448f-95b1-1884b0e34d6d-catalog-content\") pod \"a4ba3e7d-36a2-448f-95b1-1884b0e34d6d\" (UID: \"a4ba3e7d-36a2-448f-95b1-1884b0e34d6d\") "
Jan 23 17:19:14 crc kubenswrapper[4606]: I0123 17:19:14.921930 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4ba3e7d-36a2-448f-95b1-1884b0e34d6d-utilities" (OuterVolumeSpecName: "utilities") pod "a4ba3e7d-36a2-448f-95b1-1884b0e34d6d" (UID: "a4ba3e7d-36a2-448f-95b1-1884b0e34d6d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 17:19:14 crc kubenswrapper[4606]: I0123 17:19:14.926775 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4ba3e7d-36a2-448f-95b1-1884b0e34d6d-kube-api-access-sw7q5" (OuterVolumeSpecName: "kube-api-access-sw7q5") pod "a4ba3e7d-36a2-448f-95b1-1884b0e34d6d" (UID: "a4ba3e7d-36a2-448f-95b1-1884b0e34d6d"). InnerVolumeSpecName "kube-api-access-sw7q5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 17:19:15 crc kubenswrapper[4606]: I0123 17:19:15.023032 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sw7q5\" (UniqueName: \"kubernetes.io/projected/a4ba3e7d-36a2-448f-95b1-1884b0e34d6d-kube-api-access-sw7q5\") on node \"crc\" DevicePath \"\""
Jan 23 17:19:15 crc kubenswrapper[4606]: I0123 17:19:15.023083 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4ba3e7d-36a2-448f-95b1-1884b0e34d6d-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 17:19:15 crc kubenswrapper[4606]: I0123 17:19:15.050102 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4ba3e7d-36a2-448f-95b1-1884b0e34d6d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a4ba3e7d-36a2-448f-95b1-1884b0e34d6d" (UID: "a4ba3e7d-36a2-448f-95b1-1884b0e34d6d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 17:19:15 crc kubenswrapper[4606]: I0123 17:19:15.124849 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4ba3e7d-36a2-448f-95b1-1884b0e34d6d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 17:19:15 crc kubenswrapper[4606]: I0123 17:19:15.312051 4606 generic.go:334] "Generic (PLEG): container finished" podID="a4ba3e7d-36a2-448f-95b1-1884b0e34d6d" containerID="655b1e9fc79058eb7100b2ce906a2baec31b45a6b31ac4b286ccf967813ced9f" exitCode=0
Jan 23 17:19:15 crc kubenswrapper[4606]: I0123 17:19:15.312111 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmsgh" event={"ID":"a4ba3e7d-36a2-448f-95b1-1884b0e34d6d","Type":"ContainerDied","Data":"655b1e9fc79058eb7100b2ce906a2baec31b45a6b31ac4b286ccf967813ced9f"}
Jan 23 17:19:15 crc kubenswrapper[4606]: I0123 17:19:15.312184 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmsgh" event={"ID":"a4ba3e7d-36a2-448f-95b1-1884b0e34d6d","Type":"ContainerDied","Data":"9e1dc7a201c6fbf771414bba9f990c91101c03238f206209fcf721eedeb4cd45"}
Jan 23 17:19:15 crc kubenswrapper[4606]: I0123 17:19:15.312207 4606 scope.go:117] "RemoveContainer" containerID="655b1e9fc79058eb7100b2ce906a2baec31b45a6b31ac4b286ccf967813ced9f"
Jan 23 17:19:15 crc kubenswrapper[4606]: I0123 17:19:15.313050 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nmsgh"
Jan 23 17:19:15 crc kubenswrapper[4606]: I0123 17:19:15.345062 4606 scope.go:117] "RemoveContainer" containerID="ed64aec43a9f8b512c81fb696142f857606127f03a2d6cadc85b0ceda6eef90b"
Jan 23 17:19:15 crc kubenswrapper[4606]: I0123 17:19:15.347137 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nmsgh"]
Jan 23 17:19:15 crc kubenswrapper[4606]: I0123 17:19:15.357400 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nmsgh"]
Jan 23 17:19:15 crc kubenswrapper[4606]: I0123 17:19:15.378744 4606 scope.go:117] "RemoveContainer" containerID="6024e0301d2cc171b02c6a308e2e178fa17268b3e03919b4e434539dc02d693b"
Jan 23 17:19:15 crc kubenswrapper[4606]: I0123 17:19:15.413976 4606 scope.go:117] "RemoveContainer" containerID="655b1e9fc79058eb7100b2ce906a2baec31b45a6b31ac4b286ccf967813ced9f"
Jan 23 17:19:15 crc kubenswrapper[4606]: E0123 17:19:15.414460 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"655b1e9fc79058eb7100b2ce906a2baec31b45a6b31ac4b286ccf967813ced9f\": container with ID starting with 655b1e9fc79058eb7100b2ce906a2baec31b45a6b31ac4b286ccf967813ced9f not found: ID does not exist" containerID="655b1e9fc79058eb7100b2ce906a2baec31b45a6b31ac4b286ccf967813ced9f"
Jan 23 17:19:15 crc kubenswrapper[4606]: I0123 17:19:15.414499 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"655b1e9fc79058eb7100b2ce906a2baec31b45a6b31ac4b286ccf967813ced9f"} err="failed to get container status \"655b1e9fc79058eb7100b2ce906a2baec31b45a6b31ac4b286ccf967813ced9f\": rpc error: code = NotFound desc = could not find container \"655b1e9fc79058eb7100b2ce906a2baec31b45a6b31ac4b286ccf967813ced9f\": container with ID starting with 655b1e9fc79058eb7100b2ce906a2baec31b45a6b31ac4b286ccf967813ced9f not found: ID does not exist"
Jan 23 17:19:15 crc kubenswrapper[4606]: I0123 17:19:15.414524 4606 scope.go:117] "RemoveContainer" containerID="ed64aec43a9f8b512c81fb696142f857606127f03a2d6cadc85b0ceda6eef90b"
Jan 23 17:19:15 crc kubenswrapper[4606]: E0123 17:19:15.414835 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed64aec43a9f8b512c81fb696142f857606127f03a2d6cadc85b0ceda6eef90b\": container with ID starting with ed64aec43a9f8b512c81fb696142f857606127f03a2d6cadc85b0ceda6eef90b not found: ID does not exist" containerID="ed64aec43a9f8b512c81fb696142f857606127f03a2d6cadc85b0ceda6eef90b"
Jan 23 17:19:15 crc kubenswrapper[4606]: I0123 17:19:15.414905 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed64aec43a9f8b512c81fb696142f857606127f03a2d6cadc85b0ceda6eef90b"} err="failed to get container status \"ed64aec43a9f8b512c81fb696142f857606127f03a2d6cadc85b0ceda6eef90b\": rpc error: code = NotFound desc = could not find container \"ed64aec43a9f8b512c81fb696142f857606127f03a2d6cadc85b0ceda6eef90b\": container with ID starting with ed64aec43a9f8b512c81fb696142f857606127f03a2d6cadc85b0ceda6eef90b not found: ID does not exist"
Jan 23 17:19:15 crc kubenswrapper[4606]: I0123 17:19:15.414935 4606 scope.go:117] "RemoveContainer" containerID="6024e0301d2cc171b02c6a308e2e178fa17268b3e03919b4e434539dc02d693b"
Jan 23 17:19:15 crc kubenswrapper[4606]: E0123 17:19:15.415165 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6024e0301d2cc171b02c6a308e2e178fa17268b3e03919b4e434539dc02d693b\": container with ID starting with 6024e0301d2cc171b02c6a308e2e178fa17268b3e03919b4e434539dc02d693b not found: ID does not exist" containerID="6024e0301d2cc171b02c6a308e2e178fa17268b3e03919b4e434539dc02d693b"
Jan 23 17:19:15 crc kubenswrapper[4606]: I0123 17:19:15.415193 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6024e0301d2cc171b02c6a308e2e178fa17268b3e03919b4e434539dc02d693b"} err="failed to get container status \"6024e0301d2cc171b02c6a308e2e178fa17268b3e03919b4e434539dc02d693b\": rpc error: code = NotFound desc = could not find container \"6024e0301d2cc171b02c6a308e2e178fa17268b3e03919b4e434539dc02d693b\": container with ID starting with 6024e0301d2cc171b02c6a308e2e178fa17268b3e03919b4e434539dc02d693b not found: ID does not exist"
Jan 23 17:19:16 crc kubenswrapper[4606]: I0123 17:19:16.380417 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4ba3e7d-36a2-448f-95b1-1884b0e34d6d" path="/var/lib/kubelet/pods/a4ba3e7d-36a2-448f-95b1-1884b0e34d6d/volumes"
Jan 23 17:19:17 crc kubenswrapper[4606]: I0123 17:19:17.034980 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-qhnq4"]
Jan 23 17:19:17 crc kubenswrapper[4606]: I0123 17:19:17.049125 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-qhnq4"]
Jan 23 17:19:18 crc kubenswrapper[4606]: I0123 17:19:18.377984 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21c1c7ff-8437-4c18-ab38-7b6908fbe35c" path="/var/lib/kubelet/pods/21c1c7ff-8437-4c18-ab38-7b6908fbe35c/volumes"
Jan 23 17:19:19 crc kubenswrapper[4606]: I0123 17:19:19.064448 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-684bn"]
Jan 23 17:19:19 crc kubenswrapper[4606]: E0123 17:19:19.065148 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a11ad634-8ab5-421a-bed0-2e9d4676209b" containerName="registry-server"
Jan 23 17:19:19 crc kubenswrapper[4606]: I0123 17:19:19.065167 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="a11ad634-8ab5-421a-bed0-2e9d4676209b" containerName="registry-server"
Jan 23 17:19:19 crc kubenswrapper[4606]: E0123 17:19:19.065193 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a11ad634-8ab5-421a-bed0-2e9d4676209b" containerName="extract-content"
Jan 23 17:19:19 crc kubenswrapper[4606]: I0123 17:19:19.065199 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="a11ad634-8ab5-421a-bed0-2e9d4676209b" containerName="extract-content"
Jan 23 17:19:19 crc kubenswrapper[4606]: E0123 17:19:19.065215 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4ba3e7d-36a2-448f-95b1-1884b0e34d6d" containerName="registry-server"
Jan 23 17:19:19 crc kubenswrapper[4606]: I0123 17:19:19.065221 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4ba3e7d-36a2-448f-95b1-1884b0e34d6d" containerName="registry-server"
Jan 23 17:19:19 crc kubenswrapper[4606]: E0123 17:19:19.065233 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4ba3e7d-36a2-448f-95b1-1884b0e34d6d" containerName="extract-content"
Jan 23 17:19:19 crc kubenswrapper[4606]: I0123 17:19:19.065239 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4ba3e7d-36a2-448f-95b1-1884b0e34d6d" containerName="extract-content"
Jan 23 17:19:19 crc kubenswrapper[4606]: E0123 17:19:19.065250 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a11ad634-8ab5-421a-bed0-2e9d4676209b" containerName="extract-utilities"
Jan 23 17:19:19 crc kubenswrapper[4606]: I0123 17:19:19.065256 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="a11ad634-8ab5-421a-bed0-2e9d4676209b" containerName="extract-utilities"
Jan 23 17:19:19 crc kubenswrapper[4606]: E0123 17:19:19.065267 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4ba3e7d-36a2-448f-95b1-1884b0e34d6d" containerName="extract-utilities"
Jan 23 17:19:19 crc kubenswrapper[4606]: I0123 17:19:19.065273 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4ba3e7d-36a2-448f-95b1-1884b0e34d6d" containerName="extract-utilities"
Jan 23 17:19:19 crc kubenswrapper[4606]: I0123 17:19:19.065471 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="a11ad634-8ab5-421a-bed0-2e9d4676209b" containerName="registry-server"
Jan 23 17:19:19 crc kubenswrapper[4606]: I0123 17:19:19.065493 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4ba3e7d-36a2-448f-95b1-1884b0e34d6d" containerName="registry-server"
Jan 23 17:19:19 crc kubenswrapper[4606]: I0123 17:19:19.066888 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-684bn"
Jan 23 17:19:19 crc kubenswrapper[4606]: I0123 17:19:19.084018 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-684bn"]
Jan 23 17:19:19 crc kubenswrapper[4606]: I0123 17:19:19.205431 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12fce7e9-2d4b-4e58-b762-2db78a0747ab-catalog-content\") pod \"community-operators-684bn\" (UID: \"12fce7e9-2d4b-4e58-b762-2db78a0747ab\") " pod="openshift-marketplace/community-operators-684bn"
Jan 23 17:19:19 crc kubenswrapper[4606]: I0123 17:19:19.205497 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmwd4\" (UniqueName: \"kubernetes.io/projected/12fce7e9-2d4b-4e58-b762-2db78a0747ab-kube-api-access-qmwd4\") pod \"community-operators-684bn\" (UID: \"12fce7e9-2d4b-4e58-b762-2db78a0747ab\") " pod="openshift-marketplace/community-operators-684bn"
Jan 23 17:19:19 crc kubenswrapper[4606]: I0123 17:19:19.205574 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12fce7e9-2d4b-4e58-b762-2db78a0747ab-utilities\") pod \"community-operators-684bn\" (UID: \"12fce7e9-2d4b-4e58-b762-2db78a0747ab\") " pod="openshift-marketplace/community-operators-684bn"
Jan 23 17:19:19 crc kubenswrapper[4606]: I0123 17:19:19.307127 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12fce7e9-2d4b-4e58-b762-2db78a0747ab-catalog-content\") pod \"community-operators-684bn\" (UID: \"12fce7e9-2d4b-4e58-b762-2db78a0747ab\") " pod="openshift-marketplace/community-operators-684bn"
Jan 23 17:19:19 crc kubenswrapper[4606]: I0123 17:19:19.307188 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmwd4\" (UniqueName: \"kubernetes.io/projected/12fce7e9-2d4b-4e58-b762-2db78a0747ab-kube-api-access-qmwd4\") pod \"community-operators-684bn\" (UID: \"12fce7e9-2d4b-4e58-b762-2db78a0747ab\") " pod="openshift-marketplace/community-operators-684bn"
pod="openshift-marketplace/community-operators-684bn" Jan 23 17:19:19 crc kubenswrapper[4606]: I0123 17:19:19.307284 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12fce7e9-2d4b-4e58-b762-2db78a0747ab-utilities\") pod \"community-operators-684bn\" (UID: \"12fce7e9-2d4b-4e58-b762-2db78a0747ab\") " pod="openshift-marketplace/community-operators-684bn" Jan 23 17:19:19 crc kubenswrapper[4606]: I0123 17:19:19.307795 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12fce7e9-2d4b-4e58-b762-2db78a0747ab-utilities\") pod \"community-operators-684bn\" (UID: \"12fce7e9-2d4b-4e58-b762-2db78a0747ab\") " pod="openshift-marketplace/community-operators-684bn" Jan 23 17:19:19 crc kubenswrapper[4606]: I0123 17:19:19.307795 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12fce7e9-2d4b-4e58-b762-2db78a0747ab-catalog-content\") pod \"community-operators-684bn\" (UID: \"12fce7e9-2d4b-4e58-b762-2db78a0747ab\") " pod="openshift-marketplace/community-operators-684bn" Jan 23 17:19:19 crc kubenswrapper[4606]: I0123 17:19:19.326050 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmwd4\" (UniqueName: \"kubernetes.io/projected/12fce7e9-2d4b-4e58-b762-2db78a0747ab-kube-api-access-qmwd4\") pod \"community-operators-684bn\" (UID: \"12fce7e9-2d4b-4e58-b762-2db78a0747ab\") " pod="openshift-marketplace/community-operators-684bn" Jan 23 17:19:19 crc kubenswrapper[4606]: I0123 17:19:19.405855 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-684bn" Jan 23 17:19:19 crc kubenswrapper[4606]: I0123 17:19:19.924421 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-684bn"] Jan 23 17:19:20 crc kubenswrapper[4606]: I0123 17:19:20.358430 4606 generic.go:334] "Generic (PLEG): container finished" podID="12fce7e9-2d4b-4e58-b762-2db78a0747ab" containerID="053b310930f771f0740bea5e4f3bdc4f9d5ec9aab735f7a461c72381af264b97" exitCode=0 Jan 23 17:19:20 crc kubenswrapper[4606]: I0123 17:19:20.358487 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-684bn" event={"ID":"12fce7e9-2d4b-4e58-b762-2db78a0747ab","Type":"ContainerDied","Data":"053b310930f771f0740bea5e4f3bdc4f9d5ec9aab735f7a461c72381af264b97"} Jan 23 17:19:20 crc kubenswrapper[4606]: I0123 17:19:20.358695 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-684bn" event={"ID":"12fce7e9-2d4b-4e58-b762-2db78a0747ab","Type":"ContainerStarted","Data":"1ef287507b058a5fd35e1da7f7347402a74b2ebccf8f9c0c9b081289b72efe2c"} Jan 23 17:19:22 crc kubenswrapper[4606]: I0123 17:19:22.382602 4606 generic.go:334] "Generic (PLEG): container finished" podID="12fce7e9-2d4b-4e58-b762-2db78a0747ab" containerID="ae4af3c972acb19eb9d074eec4689cf19ba0b191bdae3e92254a843a3ea3d429" exitCode=0 Jan 23 17:19:22 crc kubenswrapper[4606]: I0123 17:19:22.383106 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-684bn" event={"ID":"12fce7e9-2d4b-4e58-b762-2db78a0747ab","Type":"ContainerDied","Data":"ae4af3c972acb19eb9d074eec4689cf19ba0b191bdae3e92254a843a3ea3d429"} Jan 23 17:19:23 crc kubenswrapper[4606]: I0123 17:19:23.394458 4606 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-684bn" event={"ID":"12fce7e9-2d4b-4e58-b762-2db78a0747ab","Type":"ContainerStarted","Data":"f99b659d89b2643650c8a94cb9f4b9c4444215f29ef256aab283c583471667d5"} Jan 23 17:19:23 crc kubenswrapper[4606]: I0123 17:19:23.424115 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-684bn" podStartSLOduration=1.997445803 podStartE2EDuration="4.424092531s" podCreationTimestamp="2026-01-23 17:19:19 +0000 UTC" firstStartedPulling="2026-01-23 17:19:20.361387635 +0000 UTC m=+1594.361580041" lastFinishedPulling="2026-01-23 17:19:22.788034363 +0000 UTC m=+1596.788226769" observedRunningTime="2026-01-23 17:19:23.416451205 +0000 UTC m=+1597.416643621" watchObservedRunningTime="2026-01-23 17:19:23.424092531 +0000 UTC m=+1597.424284947" Jan 23 17:19:29 crc kubenswrapper[4606]: I0123 17:19:29.406371 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-684bn" Jan 23 17:19:29 crc kubenswrapper[4606]: I0123 17:19:29.406824 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-684bn" Jan 23 17:19:29 crc kubenswrapper[4606]: I0123 17:19:29.457493 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-684bn" Jan 23 17:19:29 crc kubenswrapper[4606]: I0123 17:19:29.512935 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-684bn" Jan 23 17:19:29 crc kubenswrapper[4606]: I0123 17:19:29.710432 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-684bn"] Jan 23 17:19:31 crc kubenswrapper[4606]: I0123 17:19:31.470985 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-684bn" podUID="12fce7e9-2d4b-4e58-b762-2db78a0747ab" containerName="registry-server" containerID="cri-o://f99b659d89b2643650c8a94cb9f4b9c4444215f29ef256aab283c583471667d5" gracePeriod=2 Jan 23 17:19:31 crc kubenswrapper[4606]: I0123 17:19:31.964944 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-684bn" Jan 23 17:19:31 crc kubenswrapper[4606]: I0123 17:19:31.981688 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmwd4\" (UniqueName: \"kubernetes.io/projected/12fce7e9-2d4b-4e58-b762-2db78a0747ab-kube-api-access-qmwd4\") pod \"12fce7e9-2d4b-4e58-b762-2db78a0747ab\" (UID: \"12fce7e9-2d4b-4e58-b762-2db78a0747ab\") " Jan 23 17:19:31 crc kubenswrapper[4606]: I0123 17:19:31.981866 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12fce7e9-2d4b-4e58-b762-2db78a0747ab-utilities\") pod \"12fce7e9-2d4b-4e58-b762-2db78a0747ab\" (UID: \"12fce7e9-2d4b-4e58-b762-2db78a0747ab\") " Jan 23 17:19:31 crc kubenswrapper[4606]: I0123 17:19:31.981971 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12fce7e9-2d4b-4e58-b762-2db78a0747ab-catalog-content\") pod \"12fce7e9-2d4b-4e58-b762-2db78a0747ab\" (UID: \"12fce7e9-2d4b-4e58-b762-2db78a0747ab\") " Jan 23 17:19:31 crc kubenswrapper[4606]: I0123 17:19:31.982761 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12fce7e9-2d4b-4e58-b762-2db78a0747ab-utilities" (OuterVolumeSpecName: "utilities") pod "12fce7e9-2d4b-4e58-b762-2db78a0747ab" (UID: "12fce7e9-2d4b-4e58-b762-2db78a0747ab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:19:31 crc kubenswrapper[4606]: I0123 17:19:31.989891 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12fce7e9-2d4b-4e58-b762-2db78a0747ab-kube-api-access-qmwd4" (OuterVolumeSpecName: "kube-api-access-qmwd4") pod "12fce7e9-2d4b-4e58-b762-2db78a0747ab" (UID: "12fce7e9-2d4b-4e58-b762-2db78a0747ab"). InnerVolumeSpecName "kube-api-access-qmwd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:19:32 crc kubenswrapper[4606]: I0123 17:19:32.039781 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12fce7e9-2d4b-4e58-b762-2db78a0747ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "12fce7e9-2d4b-4e58-b762-2db78a0747ab" (UID: "12fce7e9-2d4b-4e58-b762-2db78a0747ab"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:19:32 crc kubenswrapper[4606]: I0123 17:19:32.083394 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12fce7e9-2d4b-4e58-b762-2db78a0747ab-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 17:19:32 crc kubenswrapper[4606]: I0123 17:19:32.083427 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmwd4\" (UniqueName: \"kubernetes.io/projected/12fce7e9-2d4b-4e58-b762-2db78a0747ab-kube-api-access-qmwd4\") on node \"crc\" DevicePath \"\"" Jan 23 17:19:32 crc kubenswrapper[4606]: I0123 17:19:32.083437 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12fce7e9-2d4b-4e58-b762-2db78a0747ab-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 17:19:32 crc kubenswrapper[4606]: I0123 17:19:32.484399 4606 generic.go:334] "Generic (PLEG): container finished" podID="12fce7e9-2d4b-4e58-b762-2db78a0747ab" containerID="f99b659d89b2643650c8a94cb9f4b9c4444215f29ef256aab283c583471667d5" exitCode=0 Jan 23 17:19:32 crc kubenswrapper[4606]: I0123 17:19:32.484452 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-684bn" event={"ID":"12fce7e9-2d4b-4e58-b762-2db78a0747ab","Type":"ContainerDied","Data":"f99b659d89b2643650c8a94cb9f4b9c4444215f29ef256aab283c583471667d5"} Jan 23 17:19:32 crc kubenswrapper[4606]: I0123 17:19:32.484487 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-684bn" event={"ID":"12fce7e9-2d4b-4e58-b762-2db78a0747ab","Type":"ContainerDied","Data":"1ef287507b058a5fd35e1da7f7347402a74b2ebccf8f9c0c9b081289b72efe2c"} Jan 23 17:19:32 crc kubenswrapper[4606]: I0123 17:19:32.484507 4606 scope.go:117] "RemoveContainer" containerID="f99b659d89b2643650c8a94cb9f4b9c4444215f29ef256aab283c583471667d5" Jan 23 17:19:32 crc kubenswrapper[4606]: I0123 17:19:32.484687 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-684bn" Jan 23 17:19:32 crc kubenswrapper[4606]: I0123 17:19:32.517922 4606 scope.go:117] "RemoveContainer" containerID="ae4af3c972acb19eb9d074eec4689cf19ba0b191bdae3e92254a843a3ea3d429" Jan 23 17:19:32 crc kubenswrapper[4606]: I0123 17:19:32.521887 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-684bn"] Jan 23 17:19:32 crc kubenswrapper[4606]: I0123 17:19:32.533016 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-684bn"] Jan 23 17:19:32 crc kubenswrapper[4606]: I0123 17:19:32.538812 4606 scope.go:117] "RemoveContainer" containerID="053b310930f771f0740bea5e4f3bdc4f9d5ec9aab735f7a461c72381af264b97" Jan 23 17:19:32 crc kubenswrapper[4606]: I0123 17:19:32.580740 4606 scope.go:117] "RemoveContainer" containerID="f99b659d89b2643650c8a94cb9f4b9c4444215f29ef256aab283c583471667d5" Jan 23 17:19:32 crc kubenswrapper[4606]: E0123 17:19:32.581185 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f99b659d89b2643650c8a94cb9f4b9c4444215f29ef256aab283c583471667d5\": container with ID starting with f99b659d89b2643650c8a94cb9f4b9c4444215f29ef256aab283c583471667d5 not found: ID does not exist" containerID="f99b659d89b2643650c8a94cb9f4b9c4444215f29ef256aab283c583471667d5" Jan 23 17:19:32 crc kubenswrapper[4606]: I0123 17:19:32.581233 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f99b659d89b2643650c8a94cb9f4b9c4444215f29ef256aab283c583471667d5"} err="failed to get container status \"f99b659d89b2643650c8a94cb9f4b9c4444215f29ef256aab283c583471667d5\": rpc error: code = NotFound desc = could not find container \"f99b659d89b2643650c8a94cb9f4b9c4444215f29ef256aab283c583471667d5\": container with ID starting with f99b659d89b2643650c8a94cb9f4b9c4444215f29ef256aab283c583471667d5 not found: ID does not exist" Jan 23 17:19:32 crc kubenswrapper[4606]: I0123 17:19:32.581265 4606 scope.go:117] "RemoveContainer" containerID="ae4af3c972acb19eb9d074eec4689cf19ba0b191bdae3e92254a843a3ea3d429" Jan 23 17:19:32 crc kubenswrapper[4606]: E0123 17:19:32.581701 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae4af3c972acb19eb9d074eec4689cf19ba0b191bdae3e92254a843a3ea3d429\": container with ID starting with ae4af3c972acb19eb9d074eec4689cf19ba0b191bdae3e92254a843a3ea3d429 not found: ID does not exist" containerID="ae4af3c972acb19eb9d074eec4689cf19ba0b191bdae3e92254a843a3ea3d429" Jan 23 17:19:32 crc kubenswrapper[4606]: I0123 17:19:32.581738 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae4af3c972acb19eb9d074eec4689cf19ba0b191bdae3e92254a843a3ea3d429"} err="failed to get container status \"ae4af3c972acb19eb9d074eec4689cf19ba0b191bdae3e92254a843a3ea3d429\": rpc error: code = NotFound desc = could not find container \"ae4af3c972acb19eb9d074eec4689cf19ba0b191bdae3e92254a843a3ea3d429\": container with ID starting with ae4af3c972acb19eb9d074eec4689cf19ba0b191bdae3e92254a843a3ea3d429 not found: ID does not exist" Jan 23 17:19:32 crc kubenswrapper[4606]: I0123 17:19:32.581765 4606 scope.go:117] "RemoveContainer" containerID="053b310930f771f0740bea5e4f3bdc4f9d5ec9aab735f7a461c72381af264b97" Jan 23 17:19:32 crc kubenswrapper[4606]: E0123 17:19:32.582378 4606 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"053b310930f771f0740bea5e4f3bdc4f9d5ec9aab735f7a461c72381af264b97\": container with ID starting with 053b310930f771f0740bea5e4f3bdc4f9d5ec9aab735f7a461c72381af264b97 not found: ID does not exist" containerID="053b310930f771f0740bea5e4f3bdc4f9d5ec9aab735f7a461c72381af264b97" Jan 23 17:19:32 crc kubenswrapper[4606]: I0123 17:19:32.582406 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"053b310930f771f0740bea5e4f3bdc4f9d5ec9aab735f7a461c72381af264b97"} err="failed to get container status \"053b310930f771f0740bea5e4f3bdc4f9d5ec9aab735f7a461c72381af264b97\": rpc error: code = NotFound desc = could not find container \"053b310930f771f0740bea5e4f3bdc4f9d5ec9aab735f7a461c72381af264b97\": container with ID starting with 053b310930f771f0740bea5e4f3bdc4f9d5ec9aab735f7a461c72381af264b97 not found: ID does not exist" Jan 23 17:19:34 crc kubenswrapper[4606]: I0123 17:19:34.379355 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12fce7e9-2d4b-4e58-b762-2db78a0747ab" path="/var/lib/kubelet/pods/12fce7e9-2d4b-4e58-b762-2db78a0747ab/volumes" Jan 23 17:19:54 crc kubenswrapper[4606]: I0123 17:19:54.047463 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-5vzbb"] Jan 23 17:19:54 crc kubenswrapper[4606]: I0123 17:19:54.057933 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-5vzbb"] Jan 23 17:19:54 crc kubenswrapper[4606]: I0123 17:19:54.397964 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b1b5e49-bdba-4463-b025-fc30bedd63f0" path="/var/lib/kubelet/pods/5b1b5e49-bdba-4463-b025-fc30bedd63f0/volumes" Jan 23 17:19:59 crc kubenswrapper[4606]: I0123 17:19:59.033335 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-bmv5b"] Jan 23 17:19:59 crc kubenswrapper[4606]: I0123 17:19:59.041498 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-bmv5b"] Jan 23 17:19:59 crc kubenswrapper[4606]: I0123 17:19:59.934023 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:19:59 crc kubenswrapper[4606]: I0123 17:19:59.934316 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:20:00 crc kubenswrapper[4606]: I0123 17:20:00.378823 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb6d384e-69de-4bd3-bb7b-bf9346614fce" path="/var/lib/kubelet/pods/fb6d384e-69de-4bd3-bb7b-bf9346614fce/volumes" Jan 23 17:20:01 crc kubenswrapper[4606]: I0123 17:20:01.073138 4606 scope.go:117] "RemoveContainer" containerID="02b7431c789b147be7ec6dfadae75a2b1b3016107f2a2b2dc530d465d61f68ef" Jan 23 17:20:01 crc kubenswrapper[4606]: I0123 17:20:01.105828 4606 scope.go:117] "RemoveContainer" containerID="7b6728a5d45196df7500a0fec2d6ea8cb55ab4fc05dfcb2a8369dfdde6af8743" Jan 23 17:20:01 crc kubenswrapper[4606]: I0123 17:20:01.166729 4606 scope.go:117] "RemoveContainer" 
containerID="f5a35f4a31f2db0c926042d0e165f78ba5ced4fc693ed5b73c6b4e0440d6c6bb" Jan 23 17:20:01 crc kubenswrapper[4606]: I0123 17:20:01.214498 4606 scope.go:117] "RemoveContainer" containerID="559f2bb5f722470fe194d641485850589f46a17a6cdcea4eecb03e939cbc8c97" Jan 23 17:20:01 crc kubenswrapper[4606]: I0123 17:20:01.267241 4606 scope.go:117] "RemoveContainer" containerID="7facb45390c548164fd91699de258f3b275995c36e5391edf94c935531598be3" Jan 23 17:20:01 crc kubenswrapper[4606]: I0123 17:20:01.336568 4606 scope.go:117] "RemoveContainer" containerID="b41d4e5cce1a1b43643708e0920cfc05a42b495fbe4dd38acf3882001122df28" Jan 23 17:20:01 crc kubenswrapper[4606]: I0123 17:20:01.367712 4606 scope.go:117] "RemoveContainer" containerID="bf4537c8d80fc80349cb42fba3a80597830cf9272e8d6ccad67ac6585ff13c8e" Jan 23 17:20:01 crc kubenswrapper[4606]: I0123 17:20:01.388916 4606 scope.go:117] "RemoveContainer" containerID="8a79ce64d8335aaa5997dac66e74b427527b879b31f0d97aef6f310c0cafb834" Jan 23 17:20:01 crc kubenswrapper[4606]: I0123 17:20:01.415519 4606 scope.go:117] "RemoveContainer" containerID="724b55a2f44e36d87e571bb11a83d51f056013d9dbca6eedee2c0b16cf4e41da" Jan 23 17:20:01 crc kubenswrapper[4606]: I0123 17:20:01.448095 4606 scope.go:117] "RemoveContainer" containerID="4976c32fda720e49b1364a258f2e7fb82b748a5315e6a3549f23d8ac3a6333a2" Jan 23 17:20:10 crc kubenswrapper[4606]: I0123 17:20:10.044340 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-bfvh4"] Jan 23 17:20:10 crc kubenswrapper[4606]: I0123 17:20:10.054045 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-bfvh4"] Jan 23 17:20:10 crc kubenswrapper[4606]: I0123 17:20:10.379141 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="139f2d63-a754-454b-8fe5-fb33c30ec2bf" path="/var/lib/kubelet/pods/139f2d63-a754-454b-8fe5-fb33c30ec2bf/volumes" Jan 23 17:20:17 crc kubenswrapper[4606]: I0123 17:20:17.030486 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-knccn"] Jan 23 17:20:17 crc kubenswrapper[4606]: I0123 17:20:17.042737 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-knccn"] Jan 23 17:20:18 crc kubenswrapper[4606]: I0123 17:20:18.377949 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f" path="/var/lib/kubelet/pods/9fcf1ef4-b045-4fe1-9fd0-3d81d3bd8d1f/volumes" Jan 23 17:20:22 crc kubenswrapper[4606]: I0123 17:20:22.030441 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-tjf95"] Jan 23 17:20:22 crc kubenswrapper[4606]: I0123 17:20:22.037712 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-tjf95"] Jan 23 17:20:22 crc kubenswrapper[4606]: I0123 17:20:22.377012 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f5144cd-47f2-4ce5-81d6-68884dbfc739" path="/var/lib/kubelet/pods/3f5144cd-47f2-4ce5-81d6-68884dbfc739/volumes" Jan 23 17:20:24 crc kubenswrapper[4606]: I0123 17:20:24.036573 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-lzz5x"] Jan 23 17:20:24 crc kubenswrapper[4606]: I0123 17:20:24.045544 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-lzz5x"] Jan 23 17:20:24 crc kubenswrapper[4606]: I0123 17:20:24.379608 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cb98288-9db7-4a8f-9b1d-1d57a547062b" 
path="/var/lib/kubelet/pods/4cb98288-9db7-4a8f-9b1d-1d57a547062b/volumes" Jan 23 17:20:29 crc kubenswrapper[4606]: I0123 17:20:29.933772 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:20:29 crc kubenswrapper[4606]: I0123 17:20:29.934303 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:20:52 crc kubenswrapper[4606]: I0123 17:20:52.181270 4606 generic.go:334] "Generic (PLEG): container finished" podID="c268f2de-72d2-457d-b7e4-192af4f43fb8" containerID="3f7de9c60950726787720465719a63855bee5a254c49c63555390d4d48b27172" exitCode=0 Jan 23 17:20:52 crc kubenswrapper[4606]: I0123 17:20:52.181400 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wpbtf" event={"ID":"c268f2de-72d2-457d-b7e4-192af4f43fb8","Type":"ContainerDied","Data":"3f7de9c60950726787720465719a63855bee5a254c49c63555390d4d48b27172"} Jan 23 17:20:53 crc kubenswrapper[4606]: I0123 17:20:53.613751 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wpbtf" Jan 23 17:20:53 crc kubenswrapper[4606]: I0123 17:20:53.744378 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c268f2de-72d2-457d-b7e4-192af4f43fb8-ssh-key-openstack-edpm-ipam\") pod \"c268f2de-72d2-457d-b7e4-192af4f43fb8\" (UID: \"c268f2de-72d2-457d-b7e4-192af4f43fb8\") " Jan 23 17:20:53 crc kubenswrapper[4606]: I0123 17:20:53.744608 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpgm9\" (UniqueName: \"kubernetes.io/projected/c268f2de-72d2-457d-b7e4-192af4f43fb8-kube-api-access-vpgm9\") pod \"c268f2de-72d2-457d-b7e4-192af4f43fb8\" (UID: \"c268f2de-72d2-457d-b7e4-192af4f43fb8\") " Jan 23 17:20:53 crc kubenswrapper[4606]: I0123 17:20:53.744674 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c268f2de-72d2-457d-b7e4-192af4f43fb8-inventory\") pod \"c268f2de-72d2-457d-b7e4-192af4f43fb8\" (UID: \"c268f2de-72d2-457d-b7e4-192af4f43fb8\") " Jan 23 17:20:53 crc kubenswrapper[4606]: I0123 17:20:53.751869 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c268f2de-72d2-457d-b7e4-192af4f43fb8-kube-api-access-vpgm9" (OuterVolumeSpecName: "kube-api-access-vpgm9") pod "c268f2de-72d2-457d-b7e4-192af4f43fb8" (UID: "c268f2de-72d2-457d-b7e4-192af4f43fb8"). InnerVolumeSpecName "kube-api-access-vpgm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:20:53 crc kubenswrapper[4606]: I0123 17:20:53.773014 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c268f2de-72d2-457d-b7e4-192af4f43fb8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c268f2de-72d2-457d-b7e4-192af4f43fb8" (UID: "c268f2de-72d2-457d-b7e4-192af4f43fb8"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:20:53 crc kubenswrapper[4606]: I0123 17:20:53.772942 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c268f2de-72d2-457d-b7e4-192af4f43fb8-inventory" (OuterVolumeSpecName: "inventory") pod "c268f2de-72d2-457d-b7e4-192af4f43fb8" (UID: "c268f2de-72d2-457d-b7e4-192af4f43fb8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:20:53 crc kubenswrapper[4606]: I0123 17:20:53.846640 4606 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c268f2de-72d2-457d-b7e4-192af4f43fb8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 17:20:53 crc kubenswrapper[4606]: I0123 17:20:53.846675 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpgm9\" (UniqueName: \"kubernetes.io/projected/c268f2de-72d2-457d-b7e4-192af4f43fb8-kube-api-access-vpgm9\") on node \"crc\" DevicePath \"\"" Jan 23 17:20:53 crc kubenswrapper[4606]: I0123 17:20:53.846685 4606 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c268f2de-72d2-457d-b7e4-192af4f43fb8-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 17:20:54 crc kubenswrapper[4606]: I0123 17:20:54.201103 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wpbtf" event={"ID":"c268f2de-72d2-457d-b7e4-192af4f43fb8","Type":"ContainerDied","Data":"d67a07aaec04851472010218a2e65ffc3242afb82ba0b416e7d3756e003daf1f"} Jan 23 17:20:54 crc kubenswrapper[4606]: I0123 17:20:54.201435 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d67a07aaec04851472010218a2e65ffc3242afb82ba0b416e7d3756e003daf1f" Jan 23 17:20:54 crc kubenswrapper[4606]: I0123 17:20:54.201183 4606 util.go:48] "No ready sandbox for pod can be found. 
Jan 23 17:20:54 crc kubenswrapper[4606]: I0123 17:20:54.324771 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nnzrn"]
Jan 23 17:20:54 crc kubenswrapper[4606]: E0123 17:20:54.325272 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12fce7e9-2d4b-4e58-b762-2db78a0747ab" containerName="extract-content"
Jan 23 17:20:54 crc kubenswrapper[4606]: I0123 17:20:54.325299 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="12fce7e9-2d4b-4e58-b762-2db78a0747ab" containerName="extract-content"
Jan 23 17:20:54 crc kubenswrapper[4606]: E0123 17:20:54.325336 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12fce7e9-2d4b-4e58-b762-2db78a0747ab" containerName="extract-utilities"
Jan 23 17:20:54 crc kubenswrapper[4606]: I0123 17:20:54.325347 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="12fce7e9-2d4b-4e58-b762-2db78a0747ab" containerName="extract-utilities"
Jan 23 17:20:54 crc kubenswrapper[4606]: E0123 17:20:54.325359 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12fce7e9-2d4b-4e58-b762-2db78a0747ab" containerName="registry-server"
Jan 23 17:20:54 crc kubenswrapper[4606]: I0123 17:20:54.325368 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="12fce7e9-2d4b-4e58-b762-2db78a0747ab" containerName="registry-server"
Jan 23 17:20:54 crc kubenswrapper[4606]: E0123 17:20:54.325393 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c268f2de-72d2-457d-b7e4-192af4f43fb8" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Jan 23 17:20:54 crc kubenswrapper[4606]: I0123 17:20:54.325402 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="c268f2de-72d2-457d-b7e4-192af4f43fb8" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Jan 23 17:20:54 crc kubenswrapper[4606]: I0123 17:20:54.325649 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="c268f2de-72d2-457d-b7e4-192af4f43fb8" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Jan 23 17:20:54 crc kubenswrapper[4606]: I0123 17:20:54.325683 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="12fce7e9-2d4b-4e58-b762-2db78a0747ab" containerName="registry-server"
Jan 23 17:20:54 crc kubenswrapper[4606]: I0123 17:20:54.326470 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nnzrn"
Jan 23 17:20:54 crc kubenswrapper[4606]: I0123 17:20:54.328454 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8tp68"
Jan 23 17:20:54 crc kubenswrapper[4606]: I0123 17:20:54.328769 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 23 17:20:54 crc kubenswrapper[4606]: I0123 17:20:54.328966 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 23 17:20:54 crc kubenswrapper[4606]: I0123 17:20:54.329211 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 23 17:20:54 crc kubenswrapper[4606]: I0123 17:20:54.339756 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nnzrn"]
Jan 23 17:20:54 crc kubenswrapper[4606]: I0123 17:20:54.463508 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m2jz\" (UniqueName: \"kubernetes.io/projected/f7c7e859-304f-4f42-b29a-968644679668-kube-api-access-2m2jz\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-nnzrn\" (UID: \"f7c7e859-304f-4f42-b29a-968644679668\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nnzrn"
Jan 23 17:20:54 crc kubenswrapper[4606]: I0123 17:20:54.464117 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7c7e859-304f-4f42-b29a-968644679668-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-nnzrn\" (UID: \"f7c7e859-304f-4f42-b29a-968644679668\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nnzrn"
Jan 23 17:20:54 crc kubenswrapper[4606]: I0123 17:20:54.464414 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7c7e859-304f-4f42-b29a-968644679668-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-nnzrn\" (UID: \"f7c7e859-304f-4f42-b29a-968644679668\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nnzrn"
Jan 23 17:20:54 crc kubenswrapper[4606]: I0123 17:20:54.566072 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m2jz\" (UniqueName: \"kubernetes.io/projected/f7c7e859-304f-4f42-b29a-968644679668-kube-api-access-2m2jz\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-nnzrn\" (UID: \"f7c7e859-304f-4f42-b29a-968644679668\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nnzrn"
Jan 23 17:20:54 crc kubenswrapper[4606]: I0123 17:20:54.566164 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7c7e859-304f-4f42-b29a-968644679668-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-nnzrn\" (UID: \"f7c7e859-304f-4f42-b29a-968644679668\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nnzrn"
Jan 23 17:20:54 crc kubenswrapper[4606]: I0123 17:20:54.566287 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7c7e859-304f-4f42-b29a-968644679668-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-nnzrn\" (UID: \"f7c7e859-304f-4f42-b29a-968644679668\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nnzrn"
Jan 23 17:20:54 crc kubenswrapper[4606]: I0123 17:20:54.570286 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7c7e859-304f-4f42-b29a-968644679668-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-nnzrn\" (UID: \"f7c7e859-304f-4f42-b29a-968644679668\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nnzrn"
Jan 23 17:20:54 crc kubenswrapper[4606]: I0123 17:20:54.572852 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7c7e859-304f-4f42-b29a-968644679668-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-nnzrn\" (UID: \"f7c7e859-304f-4f42-b29a-968644679668\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nnzrn"
Jan 23 17:20:54 crc kubenswrapper[4606]: I0123 17:20:54.584303 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m2jz\" (UniqueName: \"kubernetes.io/projected/f7c7e859-304f-4f42-b29a-968644679668-kube-api-access-2m2jz\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-nnzrn\" (UID: \"f7c7e859-304f-4f42-b29a-968644679668\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nnzrn"
Jan 23 17:20:54 crc kubenswrapper[4606]: I0123 17:20:54.646640 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nnzrn"
Jan 23 17:20:55 crc kubenswrapper[4606]: I0123 17:20:55.172810 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nnzrn"]
Jan 23 17:20:55 crc kubenswrapper[4606]: I0123 17:20:55.211256 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nnzrn" event={"ID":"f7c7e859-304f-4f42-b29a-968644679668","Type":"ContainerStarted","Data":"a3d6a34676b79eb4cf3dccd482620b2b11a829d0b6212f2cea4da41f5a291fb1"}
Jan 23 17:20:56 crc kubenswrapper[4606]: I0123 17:20:56.224165 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nnzrn" event={"ID":"f7c7e859-304f-4f42-b29a-968644679668","Type":"ContainerStarted","Data":"d47ecc65e2a05bc0a78c1e809fc4fb8dc9c59818005a411cc301b5f3408b938a"}
Jan 23 17:20:56 crc kubenswrapper[4606]: I0123 17:20:56.252795 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nnzrn" podStartSLOduration=1.835354071 podStartE2EDuration="2.252768115s" podCreationTimestamp="2026-01-23 17:20:54 +0000 UTC" firstStartedPulling="2026-01-23 17:20:55.178824935 +0000 UTC m=+1689.179017341" lastFinishedPulling="2026-01-23 17:20:55.596238979 +0000 UTC m=+1689.596431385" observedRunningTime="2026-01-23 17:20:56.251176106 +0000 UTC m=+1690.251368562" watchObservedRunningTime="2026-01-23 17:20:56.252768115 +0000 UTC m=+1690.252960521"
Jan 23 17:20:59 crc kubenswrapper[4606]: I0123 17:20:59.934536 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:20:59 crc kubenswrapper[4606]: I0123 17:20:59.935060 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:20:59 crc kubenswrapper[4606]: I0123 17:20:59.935109 4606 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" Jan 23 17:20:59 crc kubenswrapper[4606]: I0123 17:20:59.935786 4606 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7b51d444863ebb98e0cb740e7f913e5f2b808074bf1adeb1ef35356e37050c12"} pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 17:20:59 crc kubenswrapper[4606]: I0123 17:20:59.935860 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" containerID="cri-o://7b51d444863ebb98e0cb740e7f913e5f2b808074bf1adeb1ef35356e37050c12" gracePeriod=600 Jan 23 17:21:00 crc kubenswrapper[4606]: E0123 17:21:00.059974 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:21:00 crc kubenswrapper[4606]: I0123 17:21:00.265049 4606 generic.go:334] "Generic (PLEG): container finished" podID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerID="7b51d444863ebb98e0cb740e7f913e5f2b808074bf1adeb1ef35356e37050c12" exitCode=0 Jan 23 17:21:00 crc kubenswrapper[4606]: I0123 17:21:00.265097 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerDied","Data":"7b51d444863ebb98e0cb740e7f913e5f2b808074bf1adeb1ef35356e37050c12"} Jan 23 17:21:00 crc kubenswrapper[4606]: I0123 17:21:00.265130 4606 scope.go:117] "RemoveContainer" containerID="ebadbc612659691d0364633624880b271d3a98bbea2b5aa4b6e90c782af998b4" Jan 23 17:21:00 crc kubenswrapper[4606]: I0123 17:21:00.265800 4606 scope.go:117] "RemoveContainer" containerID="7b51d444863ebb98e0cb740e7f913e5f2b808074bf1adeb1ef35356e37050c12" Jan 23 17:21:00 crc kubenswrapper[4606]: E0123 17:21:00.266117 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" 
podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:21:01 crc kubenswrapper[4606]: I0123 17:21:01.697434 4606 scope.go:117] "RemoveContainer" containerID="6bf66b0648928ec3da6406e29a4b4815276089c8eeb8cacd1ec04a5038cf556b" Jan 23 17:21:01 crc kubenswrapper[4606]: I0123 17:21:01.735929 4606 scope.go:117] "RemoveContainer" containerID="8c97855e228239eb4102ba560d1b1d4c16dfd13eabe648f22a0475e5746e2e17" Jan 23 17:21:01 crc kubenswrapper[4606]: I0123 17:21:01.805256 4606 scope.go:117] "RemoveContainer" containerID="bfd49bb10524308a057dddd25e9a9a4339af84dcd8a64c4cffe93ed05d806a38" Jan 23 17:21:01 crc kubenswrapper[4606]: I0123 17:21:01.853365 4606 scope.go:117] "RemoveContainer" containerID="c943a34586c3106f2095d0f74dadf0913d2d539bc0caa98725104d0537bf40fe" Jan 23 17:21:12 crc kubenswrapper[4606]: I0123 17:21:12.365738 4606 scope.go:117] "RemoveContainer" containerID="7b51d444863ebb98e0cb740e7f913e5f2b808074bf1adeb1ef35356e37050c12" Jan 23 17:21:12 crc kubenswrapper[4606]: E0123 17:21:12.366490 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:21:12 crc kubenswrapper[4606]: I0123 17:21:12.585173 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q7fsh"] Jan 23 17:21:12 crc kubenswrapper[4606]: I0123 17:21:12.588043 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q7fsh" Jan 23 17:21:12 crc kubenswrapper[4606]: I0123 17:21:12.598937 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7fsh"] Jan 23 17:21:12 crc kubenswrapper[4606]: I0123 17:21:12.724693 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e364cf00-0c3f-4e84-8480-bcc48a939cf7-utilities\") pod \"redhat-marketplace-q7fsh\" (UID: \"e364cf00-0c3f-4e84-8480-bcc48a939cf7\") " pod="openshift-marketplace/redhat-marketplace-q7fsh" Jan 23 17:21:12 crc kubenswrapper[4606]: I0123 17:21:12.724992 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e364cf00-0c3f-4e84-8480-bcc48a939cf7-catalog-content\") pod \"redhat-marketplace-q7fsh\" (UID: \"e364cf00-0c3f-4e84-8480-bcc48a939cf7\") " pod="openshift-marketplace/redhat-marketplace-q7fsh" Jan 23 17:21:12 crc kubenswrapper[4606]: I0123 17:21:12.725036 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfpj7\" (UniqueName: \"kubernetes.io/projected/e364cf00-0c3f-4e84-8480-bcc48a939cf7-kube-api-access-sfpj7\") pod \"redhat-marketplace-q7fsh\" (UID: \"e364cf00-0c3f-4e84-8480-bcc48a939cf7\") " pod="openshift-marketplace/redhat-marketplace-q7fsh" Jan 23 17:21:12 crc kubenswrapper[4606]: I0123 17:21:12.826381 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e364cf00-0c3f-4e84-8480-bcc48a939cf7-catalog-content\") pod \"redhat-marketplace-q7fsh\" (UID: \"e364cf00-0c3f-4e84-8480-bcc48a939cf7\") " 
pod="openshift-marketplace/redhat-marketplace-q7fsh" Jan 23 17:21:12 crc kubenswrapper[4606]: I0123 17:21:12.826433 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfpj7\" (UniqueName: \"kubernetes.io/projected/e364cf00-0c3f-4e84-8480-bcc48a939cf7-kube-api-access-sfpj7\") pod \"redhat-marketplace-q7fsh\" (UID: \"e364cf00-0c3f-4e84-8480-bcc48a939cf7\") " pod="openshift-marketplace/redhat-marketplace-q7fsh" Jan 23 17:21:12 crc kubenswrapper[4606]: I0123 17:21:12.826559 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e364cf00-0c3f-4e84-8480-bcc48a939cf7-utilities\") pod \"redhat-marketplace-q7fsh\" (UID: \"e364cf00-0c3f-4e84-8480-bcc48a939cf7\") " pod="openshift-marketplace/redhat-marketplace-q7fsh" Jan 23 17:21:12 crc kubenswrapper[4606]: I0123 17:21:12.827254 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e364cf00-0c3f-4e84-8480-bcc48a939cf7-catalog-content\") pod \"redhat-marketplace-q7fsh\" (UID: \"e364cf00-0c3f-4e84-8480-bcc48a939cf7\") " pod="openshift-marketplace/redhat-marketplace-q7fsh" Jan 23 17:21:12 crc kubenswrapper[4606]: I0123 17:21:12.827372 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e364cf00-0c3f-4e84-8480-bcc48a939cf7-utilities\") pod \"redhat-marketplace-q7fsh\" (UID: \"e364cf00-0c3f-4e84-8480-bcc48a939cf7\") " pod="openshift-marketplace/redhat-marketplace-q7fsh" Jan 23 17:21:12 crc kubenswrapper[4606]: I0123 17:21:12.855364 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfpj7\" (UniqueName: \"kubernetes.io/projected/e364cf00-0c3f-4e84-8480-bcc48a939cf7-kube-api-access-sfpj7\") pod \"redhat-marketplace-q7fsh\" (UID: \"e364cf00-0c3f-4e84-8480-bcc48a939cf7\") " pod="openshift-marketplace/redhat-marketplace-q7fsh" Jan 23 17:21:12 crc kubenswrapper[4606]: I0123 17:21:12.936152 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q7fsh" Jan 23 17:21:13 crc kubenswrapper[4606]: I0123 17:21:13.406075 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7fsh"] Jan 23 17:21:14 crc kubenswrapper[4606]: I0123 17:21:14.382915 4606 generic.go:334] "Generic (PLEG): container finished" podID="e364cf00-0c3f-4e84-8480-bcc48a939cf7" containerID="3355d3a6d97cdb656115ffac5f262e153eb5609dd18bbbd2b64c7034fa8367cd" exitCode=0 Jan 23 17:21:14 crc kubenswrapper[4606]: I0123 17:21:14.382994 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7fsh" event={"ID":"e364cf00-0c3f-4e84-8480-bcc48a939cf7","Type":"ContainerDied","Data":"3355d3a6d97cdb656115ffac5f262e153eb5609dd18bbbd2b64c7034fa8367cd"} Jan 23 17:21:14 crc kubenswrapper[4606]: I0123 17:21:14.383060 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7fsh" event={"ID":"e364cf00-0c3f-4e84-8480-bcc48a939cf7","Type":"ContainerStarted","Data":"2a74b1fa24bffbe20fea8eb7be6c4b1f1fc224d259d93eac7e6c2cf88927adc9"} Jan 23 17:21:16 crc kubenswrapper[4606]: I0123 17:21:16.403312 4606 generic.go:334] "Generic (PLEG): container finished" podID="e364cf00-0c3f-4e84-8480-bcc48a939cf7" containerID="d297799d9c7843bdc23d7f16ea9a6bdf658c27365924b9a2888caac22a17928f" exitCode=0 Jan 23 17:21:16 crc kubenswrapper[4606]: I0123 17:21:16.403357 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7fsh" event={"ID":"e364cf00-0c3f-4e84-8480-bcc48a939cf7","Type":"ContainerDied","Data":"d297799d9c7843bdc23d7f16ea9a6bdf658c27365924b9a2888caac22a17928f"} Jan 23 17:21:17 crc kubenswrapper[4606]: I0123 17:21:17.416774 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7fsh" event={"ID":"e364cf00-0c3f-4e84-8480-bcc48a939cf7","Type":"ContainerStarted","Data":"550baabe02dc544c4b40c82d6d3b23beb5d7bb5c9334283b423481c814486b1c"} Jan 23 17:21:17 crc kubenswrapper[4606]: I0123 17:21:17.444579 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q7fsh" podStartSLOduration=2.829796042 podStartE2EDuration="5.444562348s" podCreationTimestamp="2026-01-23 17:21:12 +0000 UTC" firstStartedPulling="2026-01-23 17:21:14.384968647 +0000 UTC m=+1708.385161053" lastFinishedPulling="2026-01-23 17:21:16.999734953 +0000 UTC m=+1710.999927359" observedRunningTime="2026-01-23 17:21:17.43890526 +0000 UTC m=+1711.439097676" watchObservedRunningTime="2026-01-23 17:21:17.444562348 +0000 UTC m=+1711.444754754" Jan 23 17:21:20 crc kubenswrapper[4606]: I0123 17:21:20.039439 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-5b37-account-create-update-pr4c4"] Jan 23 17:21:20 crc kubenswrapper[4606]: I0123 17:21:20.049347 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-5b37-account-create-update-pr4c4"] Jan 23 17:21:20 crc kubenswrapper[4606]: I0123 17:21:20.378960 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd8d7c07-f254-476e-9f2e-7e1afdeddcad" path="/var/lib/kubelet/pods/dd8d7c07-f254-476e-9f2e-7e1afdeddcad/volumes" Jan 23 17:21:21 crc kubenswrapper[4606]: I0123 17:21:21.035082 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-cx5vj"] Jan 23 17:21:21 crc kubenswrapper[4606]: I0123 17:21:21.043258 4606 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/nova-cell0-df02-account-create-update-qqvbw"] Jan 23 17:21:21 crc kubenswrapper[4606]: I0123 17:21:21.049933 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-71a8-account-create-update-k7h4q"] Jan 23 17:21:21 crc kubenswrapper[4606]: I0123 17:21:21.057217 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-c6g8c"] Jan 23 17:21:21 crc kubenswrapper[4606]: I0123 17:21:21.068086 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-6nkhr"] Jan 23 17:21:21 crc kubenswrapper[4606]: I0123 17:21:21.074913 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-df02-account-create-update-qqvbw"] Jan 23 17:21:21 crc kubenswrapper[4606]: I0123 17:21:21.081914 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-cx5vj"] Jan 23 17:21:21 crc kubenswrapper[4606]: I0123 17:21:21.088609 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-71a8-account-create-update-k7h4q"] Jan 23 17:21:21 crc kubenswrapper[4606]: I0123 17:21:21.095818 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-6nkhr"] Jan 23 17:21:21 crc kubenswrapper[4606]: I0123 17:21:21.102895 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-c6g8c"] Jan 23 17:21:22 crc kubenswrapper[4606]: I0123 17:21:22.379662 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0722c265-6ff0-4f82-93c9-f803a72040f5" path="/var/lib/kubelet/pods/0722c265-6ff0-4f82-93c9-f803a72040f5/volumes" Jan 23 17:21:22 crc kubenswrapper[4606]: I0123 17:21:22.381012 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5656c482-3ac8-450a-b302-9e8329674ca5" path="/var/lib/kubelet/pods/5656c482-3ac8-450a-b302-9e8329674ca5/volumes" Jan 23 17:21:22 crc kubenswrapper[4606]: I0123 17:21:22.382295 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a24014-72c4-4240-92ac-f0a2cc2956e2" path="/var/lib/kubelet/pods/57a24014-72c4-4240-92ac-f0a2cc2956e2/volumes" Jan 23 17:21:22 crc kubenswrapper[4606]: I0123 17:21:22.384163 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c2b6537-c2b4-45ed-86ad-6c1018119db1" path="/var/lib/kubelet/pods/5c2b6537-c2b4-45ed-86ad-6c1018119db1/volumes" Jan 23 17:21:22 crc kubenswrapper[4606]: I0123 17:21:22.386894 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8c34151-9d7b-4d74-9dcd-ef62daa52555" path="/var/lib/kubelet/pods/b8c34151-9d7b-4d74-9dcd-ef62daa52555/volumes" Jan 23 17:21:22 crc kubenswrapper[4606]: I0123 17:21:22.936945 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q7fsh" Jan 23 17:21:22 crc kubenswrapper[4606]: I0123 17:21:22.936981 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q7fsh" Jan 23 17:21:22 crc kubenswrapper[4606]: I0123 17:21:22.993138 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q7fsh" Jan 23 17:21:23 crc kubenswrapper[4606]: I0123 17:21:23.541080 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q7fsh" Jan 23 17:21:23 crc kubenswrapper[4606]: I0123 17:21:23.586844 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-q7fsh"] Jan 23 17:21:24 crc kubenswrapper[4606]: I0123 17:21:24.367132 4606 scope.go:117] "RemoveContainer" containerID="7b51d444863ebb98e0cb740e7f913e5f2b808074bf1adeb1ef35356e37050c12" Jan 23 17:21:24 crc kubenswrapper[4606]: E0123 17:21:24.367818 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:21:25 crc kubenswrapper[4606]: I0123 17:21:25.481615 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q7fsh" podUID="e364cf00-0c3f-4e84-8480-bcc48a939cf7" containerName="registry-server" containerID="cri-o://550baabe02dc544c4b40c82d6d3b23beb5d7bb5c9334283b423481c814486b1c" gracePeriod=2 Jan 23 17:21:25 crc kubenswrapper[4606]: I0123 17:21:25.936701 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q7fsh" Jan 23 17:21:26 crc kubenswrapper[4606]: I0123 17:21:26.104899 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e364cf00-0c3f-4e84-8480-bcc48a939cf7-catalog-content\") pod \"e364cf00-0c3f-4e84-8480-bcc48a939cf7\" (UID: \"e364cf00-0c3f-4e84-8480-bcc48a939cf7\") " Jan 23 17:21:26 crc kubenswrapper[4606]: I0123 17:21:26.105099 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfpj7\" (UniqueName: \"kubernetes.io/projected/e364cf00-0c3f-4e84-8480-bcc48a939cf7-kube-api-access-sfpj7\") pod \"e364cf00-0c3f-4e84-8480-bcc48a939cf7\" (UID: \"e364cf00-0c3f-4e84-8480-bcc48a939cf7\") " Jan 23 17:21:26 crc kubenswrapper[4606]: I0123 17:21:26.105131 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e364cf00-0c3f-4e84-8480-bcc48a939cf7-utilities\") pod \"e364cf00-0c3f-4e84-8480-bcc48a939cf7\" (UID: \"e364cf00-0c3f-4e84-8480-bcc48a939cf7\") " Jan 23 17:21:26 crc kubenswrapper[4606]: I0123 17:21:26.106120 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e364cf00-0c3f-4e84-8480-bcc48a939cf7-utilities" (OuterVolumeSpecName: "utilities") pod "e364cf00-0c3f-4e84-8480-bcc48a939cf7" (UID: "e364cf00-0c3f-4e84-8480-bcc48a939cf7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:21:26 crc kubenswrapper[4606]: I0123 17:21:26.115547 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e364cf00-0c3f-4e84-8480-bcc48a939cf7-kube-api-access-sfpj7" (OuterVolumeSpecName: "kube-api-access-sfpj7") pod "e364cf00-0c3f-4e84-8480-bcc48a939cf7" (UID: "e364cf00-0c3f-4e84-8480-bcc48a939cf7"). InnerVolumeSpecName "kube-api-access-sfpj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:21:26 crc kubenswrapper[4606]: I0123 17:21:26.207994 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfpj7\" (UniqueName: \"kubernetes.io/projected/e364cf00-0c3f-4e84-8480-bcc48a939cf7-kube-api-access-sfpj7\") on node \"crc\" DevicePath \"\"" Jan 23 17:21:26 crc kubenswrapper[4606]: I0123 17:21:26.208035 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e364cf00-0c3f-4e84-8480-bcc48a939cf7-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 17:21:26 crc kubenswrapper[4606]: I0123 17:21:26.250446 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e364cf00-0c3f-4e84-8480-bcc48a939cf7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e364cf00-0c3f-4e84-8480-bcc48a939cf7" (UID: "e364cf00-0c3f-4e84-8480-bcc48a939cf7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:21:26 crc kubenswrapper[4606]: I0123 17:21:26.310244 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e364cf00-0c3f-4e84-8480-bcc48a939cf7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 17:21:26 crc kubenswrapper[4606]: I0123 17:21:26.491785 4606 generic.go:334] "Generic (PLEG): container finished" podID="e364cf00-0c3f-4e84-8480-bcc48a939cf7" containerID="550baabe02dc544c4b40c82d6d3b23beb5d7bb5c9334283b423481c814486b1c" exitCode=0 Jan 23 17:21:26 crc kubenswrapper[4606]: I0123 17:21:26.491826 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q7fsh" Jan 23 17:21:26 crc kubenswrapper[4606]: I0123 17:21:26.491822 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7fsh" event={"ID":"e364cf00-0c3f-4e84-8480-bcc48a939cf7","Type":"ContainerDied","Data":"550baabe02dc544c4b40c82d6d3b23beb5d7bb5c9334283b423481c814486b1c"} Jan 23 17:21:26 crc kubenswrapper[4606]: I0123 17:21:26.491929 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7fsh" event={"ID":"e364cf00-0c3f-4e84-8480-bcc48a939cf7","Type":"ContainerDied","Data":"2a74b1fa24bffbe20fea8eb7be6c4b1f1fc224d259d93eac7e6c2cf88927adc9"} Jan 23 17:21:26 crc kubenswrapper[4606]: I0123 17:21:26.491947 4606 scope.go:117] "RemoveContainer" containerID="550baabe02dc544c4b40c82d6d3b23beb5d7bb5c9334283b423481c814486b1c" Jan 23 17:21:26 crc kubenswrapper[4606]: I0123 17:21:26.519304 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7fsh"] Jan 23 17:21:26 crc kubenswrapper[4606]: I0123 17:21:26.526261 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7fsh"] Jan 23 17:21:26 crc kubenswrapper[4606]: I0123 17:21:26.526813 4606 scope.go:117] "RemoveContainer" containerID="d297799d9c7843bdc23d7f16ea9a6bdf658c27365924b9a2888caac22a17928f" Jan 23 17:21:26 crc kubenswrapper[4606]: I0123 17:21:26.546810 4606 scope.go:117] "RemoveContainer" containerID="3355d3a6d97cdb656115ffac5f262e153eb5609dd18bbbd2b64c7034fa8367cd" Jan 23 17:21:26 crc kubenswrapper[4606]: I0123 17:21:26.590850 4606 scope.go:117] "RemoveContainer" containerID="550baabe02dc544c4b40c82d6d3b23beb5d7bb5c9334283b423481c814486b1c" Jan 23 17:21:26 crc kubenswrapper[4606]: E0123 17:21:26.591351 4606 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"550baabe02dc544c4b40c82d6d3b23beb5d7bb5c9334283b423481c814486b1c\": container with ID starting with 550baabe02dc544c4b40c82d6d3b23beb5d7bb5c9334283b423481c814486b1c not found: ID does not exist" containerID="550baabe02dc544c4b40c82d6d3b23beb5d7bb5c9334283b423481c814486b1c" Jan 23 17:21:26 crc kubenswrapper[4606]: I0123 17:21:26.591477 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"550baabe02dc544c4b40c82d6d3b23beb5d7bb5c9334283b423481c814486b1c"} err="failed to get container status \"550baabe02dc544c4b40c82d6d3b23beb5d7bb5c9334283b423481c814486b1c\": rpc error: code = NotFound desc = could not find container \"550baabe02dc544c4b40c82d6d3b23beb5d7bb5c9334283b423481c814486b1c\": container with ID starting with 550baabe02dc544c4b40c82d6d3b23beb5d7bb5c9334283b423481c814486b1c not found: ID does not exist" Jan 23 17:21:26 crc kubenswrapper[4606]: I0123 17:21:26.591605 4606 scope.go:117] "RemoveContainer" containerID="d297799d9c7843bdc23d7f16ea9a6bdf658c27365924b9a2888caac22a17928f" Jan 23 17:21:26 crc kubenswrapper[4606]: E0123 17:21:26.592207 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d297799d9c7843bdc23d7f16ea9a6bdf658c27365924b9a2888caac22a17928f\": container with ID starting with d297799d9c7843bdc23d7f16ea9a6bdf658c27365924b9a2888caac22a17928f not found: ID does not exist" containerID="d297799d9c7843bdc23d7f16ea9a6bdf658c27365924b9a2888caac22a17928f" Jan 23 17:21:26 crc kubenswrapper[4606]: I0123 17:21:26.592247 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d297799d9c7843bdc23d7f16ea9a6bdf658c27365924b9a2888caac22a17928f"} err="failed to get container status \"d297799d9c7843bdc23d7f16ea9a6bdf658c27365924b9a2888caac22a17928f\": rpc error: code = NotFound desc = could not find container \"d297799d9c7843bdc23d7f16ea9a6bdf658c27365924b9a2888caac22a17928f\": container with ID starting with d297799d9c7843bdc23d7f16ea9a6bdf658c27365924b9a2888caac22a17928f not found: ID does not exist" Jan 23 17:21:26 crc kubenswrapper[4606]: I0123 17:21:26.592275 4606 scope.go:117] "RemoveContainer" containerID="3355d3a6d97cdb656115ffac5f262e153eb5609dd18bbbd2b64c7034fa8367cd" Jan 23 17:21:26 crc kubenswrapper[4606]: E0123 17:21:26.592557 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3355d3a6d97cdb656115ffac5f262e153eb5609dd18bbbd2b64c7034fa8367cd\": container with ID starting with 3355d3a6d97cdb656115ffac5f262e153eb5609dd18bbbd2b64c7034fa8367cd not found: ID does not exist" containerID="3355d3a6d97cdb656115ffac5f262e153eb5609dd18bbbd2b64c7034fa8367cd" Jan 23 17:21:26 crc kubenswrapper[4606]: I0123 17:21:26.592601 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3355d3a6d97cdb656115ffac5f262e153eb5609dd18bbbd2b64c7034fa8367cd"} err="failed to get container status \"3355d3a6d97cdb656115ffac5f262e153eb5609dd18bbbd2b64c7034fa8367cd\": rpc error: code = NotFound desc = could not find container \"3355d3a6d97cdb656115ffac5f262e153eb5609dd18bbbd2b64c7034fa8367cd\": container with ID starting with 3355d3a6d97cdb656115ffac5f262e153eb5609dd18bbbd2b64c7034fa8367cd not found: ID does not exist" Jan 23 17:21:28 crc kubenswrapper[4606]: I0123 17:21:28.379249 4606 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="e364cf00-0c3f-4e84-8480-bcc48a939cf7" path="/var/lib/kubelet/pods/e364cf00-0c3f-4e84-8480-bcc48a939cf7/volumes" Jan 23 17:21:38 crc kubenswrapper[4606]: I0123 17:21:38.367919 4606 scope.go:117] "RemoveContainer" containerID="7b51d444863ebb98e0cb740e7f913e5f2b808074bf1adeb1ef35356e37050c12" Jan 23 17:21:38 crc kubenswrapper[4606]: E0123 17:21:38.368778 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:21:53 crc kubenswrapper[4606]: I0123 17:21:53.365961 4606 scope.go:117] "RemoveContainer" containerID="7b51d444863ebb98e0cb740e7f913e5f2b808074bf1adeb1ef35356e37050c12" Jan 23 17:21:53 crc kubenswrapper[4606]: E0123 17:21:53.366741 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:21:56 crc kubenswrapper[4606]: I0123 17:21:56.041988 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zzltv"] Jan 23 17:21:56 crc kubenswrapper[4606]: I0123 17:21:56.049955 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zzltv"] Jan 23 17:21:56 crc kubenswrapper[4606]: I0123 17:21:56.375665 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e" path="/var/lib/kubelet/pods/3e2b3369-cd8a-46c6-92e6-39b6a2cf3a9e/volumes" Jan 23 17:22:01 crc kubenswrapper[4606]: I0123 17:22:01.967326 4606 scope.go:117] "RemoveContainer" containerID="8274c717de743f8180f89766f81094c92f073a8828d2289c0d20b05bc4016d40" Jan 23 17:22:02 crc kubenswrapper[4606]: I0123 17:22:02.023504 4606 scope.go:117] "RemoveContainer" containerID="1c4bd2e85d0150c96766c21a16021460bf3a3a78cb2b27df9d9597c0008ff095" Jan 23 17:22:02 crc kubenswrapper[4606]: I0123 17:22:02.064228 4606 scope.go:117] "RemoveContainer" containerID="af1076c67c05a1a67549c3549f507c0d50f2b54d14beded7100cd22fe468c7b1" Jan 23 17:22:02 crc kubenswrapper[4606]: I0123 17:22:02.101734 4606 scope.go:117] "RemoveContainer" containerID="d162427befa56a7e9ebb3908f74f50b83f205a6781a3939b3c2bb587159d710f" Jan 23 17:22:02 crc kubenswrapper[4606]: I0123 17:22:02.145055 4606 scope.go:117] "RemoveContainer" containerID="094209db94003e883032f1317862a9da80988def9d4251696d2c3a96ca43ecb9" Jan 23 17:22:02 crc kubenswrapper[4606]: I0123 17:22:02.184160 4606 scope.go:117] "RemoveContainer" containerID="7165f4684753a99fed01b05edda90067989d4833b1cfa8538a9b79a3210e2c61" Jan 23 17:22:02 crc kubenswrapper[4606]: I0123 17:22:02.232065 4606 scope.go:117] "RemoveContainer" containerID="c009532333b43de0b4b6ddfc01b84a144096b5a31804ae6f35b4eee5a16344ee" Jan 23 17:22:08 crc kubenswrapper[4606]: I0123 17:22:08.374463 4606 scope.go:117] "RemoveContainer" containerID="7b51d444863ebb98e0cb740e7f913e5f2b808074bf1adeb1ef35356e37050c12" Jan 23 17:22:08 crc 
kubenswrapper[4606]: E0123 17:22:08.375442 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:22:12 crc kubenswrapper[4606]: I0123 17:22:12.882459 4606 generic.go:334] "Generic (PLEG): container finished" podID="f7c7e859-304f-4f42-b29a-968644679668" containerID="d47ecc65e2a05bc0a78c1e809fc4fb8dc9c59818005a411cc301b5f3408b938a" exitCode=0 Jan 23 17:22:12 crc kubenswrapper[4606]: I0123 17:22:12.882600 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nnzrn" event={"ID":"f7c7e859-304f-4f42-b29a-968644679668","Type":"ContainerDied","Data":"d47ecc65e2a05bc0a78c1e809fc4fb8dc9c59818005a411cc301b5f3408b938a"} Jan 23 17:22:14 crc kubenswrapper[4606]: I0123 17:22:14.249143 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nnzrn" Jan 23 17:22:14 crc kubenswrapper[4606]: I0123 17:22:14.274887 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7c7e859-304f-4f42-b29a-968644679668-ssh-key-openstack-edpm-ipam\") pod \"f7c7e859-304f-4f42-b29a-968644679668\" (UID: \"f7c7e859-304f-4f42-b29a-968644679668\") " Jan 23 17:22:14 crc kubenswrapper[4606]: I0123 17:22:14.275061 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7c7e859-304f-4f42-b29a-968644679668-inventory\") pod \"f7c7e859-304f-4f42-b29a-968644679668\" (UID: \"f7c7e859-304f-4f42-b29a-968644679668\") " Jan 23 17:22:14 crc kubenswrapper[4606]: I0123 17:22:14.275150 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2m2jz\" (UniqueName: \"kubernetes.io/projected/f7c7e859-304f-4f42-b29a-968644679668-kube-api-access-2m2jz\") pod \"f7c7e859-304f-4f42-b29a-968644679668\" (UID: \"f7c7e859-304f-4f42-b29a-968644679668\") " Jan 23 17:22:14 crc kubenswrapper[4606]: I0123 17:22:14.281080 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7c7e859-304f-4f42-b29a-968644679668-kube-api-access-2m2jz" (OuterVolumeSpecName: "kube-api-access-2m2jz") pod "f7c7e859-304f-4f42-b29a-968644679668" (UID: "f7c7e859-304f-4f42-b29a-968644679668"). InnerVolumeSpecName "kube-api-access-2m2jz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:22:14 crc kubenswrapper[4606]: I0123 17:22:14.309740 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7c7e859-304f-4f42-b29a-968644679668-inventory" (OuterVolumeSpecName: "inventory") pod "f7c7e859-304f-4f42-b29a-968644679668" (UID: "f7c7e859-304f-4f42-b29a-968644679668"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:22:14 crc kubenswrapper[4606]: I0123 17:22:14.321693 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7c7e859-304f-4f42-b29a-968644679668-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f7c7e859-304f-4f42-b29a-968644679668" (UID: "f7c7e859-304f-4f42-b29a-968644679668"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:22:14 crc kubenswrapper[4606]: I0123 17:22:14.377948 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2m2jz\" (UniqueName: \"kubernetes.io/projected/f7c7e859-304f-4f42-b29a-968644679668-kube-api-access-2m2jz\") on node \"crc\" DevicePath \"\"" Jan 23 17:22:14 crc kubenswrapper[4606]: I0123 17:22:14.377990 4606 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7c7e859-304f-4f42-b29a-968644679668-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 17:22:14 crc kubenswrapper[4606]: I0123 17:22:14.378004 4606 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7c7e859-304f-4f42-b29a-968644679668-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 17:22:14 crc kubenswrapper[4606]: I0123 17:22:14.900733 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nnzrn" event={"ID":"f7c7e859-304f-4f42-b29a-968644679668","Type":"ContainerDied","Data":"a3d6a34676b79eb4cf3dccd482620b2b11a829d0b6212f2cea4da41f5a291fb1"} Jan 23 17:22:14 crc kubenswrapper[4606]: I0123 17:22:14.901121 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3d6a34676b79eb4cf3dccd482620b2b11a829d0b6212f2cea4da41f5a291fb1" Jan 23 17:22:14 crc kubenswrapper[4606]: I0123 17:22:14.900798 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-nnzrn" Jan 23 17:22:15 crc kubenswrapper[4606]: I0123 17:22:15.007946 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8x6z9"] Jan 23 17:22:15 crc kubenswrapper[4606]: E0123 17:22:15.008317 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e364cf00-0c3f-4e84-8480-bcc48a939cf7" containerName="registry-server" Jan 23 17:22:15 crc kubenswrapper[4606]: I0123 17:22:15.008335 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="e364cf00-0c3f-4e84-8480-bcc48a939cf7" containerName="registry-server" Jan 23 17:22:15 crc kubenswrapper[4606]: E0123 17:22:15.008358 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e364cf00-0c3f-4e84-8480-bcc48a939cf7" containerName="extract-utilities" Jan 23 17:22:15 crc kubenswrapper[4606]: I0123 17:22:15.008365 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="e364cf00-0c3f-4e84-8480-bcc48a939cf7" containerName="extract-utilities" Jan 23 17:22:15 crc kubenswrapper[4606]: E0123 17:22:15.008381 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c7e859-304f-4f42-b29a-968644679668" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 23 17:22:15 crc kubenswrapper[4606]: I0123 17:22:15.008389 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c7e859-304f-4f42-b29a-968644679668" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 23 17:22:15 crc kubenswrapper[4606]: E0123 17:22:15.008410 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e364cf00-0c3f-4e84-8480-bcc48a939cf7" containerName="extract-content" Jan 23 17:22:15 crc kubenswrapper[4606]: I0123 17:22:15.008417 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="e364cf00-0c3f-4e84-8480-bcc48a939cf7" containerName="extract-content" Jan 23 17:22:15 crc kubenswrapper[4606]: I0123 17:22:15.008582 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="e364cf00-0c3f-4e84-8480-bcc48a939cf7" containerName="registry-server" Jan 23 17:22:15 crc kubenswrapper[4606]: I0123 17:22:15.008630 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7c7e859-304f-4f42-b29a-968644679668" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 23 17:22:15 crc kubenswrapper[4606]: I0123 17:22:15.009187 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8x6z9" Jan 23 17:22:15 crc kubenswrapper[4606]: I0123 17:22:15.017478 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 17:22:15 crc kubenswrapper[4606]: I0123 17:22:15.017910 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 17:22:15 crc kubenswrapper[4606]: I0123 17:22:15.018004 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 17:22:15 crc kubenswrapper[4606]: I0123 17:22:15.018498 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8tp68" Jan 23 17:22:15 crc kubenswrapper[4606]: I0123 17:22:15.029172 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8x6z9"] Jan 23 17:22:15 crc kubenswrapper[4606]: I0123 17:22:15.090840 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55b67\" (UniqueName: \"kubernetes.io/projected/7b50ff61-a29f-498a-9a41-294c3ad27ba5-kube-api-access-55b67\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8x6z9\" (UID: \"7b50ff61-a29f-498a-9a41-294c3ad27ba5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8x6z9" Jan 23 17:22:15 crc kubenswrapper[4606]: I0123 17:22:15.090980 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b50ff61-a29f-498a-9a41-294c3ad27ba5-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8x6z9\" (UID: \"7b50ff61-a29f-498a-9a41-294c3ad27ba5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8x6z9" Jan 23 17:22:15 crc kubenswrapper[4606]: I0123 17:22:15.091047 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b50ff61-a29f-498a-9a41-294c3ad27ba5-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8x6z9\" (UID: \"7b50ff61-a29f-498a-9a41-294c3ad27ba5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8x6z9" Jan 23 17:22:15 crc kubenswrapper[4606]: I0123 17:22:15.192713 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55b67\" (UniqueName: \"kubernetes.io/projected/7b50ff61-a29f-498a-9a41-294c3ad27ba5-kube-api-access-55b67\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8x6z9\" (UID: \"7b50ff61-a29f-498a-9a41-294c3ad27ba5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8x6z9" Jan 23 17:22:15 crc kubenswrapper[4606]: I0123 17:22:15.192797 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b50ff61-a29f-498a-9a41-294c3ad27ba5-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8x6z9\" (UID: \"7b50ff61-a29f-498a-9a41-294c3ad27ba5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8x6z9" Jan 23 17:22:15 crc kubenswrapper[4606]: I0123 17:22:15.192862 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/7b50ff61-a29f-498a-9a41-294c3ad27ba5-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8x6z9\" (UID: \"7b50ff61-a29f-498a-9a41-294c3ad27ba5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8x6z9" Jan 23 17:22:15 crc kubenswrapper[4606]: I0123 17:22:15.197729 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b50ff61-a29f-498a-9a41-294c3ad27ba5-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8x6z9\" (UID: \"7b50ff61-a29f-498a-9a41-294c3ad27ba5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8x6z9" Jan 23 17:22:15 crc kubenswrapper[4606]: I0123 17:22:15.197941 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b50ff61-a29f-498a-9a41-294c3ad27ba5-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8x6z9\" (UID: \"7b50ff61-a29f-498a-9a41-294c3ad27ba5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8x6z9" Jan 23 17:22:15 crc kubenswrapper[4606]: I0123 17:22:15.209732 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55b67\" (UniqueName: \"kubernetes.io/projected/7b50ff61-a29f-498a-9a41-294c3ad27ba5-kube-api-access-55b67\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8x6z9\" (UID: \"7b50ff61-a29f-498a-9a41-294c3ad27ba5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8x6z9" Jan 23 17:22:15 crc kubenswrapper[4606]: I0123 17:22:15.324553 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8x6z9" Jan 23 17:22:15 crc kubenswrapper[4606]: I0123 17:22:15.859747 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8x6z9"] Jan 23 17:22:15 crc kubenswrapper[4606]: I0123 17:22:15.913022 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8x6z9" event={"ID":"7b50ff61-a29f-498a-9a41-294c3ad27ba5","Type":"ContainerStarted","Data":"fbfac3c54bc41124d0ce456084442c3de497175dd9c4f1f141532726c8670e13"} Jan 23 17:22:16 crc kubenswrapper[4606]: I0123 17:22:16.923340 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8x6z9" event={"ID":"7b50ff61-a29f-498a-9a41-294c3ad27ba5","Type":"ContainerStarted","Data":"11dcd666825d764503529b628d20bd8f2d869b412d4a1521f74fd5ff2675207b"} Jan 23 17:22:16 crc kubenswrapper[4606]: I0123 17:22:16.941835 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8x6z9" podStartSLOduration=2.411615523 podStartE2EDuration="2.941814999s" podCreationTimestamp="2026-01-23 17:22:14 +0000 UTC" firstStartedPulling="2026-01-23 17:22:15.873093314 +0000 UTC m=+1769.873285720" lastFinishedPulling="2026-01-23 17:22:16.40329279 +0000 UTC m=+1770.403485196" observedRunningTime="2026-01-23 17:22:16.941397319 +0000 UTC m=+1770.941589725" watchObservedRunningTime="2026-01-23 17:22:16.941814999 +0000 UTC m=+1770.942007425" Jan 23 17:22:19 crc kubenswrapper[4606]: I0123 17:22:19.366042 4606 scope.go:117] "RemoveContainer" containerID="7b51d444863ebb98e0cb740e7f913e5f2b808074bf1adeb1ef35356e37050c12" Jan 23 
17:22:19 crc kubenswrapper[4606]: E0123 17:22:19.366584 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:22:20 crc kubenswrapper[4606]: I0123 17:22:20.037704 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-lqzlc"] Jan 23 17:22:20 crc kubenswrapper[4606]: I0123 17:22:20.046170 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-lqzlc"] Jan 23 17:22:20 crc kubenswrapper[4606]: I0123 17:22:20.381713 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cc7ebc9-1a3e-41e8-a0c3-269d425ec721" path="/var/lib/kubelet/pods/4cc7ebc9-1a3e-41e8-a0c3-269d425ec721/volumes" Jan 23 17:22:21 crc kubenswrapper[4606]: I0123 17:22:21.963891 4606 generic.go:334] "Generic (PLEG): container finished" podID="7b50ff61-a29f-498a-9a41-294c3ad27ba5" containerID="11dcd666825d764503529b628d20bd8f2d869b412d4a1521f74fd5ff2675207b" exitCode=0 Jan 23 17:22:21 crc kubenswrapper[4606]: I0123 17:22:21.964265 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8x6z9" event={"ID":"7b50ff61-a29f-498a-9a41-294c3ad27ba5","Type":"ContainerDied","Data":"11dcd666825d764503529b628d20bd8f2d869b412d4a1521f74fd5ff2675207b"} Jan 23 17:22:23 crc kubenswrapper[4606]: I0123 17:22:23.035151 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-m76hq"] Jan 23 17:22:23 crc kubenswrapper[4606]: I0123 17:22:23.042155 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-m76hq"] Jan 23 17:22:23 crc kubenswrapper[4606]: I0123 17:22:23.395951 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8x6z9" Jan 23 17:22:23 crc kubenswrapper[4606]: I0123 17:22:23.457373 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55b67\" (UniqueName: \"kubernetes.io/projected/7b50ff61-a29f-498a-9a41-294c3ad27ba5-kube-api-access-55b67\") pod \"7b50ff61-a29f-498a-9a41-294c3ad27ba5\" (UID: \"7b50ff61-a29f-498a-9a41-294c3ad27ba5\") " Jan 23 17:22:23 crc kubenswrapper[4606]: I0123 17:22:23.457555 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b50ff61-a29f-498a-9a41-294c3ad27ba5-inventory\") pod \"7b50ff61-a29f-498a-9a41-294c3ad27ba5\" (UID: \"7b50ff61-a29f-498a-9a41-294c3ad27ba5\") " Jan 23 17:22:23 crc kubenswrapper[4606]: I0123 17:22:23.457618 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b50ff61-a29f-498a-9a41-294c3ad27ba5-ssh-key-openstack-edpm-ipam\") pod \"7b50ff61-a29f-498a-9a41-294c3ad27ba5\" (UID: \"7b50ff61-a29f-498a-9a41-294c3ad27ba5\") " Jan 23 17:22:23 crc kubenswrapper[4606]: I0123 17:22:23.463337 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b50ff61-a29f-498a-9a41-294c3ad27ba5-kube-api-access-55b67" (OuterVolumeSpecName: "kube-api-access-55b67") pod "7b50ff61-a29f-498a-9a41-294c3ad27ba5" (UID: "7b50ff61-a29f-498a-9a41-294c3ad27ba5"). InnerVolumeSpecName "kube-api-access-55b67". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:22:23 crc kubenswrapper[4606]: I0123 17:22:23.495430 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b50ff61-a29f-498a-9a41-294c3ad27ba5-inventory" (OuterVolumeSpecName: "inventory") pod "7b50ff61-a29f-498a-9a41-294c3ad27ba5" (UID: "7b50ff61-a29f-498a-9a41-294c3ad27ba5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:22:23 crc kubenswrapper[4606]: I0123 17:22:23.495860 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b50ff61-a29f-498a-9a41-294c3ad27ba5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7b50ff61-a29f-498a-9a41-294c3ad27ba5" (UID: "7b50ff61-a29f-498a-9a41-294c3ad27ba5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:22:23 crc kubenswrapper[4606]: I0123 17:22:23.560138 4606 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b50ff61-a29f-498a-9a41-294c3ad27ba5-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 17:22:23 crc kubenswrapper[4606]: I0123 17:22:23.560177 4606 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b50ff61-a29f-498a-9a41-294c3ad27ba5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 17:22:23 crc kubenswrapper[4606]: I0123 17:22:23.560189 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55b67\" (UniqueName: \"kubernetes.io/projected/7b50ff61-a29f-498a-9a41-294c3ad27ba5-kube-api-access-55b67\") on node \"crc\" DevicePath \"\"" Jan 23 17:22:23 crc kubenswrapper[4606]: I0123 17:22:23.983199 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8x6z9" event={"ID":"7b50ff61-a29f-498a-9a41-294c3ad27ba5","Type":"ContainerDied","Data":"fbfac3c54bc41124d0ce456084442c3de497175dd9c4f1f141532726c8670e13"} Jan 23 17:22:23 crc kubenswrapper[4606]: I0123 17:22:23.983243 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbfac3c54bc41124d0ce456084442c3de497175dd9c4f1f141532726c8670e13" Jan 23 17:22:23 crc kubenswrapper[4606]: I0123 17:22:23.983300 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8x6z9" Jan 23 17:22:24 crc kubenswrapper[4606]: I0123 17:22:24.045438 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-8l2kh"] Jan 23 17:22:24 crc kubenswrapper[4606]: E0123 17:22:24.045958 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b50ff61-a29f-498a-9a41-294c3ad27ba5" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 23 17:22:24 crc kubenswrapper[4606]: I0123 17:22:24.045976 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b50ff61-a29f-498a-9a41-294c3ad27ba5" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 23 17:22:24 crc kubenswrapper[4606]: I0123 17:22:24.046192 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b50ff61-a29f-498a-9a41-294c3ad27ba5" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 23 17:22:24 crc kubenswrapper[4606]: I0123 17:22:24.046925 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8l2kh" Jan 23 17:22:24 crc kubenswrapper[4606]: I0123 17:22:24.049236 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 17:22:24 crc kubenswrapper[4606]: I0123 17:22:24.049932 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8tp68" Jan 23 17:22:24 crc kubenswrapper[4606]: I0123 17:22:24.051012 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 17:22:24 crc kubenswrapper[4606]: I0123 17:22:24.051441 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 17:22:24 crc kubenswrapper[4606]: I0123 17:22:24.056254 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-8l2kh"] Jan 23 17:22:24 crc kubenswrapper[4606]: I0123 17:22:24.069285 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a624f2d-7f1c-4a1d-b24c-280c35dc69d5-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8l2kh\" (UID: \"5a624f2d-7f1c-4a1d-b24c-280c35dc69d5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8l2kh" Jan 23 17:22:24 crc kubenswrapper[4606]: I0123 17:22:24.069425 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nttdq\" (UniqueName: \"kubernetes.io/projected/5a624f2d-7f1c-4a1d-b24c-280c35dc69d5-kube-api-access-nttdq\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8l2kh\" (UID: \"5a624f2d-7f1c-4a1d-b24c-280c35dc69d5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8l2kh" Jan 23 17:22:24 crc kubenswrapper[4606]: I0123 17:22:24.069496 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a624f2d-7f1c-4a1d-b24c-280c35dc69d5-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8l2kh\" (UID: \"5a624f2d-7f1c-4a1d-b24c-280c35dc69d5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8l2kh" Jan 23 17:22:24 crc kubenswrapper[4606]: I0123 17:22:24.171073 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nttdq\" (UniqueName: \"kubernetes.io/projected/5a624f2d-7f1c-4a1d-b24c-280c35dc69d5-kube-api-access-nttdq\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8l2kh\" (UID: \"5a624f2d-7f1c-4a1d-b24c-280c35dc69d5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8l2kh" Jan 23 17:22:24 crc kubenswrapper[4606]: I0123 17:22:24.171190 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a624f2d-7f1c-4a1d-b24c-280c35dc69d5-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8l2kh\" (UID: \"5a624f2d-7f1c-4a1d-b24c-280c35dc69d5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8l2kh" Jan 23 17:22:24 crc kubenswrapper[4606]: I0123 17:22:24.171258 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a624f2d-7f1c-4a1d-b24c-280c35dc69d5-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-8l2kh\" (UID: \"5a624f2d-7f1c-4a1d-b24c-280c35dc69d5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8l2kh" Jan 23 17:22:24 crc kubenswrapper[4606]: I0123 17:22:24.174476 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a624f2d-7f1c-4a1d-b24c-280c35dc69d5-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8l2kh\" (UID: \"5a624f2d-7f1c-4a1d-b24c-280c35dc69d5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8l2kh" Jan 23 17:22:24 crc kubenswrapper[4606]: I0123 17:22:24.174747 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a624f2d-7f1c-4a1d-b24c-280c35dc69d5-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8l2kh\" (UID: \"5a624f2d-7f1c-4a1d-b24c-280c35dc69d5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8l2kh" Jan 23 17:22:24 crc kubenswrapper[4606]: I0123 17:22:24.192219 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nttdq\" (UniqueName: \"kubernetes.io/projected/5a624f2d-7f1c-4a1d-b24c-280c35dc69d5-kube-api-access-nttdq\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8l2kh\" (UID: \"5a624f2d-7f1c-4a1d-b24c-280c35dc69d5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8l2kh" Jan 23 17:22:24 crc kubenswrapper[4606]: I0123 17:22:24.380266 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4b4e288-0565-452e-b141-fa028539d66e" path="/var/lib/kubelet/pods/d4b4e288-0565-452e-b141-fa028539d66e/volumes" Jan 23 17:22:24 crc kubenswrapper[4606]: I0123 17:22:24.395959 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8l2kh" Jan 23 17:22:24 crc kubenswrapper[4606]: I0123 17:22:24.907194 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-8l2kh"] Jan 23 17:22:24 crc kubenswrapper[4606]: I0123 17:22:24.991648 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8l2kh" event={"ID":"5a624f2d-7f1c-4a1d-b24c-280c35dc69d5","Type":"ContainerStarted","Data":"77592b10b8e55d4c10a817586bd7bf0f42e1a390c4ada46014640969077f1d78"} Jan 23 17:22:26 crc kubenswrapper[4606]: I0123 17:22:26.002932 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8l2kh" event={"ID":"5a624f2d-7f1c-4a1d-b24c-280c35dc69d5","Type":"ContainerStarted","Data":"52ce22cf947809cb68468f152fbea821d6269417445ff849b822c33b603b4f63"} Jan 23 17:22:32 crc kubenswrapper[4606]: I0123 17:22:32.366043 4606 scope.go:117] "RemoveContainer" containerID="7b51d444863ebb98e0cb740e7f913e5f2b808074bf1adeb1ef35356e37050c12" Jan 23 17:22:32 crc kubenswrapper[4606]: E0123 17:22:32.366760 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:22:45 crc kubenswrapper[4606]: I0123 17:22:45.366768 4606 scope.go:117] "RemoveContainer" containerID="7b51d444863ebb98e0cb740e7f913e5f2b808074bf1adeb1ef35356e37050c12" Jan 23 17:22:45 crc kubenswrapper[4606]: E0123 17:22:45.367913 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:22:56 crc kubenswrapper[4606]: I0123 17:22:56.371385 4606 scope.go:117] "RemoveContainer" containerID="7b51d444863ebb98e0cb740e7f913e5f2b808074bf1adeb1ef35356e37050c12" Jan 23 17:22:56 crc kubenswrapper[4606]: E0123 17:22:56.372142 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:23:02 crc kubenswrapper[4606]: I0123 17:23:02.411364 4606 scope.go:117] "RemoveContainer" containerID="95d4a6b67ebafbaafe1a527206a9b06c06bd17fab3abfd11617056641f14588a" Jan 23 17:23:02 crc kubenswrapper[4606]: I0123 17:23:02.445369 4606 scope.go:117] "RemoveContainer" containerID="75c38c5318faa4f1d0090ee9d7aae13fce60b6e071df831d6f7ecd75532637a3" Jan 23 17:23:06 crc kubenswrapper[4606]: I0123 17:23:06.049024 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8l2kh" 
podStartSLOduration=41.615756162 podStartE2EDuration="42.048997401s" podCreationTimestamp="2026-01-23 17:22:24 +0000 UTC" firstStartedPulling="2026-01-23 17:22:24.914964868 +0000 UTC m=+1778.915157274" lastFinishedPulling="2026-01-23 17:22:25.348206117 +0000 UTC m=+1779.348398513" observedRunningTime="2026-01-23 17:22:26.022456664 +0000 UTC m=+1780.022649070" watchObservedRunningTime="2026-01-23 17:23:06.048997401 +0000 UTC m=+1820.049189807" Jan 23 17:23:06 crc kubenswrapper[4606]: I0123 17:23:06.051204 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-8fb92"] Jan 23 17:23:06 crc kubenswrapper[4606]: I0123 17:23:06.062085 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-8fb92"] Jan 23 17:23:06 crc kubenswrapper[4606]: I0123 17:23:06.381018 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00e344f2-398e-4610-b627-97a98e1de0d7" path="/var/lib/kubelet/pods/00e344f2-398e-4610-b627-97a98e1de0d7/volumes" Jan 23 17:23:09 crc kubenswrapper[4606]: I0123 17:23:09.365822 4606 scope.go:117] "RemoveContainer" containerID="7b51d444863ebb98e0cb740e7f913e5f2b808074bf1adeb1ef35356e37050c12" Jan 23 17:23:09 crc kubenswrapper[4606]: E0123 17:23:09.366671 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:23:14 crc kubenswrapper[4606]: I0123 17:23:14.456985 4606 generic.go:334] "Generic (PLEG): container finished" podID="5a624f2d-7f1c-4a1d-b24c-280c35dc69d5" containerID="52ce22cf947809cb68468f152fbea821d6269417445ff849b822c33b603b4f63" exitCode=0 Jan 23 17:23:14 crc kubenswrapper[4606]: I0123 17:23:14.457067 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8l2kh" event={"ID":"5a624f2d-7f1c-4a1d-b24c-280c35dc69d5","Type":"ContainerDied","Data":"52ce22cf947809cb68468f152fbea821d6269417445ff849b822c33b603b4f63"} Jan 23 17:23:15 crc kubenswrapper[4606]: I0123 17:23:15.937064 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8l2kh" Jan 23 17:23:16 crc kubenswrapper[4606]: I0123 17:23:16.104144 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nttdq\" (UniqueName: \"kubernetes.io/projected/5a624f2d-7f1c-4a1d-b24c-280c35dc69d5-kube-api-access-nttdq\") pod \"5a624f2d-7f1c-4a1d-b24c-280c35dc69d5\" (UID: \"5a624f2d-7f1c-4a1d-b24c-280c35dc69d5\") " Jan 23 17:23:16 crc kubenswrapper[4606]: I0123 17:23:16.104191 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a624f2d-7f1c-4a1d-b24c-280c35dc69d5-inventory\") pod \"5a624f2d-7f1c-4a1d-b24c-280c35dc69d5\" (UID: \"5a624f2d-7f1c-4a1d-b24c-280c35dc69d5\") " Jan 23 17:23:16 crc kubenswrapper[4606]: I0123 17:23:16.104403 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a624f2d-7f1c-4a1d-b24c-280c35dc69d5-ssh-key-openstack-edpm-ipam\") pod \"5a624f2d-7f1c-4a1d-b24c-280c35dc69d5\" (UID: \"5a624f2d-7f1c-4a1d-b24c-280c35dc69d5\") " Jan 23 17:23:16 crc kubenswrapper[4606]: I0123 17:23:16.111283 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a624f2d-7f1c-4a1d-b24c-280c35dc69d5-kube-api-access-nttdq" (OuterVolumeSpecName: "kube-api-access-nttdq") pod "5a624f2d-7f1c-4a1d-b24c-280c35dc69d5" (UID: "5a624f2d-7f1c-4a1d-b24c-280c35dc69d5"). InnerVolumeSpecName "kube-api-access-nttdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:23:16 crc kubenswrapper[4606]: I0123 17:23:16.127860 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a624f2d-7f1c-4a1d-b24c-280c35dc69d5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5a624f2d-7f1c-4a1d-b24c-280c35dc69d5" (UID: "5a624f2d-7f1c-4a1d-b24c-280c35dc69d5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:23:16 crc kubenswrapper[4606]: I0123 17:23:16.134039 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a624f2d-7f1c-4a1d-b24c-280c35dc69d5-inventory" (OuterVolumeSpecName: "inventory") pod "5a624f2d-7f1c-4a1d-b24c-280c35dc69d5" (UID: "5a624f2d-7f1c-4a1d-b24c-280c35dc69d5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:23:16 crc kubenswrapper[4606]: I0123 17:23:16.208607 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nttdq\" (UniqueName: \"kubernetes.io/projected/5a624f2d-7f1c-4a1d-b24c-280c35dc69d5-kube-api-access-nttdq\") on node \"crc\" DevicePath \"\"" Jan 23 17:23:16 crc kubenswrapper[4606]: I0123 17:23:16.208648 4606 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a624f2d-7f1c-4a1d-b24c-280c35dc69d5-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 17:23:16 crc kubenswrapper[4606]: I0123 17:23:16.208684 4606 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a624f2d-7f1c-4a1d-b24c-280c35dc69d5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 17:23:16 crc kubenswrapper[4606]: I0123 17:23:16.477580 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8l2kh" event={"ID":"5a624f2d-7f1c-4a1d-b24c-280c35dc69d5","Type":"ContainerDied","Data":"77592b10b8e55d4c10a817586bd7bf0f42e1a390c4ada46014640969077f1d78"} Jan 23 17:23:16 crc kubenswrapper[4606]: I0123 17:23:16.477887 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77592b10b8e55d4c10a817586bd7bf0f42e1a390c4ada46014640969077f1d78" Jan 23 17:23:16 crc kubenswrapper[4606]: I0123 17:23:16.477666 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8l2kh" Jan 23 17:23:16 crc kubenswrapper[4606]: I0123 17:23:16.568402 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrds6"] Jan 23 17:23:16 crc kubenswrapper[4606]: E0123 17:23:16.568870 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a624f2d-7f1c-4a1d-b24c-280c35dc69d5" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 23 17:23:16 crc kubenswrapper[4606]: I0123 17:23:16.568901 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a624f2d-7f1c-4a1d-b24c-280c35dc69d5" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 23 17:23:16 crc kubenswrapper[4606]: I0123 17:23:16.569161 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a624f2d-7f1c-4a1d-b24c-280c35dc69d5" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 23 17:23:16 crc kubenswrapper[4606]: I0123 17:23:16.569945 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrds6" Jan 23 17:23:16 crc kubenswrapper[4606]: I0123 17:23:16.575059 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 17:23:16 crc kubenswrapper[4606]: I0123 17:23:16.575107 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 17:23:16 crc kubenswrapper[4606]: I0123 17:23:16.575059 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8tp68" Jan 23 17:23:16 crc kubenswrapper[4606]: I0123 17:23:16.575461 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 17:23:16 crc kubenswrapper[4606]: I0123 17:23:16.579441 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrds6"] Jan 23 17:23:16 crc kubenswrapper[4606]: I0123 17:23:16.722042 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2a00bdda-f978-4909-8408-2a7e91fe2927-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mrds6\" (UID: \"2a00bdda-f978-4909-8408-2a7e91fe2927\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrds6" Jan 23 17:23:16 crc kubenswrapper[4606]: I0123 17:23:16.722252 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49zxl\" (UniqueName: \"kubernetes.io/projected/2a00bdda-f978-4909-8408-2a7e91fe2927-kube-api-access-49zxl\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mrds6\" (UID: \"2a00bdda-f978-4909-8408-2a7e91fe2927\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrds6" Jan 23 17:23:16 crc kubenswrapper[4606]: I0123 17:23:16.722564 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a00bdda-f978-4909-8408-2a7e91fe2927-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mrds6\" (UID: \"2a00bdda-f978-4909-8408-2a7e91fe2927\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrds6" Jan 23 17:23:16 crc kubenswrapper[4606]: I0123 17:23:16.824192 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a00bdda-f978-4909-8408-2a7e91fe2927-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mrds6\" (UID: \"2a00bdda-f978-4909-8408-2a7e91fe2927\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrds6" Jan 23 17:23:16 crc kubenswrapper[4606]: I0123 17:23:16.824313 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2a00bdda-f978-4909-8408-2a7e91fe2927-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mrds6\" (UID: \"2a00bdda-f978-4909-8408-2a7e91fe2927\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrds6" Jan 23 17:23:16 crc kubenswrapper[4606]: I0123 17:23:16.824351 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49zxl\" (UniqueName: 
\"kubernetes.io/projected/2a00bdda-f978-4909-8408-2a7e91fe2927-kube-api-access-49zxl\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mrds6\" (UID: \"2a00bdda-f978-4909-8408-2a7e91fe2927\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrds6" Jan 23 17:23:16 crc kubenswrapper[4606]: I0123 17:23:16.838798 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a00bdda-f978-4909-8408-2a7e91fe2927-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mrds6\" (UID: \"2a00bdda-f978-4909-8408-2a7e91fe2927\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrds6" Jan 23 17:23:16 crc kubenswrapper[4606]: I0123 17:23:16.838894 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2a00bdda-f978-4909-8408-2a7e91fe2927-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mrds6\" (UID: \"2a00bdda-f978-4909-8408-2a7e91fe2927\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrds6" Jan 23 17:23:16 crc kubenswrapper[4606]: I0123 17:23:16.842230 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49zxl\" (UniqueName: \"kubernetes.io/projected/2a00bdda-f978-4909-8408-2a7e91fe2927-kube-api-access-49zxl\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mrds6\" (UID: \"2a00bdda-f978-4909-8408-2a7e91fe2927\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrds6" Jan 23 17:23:16 crc kubenswrapper[4606]: I0123 17:23:16.889562 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrds6" Jan 23 17:23:17 crc kubenswrapper[4606]: I0123 17:23:17.430191 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrds6"] Jan 23 17:23:17 crc kubenswrapper[4606]: I0123 17:23:17.486840 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrds6" event={"ID":"2a00bdda-f978-4909-8408-2a7e91fe2927","Type":"ContainerStarted","Data":"0c25a4ae766c8421d9c41c4d7d6314de88cabe7d01abd6edb33c6e1f3d4cb083"} Jan 23 17:23:18 crc kubenswrapper[4606]: I0123 17:23:18.495696 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrds6" event={"ID":"2a00bdda-f978-4909-8408-2a7e91fe2927","Type":"ContainerStarted","Data":"4e2f355b51b46d2da03c05ab582f00144c96f25f6b450a34a218575090ab5fdf"} Jan 23 17:23:18 crc kubenswrapper[4606]: I0123 17:23:18.522393 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrds6" podStartSLOduration=2.043062094 podStartE2EDuration="2.522375492s" podCreationTimestamp="2026-01-23 17:23:16 +0000 UTC" firstStartedPulling="2026-01-23 17:23:17.433171085 +0000 UTC m=+1831.433363491" lastFinishedPulling="2026-01-23 17:23:17.912484483 +0000 UTC m=+1831.912676889" observedRunningTime="2026-01-23 17:23:18.51454287 +0000 UTC m=+1832.514735276" watchObservedRunningTime="2026-01-23 17:23:18.522375492 +0000 UTC m=+1832.522567898" Jan 23 17:23:24 crc kubenswrapper[4606]: I0123 17:23:24.365583 4606 scope.go:117] "RemoveContainer" containerID="7b51d444863ebb98e0cb740e7f913e5f2b808074bf1adeb1ef35356e37050c12" Jan 23 17:23:24 crc kubenswrapper[4606]: E0123 
17:23:24.366379 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:23:39 crc kubenswrapper[4606]: I0123 17:23:39.366315 4606 scope.go:117] "RemoveContainer" containerID="7b51d444863ebb98e0cb740e7f913e5f2b808074bf1adeb1ef35356e37050c12" Jan 23 17:23:39 crc kubenswrapper[4606]: E0123 17:23:39.367142 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:23:52 crc kubenswrapper[4606]: I0123 17:23:52.366603 4606 scope.go:117] "RemoveContainer" containerID="7b51d444863ebb98e0cb740e7f913e5f2b808074bf1adeb1ef35356e37050c12" Jan 23 17:23:52 crc kubenswrapper[4606]: E0123 17:23:52.367314 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:24:02 crc kubenswrapper[4606]: I0123 17:24:02.564498 4606 scope.go:117] "RemoveContainer" containerID="111d88bb0611c3f9bc820889266cfd327a010e6efd18e5b74deeccef116c060e" Jan 23 17:24:05 crc kubenswrapper[4606]: I0123 17:24:05.366116 4606 scope.go:117] "RemoveContainer" containerID="7b51d444863ebb98e0cb740e7f913e5f2b808074bf1adeb1ef35356e37050c12" Jan 23 17:24:05 crc kubenswrapper[4606]: E0123 17:24:05.366757 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:24:19 crc kubenswrapper[4606]: I0123 17:24:19.366768 4606 scope.go:117] "RemoveContainer" containerID="7b51d444863ebb98e0cb740e7f913e5f2b808074bf1adeb1ef35356e37050c12" Jan 23 17:24:19 crc kubenswrapper[4606]: E0123 17:24:19.367449 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:24:19 crc kubenswrapper[4606]: I0123 17:24:19.631540 4606 generic.go:334] "Generic (PLEG): container finished" podID="2a00bdda-f978-4909-8408-2a7e91fe2927" 
containerID="4e2f355b51b46d2da03c05ab582f00144c96f25f6b450a34a218575090ab5fdf" exitCode=0 Jan 23 17:24:19 crc kubenswrapper[4606]: I0123 17:24:19.631665 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrds6" event={"ID":"2a00bdda-f978-4909-8408-2a7e91fe2927","Type":"ContainerDied","Data":"4e2f355b51b46d2da03c05ab582f00144c96f25f6b450a34a218575090ab5fdf"} Jan 23 17:24:21 crc kubenswrapper[4606]: I0123 17:24:21.051238 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrds6" Jan 23 17:24:21 crc kubenswrapper[4606]: I0123 17:24:21.145167 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2a00bdda-f978-4909-8408-2a7e91fe2927-ssh-key-openstack-edpm-ipam\") pod \"2a00bdda-f978-4909-8408-2a7e91fe2927\" (UID: \"2a00bdda-f978-4909-8408-2a7e91fe2927\") " Jan 23 17:24:21 crc kubenswrapper[4606]: I0123 17:24:21.145236 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a00bdda-f978-4909-8408-2a7e91fe2927-inventory\") pod \"2a00bdda-f978-4909-8408-2a7e91fe2927\" (UID: \"2a00bdda-f978-4909-8408-2a7e91fe2927\") " Jan 23 17:24:21 crc kubenswrapper[4606]: I0123 17:24:21.145326 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49zxl\" (UniqueName: \"kubernetes.io/projected/2a00bdda-f978-4909-8408-2a7e91fe2927-kube-api-access-49zxl\") pod \"2a00bdda-f978-4909-8408-2a7e91fe2927\" (UID: \"2a00bdda-f978-4909-8408-2a7e91fe2927\") " Jan 23 17:24:21 crc kubenswrapper[4606]: I0123 17:24:21.152874 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a00bdda-f978-4909-8408-2a7e91fe2927-kube-api-access-49zxl" (OuterVolumeSpecName: "kube-api-access-49zxl") pod "2a00bdda-f978-4909-8408-2a7e91fe2927" (UID: "2a00bdda-f978-4909-8408-2a7e91fe2927"). InnerVolumeSpecName "kube-api-access-49zxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:24:21 crc kubenswrapper[4606]: I0123 17:24:21.174196 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a00bdda-f978-4909-8408-2a7e91fe2927-inventory" (OuterVolumeSpecName: "inventory") pod "2a00bdda-f978-4909-8408-2a7e91fe2927" (UID: "2a00bdda-f978-4909-8408-2a7e91fe2927"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:24:21 crc kubenswrapper[4606]: I0123 17:24:21.180330 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a00bdda-f978-4909-8408-2a7e91fe2927-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2a00bdda-f978-4909-8408-2a7e91fe2927" (UID: "2a00bdda-f978-4909-8408-2a7e91fe2927"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:24:21 crc kubenswrapper[4606]: I0123 17:24:21.246559 4606 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2a00bdda-f978-4909-8408-2a7e91fe2927-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 17:24:21 crc kubenswrapper[4606]: I0123 17:24:21.246609 4606 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a00bdda-f978-4909-8408-2a7e91fe2927-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 17:24:21 crc kubenswrapper[4606]: I0123 17:24:21.246621 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49zxl\" (UniqueName: \"kubernetes.io/projected/2a00bdda-f978-4909-8408-2a7e91fe2927-kube-api-access-49zxl\") on node \"crc\" DevicePath \"\"" Jan 23 17:24:21 crc kubenswrapper[4606]: I0123 17:24:21.650064 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrds6" event={"ID":"2a00bdda-f978-4909-8408-2a7e91fe2927","Type":"ContainerDied","Data":"0c25a4ae766c8421d9c41c4d7d6314de88cabe7d01abd6edb33c6e1f3d4cb083"} Jan 23 17:24:21 crc kubenswrapper[4606]: I0123 17:24:21.650114 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c25a4ae766c8421d9c41c4d7d6314de88cabe7d01abd6edb33c6e1f3d4cb083" Jan 23 17:24:21 crc kubenswrapper[4606]: I0123 17:24:21.650153 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mrds6" Jan 23 17:24:21 crc kubenswrapper[4606]: I0123 17:24:21.765040 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-cwhrv"] Jan 23 17:24:21 crc kubenswrapper[4606]: E0123 17:24:21.765425 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a00bdda-f978-4909-8408-2a7e91fe2927" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 23 17:24:21 crc kubenswrapper[4606]: I0123 17:24:21.765443 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a00bdda-f978-4909-8408-2a7e91fe2927" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 23 17:24:21 crc kubenswrapper[4606]: I0123 17:24:21.765673 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a00bdda-f978-4909-8408-2a7e91fe2927" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 23 17:24:21 crc kubenswrapper[4606]: I0123 17:24:21.766243 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cwhrv" Jan 23 17:24:21 crc kubenswrapper[4606]: I0123 17:24:21.776870 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 17:24:21 crc kubenswrapper[4606]: I0123 17:24:21.776974 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 17:24:21 crc kubenswrapper[4606]: I0123 17:24:21.776875 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 17:24:21 crc kubenswrapper[4606]: I0123 17:24:21.777133 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8tp68" Jan 23 17:24:21 crc kubenswrapper[4606]: I0123 17:24:21.819616 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-cwhrv"] Jan 23 17:24:21 crc kubenswrapper[4606]: I0123 17:24:21.863967 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3a48b8b-1205-4128-a038-65263806b2be-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-cwhrv\" (UID: \"d3a48b8b-1205-4128-a038-65263806b2be\") " pod="openstack/ssh-known-hosts-edpm-deployment-cwhrv" Jan 23 17:24:21 crc kubenswrapper[4606]: I0123 17:24:21.864041 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d3a48b8b-1205-4128-a038-65263806b2be-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-cwhrv\" (UID: \"d3a48b8b-1205-4128-a038-65263806b2be\") " pod="openstack/ssh-known-hosts-edpm-deployment-cwhrv" Jan 23 17:24:21 crc kubenswrapper[4606]: I0123 17:24:21.864100 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8r5m\" (UniqueName: \"kubernetes.io/projected/d3a48b8b-1205-4128-a038-65263806b2be-kube-api-access-t8r5m\") pod \"ssh-known-hosts-edpm-deployment-cwhrv\" (UID: \"d3a48b8b-1205-4128-a038-65263806b2be\") " pod="openstack/ssh-known-hosts-edpm-deployment-cwhrv" Jan 23 17:24:21 crc kubenswrapper[4606]: I0123 17:24:21.965988 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d3a48b8b-1205-4128-a038-65263806b2be-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-cwhrv\" (UID: \"d3a48b8b-1205-4128-a038-65263806b2be\") " pod="openstack/ssh-known-hosts-edpm-deployment-cwhrv" Jan 23 17:24:21 crc kubenswrapper[4606]: I0123 17:24:21.966089 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8r5m\" (UniqueName: \"kubernetes.io/projected/d3a48b8b-1205-4128-a038-65263806b2be-kube-api-access-t8r5m\") pod \"ssh-known-hosts-edpm-deployment-cwhrv\" (UID: \"d3a48b8b-1205-4128-a038-65263806b2be\") " pod="openstack/ssh-known-hosts-edpm-deployment-cwhrv" Jan 23 17:24:21 crc kubenswrapper[4606]: I0123 17:24:21.966214 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3a48b8b-1205-4128-a038-65263806b2be-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-cwhrv\" (UID: \"d3a48b8b-1205-4128-a038-65263806b2be\") " pod="openstack/ssh-known-hosts-edpm-deployment-cwhrv" Jan 23 17:24:21 crc 
kubenswrapper[4606]: I0123 17:24:21.972212 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3a48b8b-1205-4128-a038-65263806b2be-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-cwhrv\" (UID: \"d3a48b8b-1205-4128-a038-65263806b2be\") " pod="openstack/ssh-known-hosts-edpm-deployment-cwhrv" Jan 23 17:24:21 crc kubenswrapper[4606]: I0123 17:24:21.973294 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d3a48b8b-1205-4128-a038-65263806b2be-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-cwhrv\" (UID: \"d3a48b8b-1205-4128-a038-65263806b2be\") " pod="openstack/ssh-known-hosts-edpm-deployment-cwhrv" Jan 23 17:24:21 crc kubenswrapper[4606]: I0123 17:24:21.981978 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8r5m\" (UniqueName: \"kubernetes.io/projected/d3a48b8b-1205-4128-a038-65263806b2be-kube-api-access-t8r5m\") pod \"ssh-known-hosts-edpm-deployment-cwhrv\" (UID: \"d3a48b8b-1205-4128-a038-65263806b2be\") " pod="openstack/ssh-known-hosts-edpm-deployment-cwhrv" Jan 23 17:24:22 crc kubenswrapper[4606]: I0123 17:24:22.104936 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cwhrv" Jan 23 17:24:22 crc kubenswrapper[4606]: I0123 17:24:22.808515 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-cwhrv"] Jan 23 17:24:22 crc kubenswrapper[4606]: I0123 17:24:22.817009 4606 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 17:24:23 crc kubenswrapper[4606]: I0123 17:24:23.678211 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cwhrv" event={"ID":"d3a48b8b-1205-4128-a038-65263806b2be","Type":"ContainerStarted","Data":"77a3c4d117ee3fcfb0d9d7c0b818ed7ce1100f9e963fef68f3fbfba0e2945201"} Jan 23 17:24:23 crc kubenswrapper[4606]: I0123 17:24:23.678808 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cwhrv" event={"ID":"d3a48b8b-1205-4128-a038-65263806b2be","Type":"ContainerStarted","Data":"696c3e965aac8e7a47c390bd865fc7d64f36197dc001e0d1a11464ae65579512"} Jan 23 17:24:23 crc kubenswrapper[4606]: I0123 17:24:23.703433 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-cwhrv" podStartSLOduration=2.249739179 podStartE2EDuration="2.703416706s" podCreationTimestamp="2026-01-23 17:24:21 +0000 UTC" firstStartedPulling="2026-01-23 17:24:22.816743027 +0000 UTC m=+1896.816935433" lastFinishedPulling="2026-01-23 17:24:23.270420544 +0000 UTC m=+1897.270612960" observedRunningTime="2026-01-23 17:24:23.693787211 +0000 UTC m=+1897.693979627" watchObservedRunningTime="2026-01-23 17:24:23.703416706 +0000 UTC m=+1897.703609102" Jan 23 17:24:31 crc kubenswrapper[4606]: I0123 17:24:31.366282 4606 scope.go:117] "RemoveContainer" containerID="7b51d444863ebb98e0cb740e7f913e5f2b808074bf1adeb1ef35356e37050c12" Jan 23 17:24:31 crc kubenswrapper[4606]: E0123 17:24:31.366841 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:24:31 crc kubenswrapper[4606]: I0123 17:24:31.740217 4606 generic.go:334] "Generic (PLEG): container finished" podID="d3a48b8b-1205-4128-a038-65263806b2be" containerID="77a3c4d117ee3fcfb0d9d7c0b818ed7ce1100f9e963fef68f3fbfba0e2945201" exitCode=0 Jan 23 17:24:31 crc kubenswrapper[4606]: I0123 17:24:31.740277 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cwhrv" event={"ID":"d3a48b8b-1205-4128-a038-65263806b2be","Type":"ContainerDied","Data":"77a3c4d117ee3fcfb0d9d7c0b818ed7ce1100f9e963fef68f3fbfba0e2945201"} Jan 23 17:24:33 crc kubenswrapper[4606]: I0123 17:24:33.150952 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cwhrv" Jan 23 17:24:33 crc kubenswrapper[4606]: I0123 17:24:33.283304 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d3a48b8b-1205-4128-a038-65263806b2be-inventory-0\") pod \"d3a48b8b-1205-4128-a038-65263806b2be\" (UID: \"d3a48b8b-1205-4128-a038-65263806b2be\") " Jan 23 17:24:33 crc kubenswrapper[4606]: I0123 17:24:33.283424 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3a48b8b-1205-4128-a038-65263806b2be-ssh-key-openstack-edpm-ipam\") pod \"d3a48b8b-1205-4128-a038-65263806b2be\" (UID: \"d3a48b8b-1205-4128-a038-65263806b2be\") " Jan 23 17:24:33 crc kubenswrapper[4606]: I0123 17:24:33.283456 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8r5m\" (UniqueName: \"kubernetes.io/projected/d3a48b8b-1205-4128-a038-65263806b2be-kube-api-access-t8r5m\") pod \"d3a48b8b-1205-4128-a038-65263806b2be\" (UID: \"d3a48b8b-1205-4128-a038-65263806b2be\") " Jan 23 17:24:33 crc kubenswrapper[4606]: I0123 17:24:33.290723 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3a48b8b-1205-4128-a038-65263806b2be-kube-api-access-t8r5m" (OuterVolumeSpecName: "kube-api-access-t8r5m") pod "d3a48b8b-1205-4128-a038-65263806b2be" (UID: "d3a48b8b-1205-4128-a038-65263806b2be"). InnerVolumeSpecName "kube-api-access-t8r5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:24:33 crc kubenswrapper[4606]: I0123 17:24:33.314578 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3a48b8b-1205-4128-a038-65263806b2be-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d3a48b8b-1205-4128-a038-65263806b2be" (UID: "d3a48b8b-1205-4128-a038-65263806b2be"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:24:33 crc kubenswrapper[4606]: I0123 17:24:33.325513 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3a48b8b-1205-4128-a038-65263806b2be-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "d3a48b8b-1205-4128-a038-65263806b2be" (UID: "d3a48b8b-1205-4128-a038-65263806b2be"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:24:33 crc kubenswrapper[4606]: I0123 17:24:33.385792 4606 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3a48b8b-1205-4128-a038-65263806b2be-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 17:24:33 crc kubenswrapper[4606]: I0123 17:24:33.385829 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8r5m\" (UniqueName: \"kubernetes.io/projected/d3a48b8b-1205-4128-a038-65263806b2be-kube-api-access-t8r5m\") on node \"crc\" DevicePath \"\"" Jan 23 17:24:33 crc kubenswrapper[4606]: I0123 17:24:33.385842 4606 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d3a48b8b-1205-4128-a038-65263806b2be-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 23 17:24:33 crc kubenswrapper[4606]: I0123 17:24:33.759913 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cwhrv" event={"ID":"d3a48b8b-1205-4128-a038-65263806b2be","Type":"ContainerDied","Data":"696c3e965aac8e7a47c390bd865fc7d64f36197dc001e0d1a11464ae65579512"} Jan 23 17:24:33 crc kubenswrapper[4606]: I0123 17:24:33.759949 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="696c3e965aac8e7a47c390bd865fc7d64f36197dc001e0d1a11464ae65579512" Jan 23 17:24:33 crc kubenswrapper[4606]: I0123 17:24:33.760369 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cwhrv" Jan 23 17:24:33 crc kubenswrapper[4606]: I0123 17:24:33.874939 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-trdvn"] Jan 23 17:24:33 crc kubenswrapper[4606]: E0123 17:24:33.875964 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3a48b8b-1205-4128-a038-65263806b2be" containerName="ssh-known-hosts-edpm-deployment" Jan 23 17:24:33 crc kubenswrapper[4606]: I0123 17:24:33.875988 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3a48b8b-1205-4128-a038-65263806b2be" containerName="ssh-known-hosts-edpm-deployment" Jan 23 17:24:33 crc kubenswrapper[4606]: I0123 17:24:33.876309 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3a48b8b-1205-4128-a038-65263806b2be" containerName="ssh-known-hosts-edpm-deployment" Jan 23 17:24:33 crc kubenswrapper[4606]: I0123 17:24:33.877191 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-trdvn" Jan 23 17:24:33 crc kubenswrapper[4606]: I0123 17:24:33.879149 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 17:24:33 crc kubenswrapper[4606]: I0123 17:24:33.881909 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 17:24:33 crc kubenswrapper[4606]: I0123 17:24:33.882604 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 17:24:33 crc kubenswrapper[4606]: I0123 17:24:33.882709 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8tp68" Jan 23 17:24:33 crc kubenswrapper[4606]: I0123 17:24:33.899173 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-trdvn"] Jan 23 17:24:33 crc kubenswrapper[4606]: I0123 17:24:33.899285 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c1f814f-8269-4be1-9730-4df95c3eb31b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-trdvn\" (UID: \"2c1f814f-8269-4be1-9730-4df95c3eb31b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-trdvn" Jan 23 17:24:33 crc kubenswrapper[4606]: I0123 17:24:33.899330 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2c1f814f-8269-4be1-9730-4df95c3eb31b-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-trdvn\" (UID: \"2c1f814f-8269-4be1-9730-4df95c3eb31b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-trdvn" Jan 23 17:24:33 crc kubenswrapper[4606]: I0123 17:24:33.899473 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljc5j\" (UniqueName: \"kubernetes.io/projected/2c1f814f-8269-4be1-9730-4df95c3eb31b-kube-api-access-ljc5j\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-trdvn\" (UID: \"2c1f814f-8269-4be1-9730-4df95c3eb31b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-trdvn" Jan 23 17:24:34 crc kubenswrapper[4606]: I0123 17:24:34.000412 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljc5j\" (UniqueName: \"kubernetes.io/projected/2c1f814f-8269-4be1-9730-4df95c3eb31b-kube-api-access-ljc5j\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-trdvn\" (UID: \"2c1f814f-8269-4be1-9730-4df95c3eb31b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-trdvn" Jan 23 17:24:34 crc kubenswrapper[4606]: I0123 17:24:34.000501 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c1f814f-8269-4be1-9730-4df95c3eb31b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-trdvn\" (UID: \"2c1f814f-8269-4be1-9730-4df95c3eb31b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-trdvn" Jan 23 17:24:34 crc kubenswrapper[4606]: I0123 17:24:34.000527 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2c1f814f-8269-4be1-9730-4df95c3eb31b-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-trdvn\" (UID: \"2c1f814f-8269-4be1-9730-4df95c3eb31b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-trdvn" Jan 23 17:24:34 crc kubenswrapper[4606]: I0123 17:24:34.004638 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2c1f814f-8269-4be1-9730-4df95c3eb31b-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-trdvn\" (UID: \"2c1f814f-8269-4be1-9730-4df95c3eb31b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-trdvn" Jan 23 17:24:34 crc kubenswrapper[4606]: I0123 17:24:34.012022 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c1f814f-8269-4be1-9730-4df95c3eb31b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-trdvn\" (UID: \"2c1f814f-8269-4be1-9730-4df95c3eb31b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-trdvn" Jan 23 17:24:34 crc kubenswrapper[4606]: I0123 17:24:34.021851 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljc5j\" (UniqueName: \"kubernetes.io/projected/2c1f814f-8269-4be1-9730-4df95c3eb31b-kube-api-access-ljc5j\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-trdvn\" (UID: \"2c1f814f-8269-4be1-9730-4df95c3eb31b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-trdvn" Jan 23 17:24:34 crc kubenswrapper[4606]: I0123 17:24:34.208891 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-trdvn" Jan 23 17:24:34 crc kubenswrapper[4606]: I0123 17:24:34.833253 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-trdvn"] Jan 23 17:24:35 crc kubenswrapper[4606]: I0123 17:24:35.777390 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-trdvn" event={"ID":"2c1f814f-8269-4be1-9730-4df95c3eb31b","Type":"ContainerStarted","Data":"ef78f6e57aebd7aa923a7b3531306ae9b4e4c9b946ba744b8b78ee971708ce19"} Jan 23 17:24:37 crc kubenswrapper[4606]: I0123 17:24:37.808513 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-trdvn" event={"ID":"2c1f814f-8269-4be1-9730-4df95c3eb31b","Type":"ContainerStarted","Data":"6776c224b09ff2668508874e7c4843c8281dc8d46b054a7b784f31b123e50919"} Jan 23 17:24:37 crc kubenswrapper[4606]: I0123 17:24:37.835362 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-trdvn" podStartSLOduration=2.996500995 podStartE2EDuration="4.835340994s" podCreationTimestamp="2026-01-23 17:24:33 +0000 UTC" firstStartedPulling="2026-01-23 17:24:34.834869156 +0000 UTC m=+1908.835061562" lastFinishedPulling="2026-01-23 17:24:36.673709155 +0000 UTC m=+1910.673901561" observedRunningTime="2026-01-23 17:24:37.82613992 +0000 UTC m=+1911.826332326" watchObservedRunningTime="2026-01-23 17:24:37.835340994 +0000 UTC m=+1911.835533420" Jan 23 17:24:42 crc kubenswrapper[4606]: I0123 17:24:42.366642 4606 scope.go:117] "RemoveContainer" containerID="7b51d444863ebb98e0cb740e7f913e5f2b808074bf1adeb1ef35356e37050c12" Jan 23 17:24:42 crc kubenswrapper[4606]: E0123 17:24:42.367265 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:24:47 crc kubenswrapper[4606]: I0123 17:24:47.962914 4606 generic.go:334] "Generic (PLEG): container finished" podID="2c1f814f-8269-4be1-9730-4df95c3eb31b" containerID="6776c224b09ff2668508874e7c4843c8281dc8d46b054a7b784f31b123e50919" exitCode=0 Jan 23 17:24:47 crc kubenswrapper[4606]: I0123 17:24:47.962998 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-trdvn" event={"ID":"2c1f814f-8269-4be1-9730-4df95c3eb31b","Type":"ContainerDied","Data":"6776c224b09ff2668508874e7c4843c8281dc8d46b054a7b784f31b123e50919"} Jan 23 17:24:49 crc kubenswrapper[4606]: I0123 17:24:49.348274 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-trdvn" Jan 23 17:24:49 crc kubenswrapper[4606]: I0123 17:24:49.458410 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2c1f814f-8269-4be1-9730-4df95c3eb31b-ssh-key-openstack-edpm-ipam\") pod \"2c1f814f-8269-4be1-9730-4df95c3eb31b\" (UID: \"2c1f814f-8269-4be1-9730-4df95c3eb31b\") " Jan 23 17:24:49 crc kubenswrapper[4606]: I0123 17:24:49.459228 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c1f814f-8269-4be1-9730-4df95c3eb31b-inventory\") pod \"2c1f814f-8269-4be1-9730-4df95c3eb31b\" (UID: \"2c1f814f-8269-4be1-9730-4df95c3eb31b\") " Jan 23 17:24:49 crc kubenswrapper[4606]: I0123 17:24:49.459257 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljc5j\" (UniqueName: \"kubernetes.io/projected/2c1f814f-8269-4be1-9730-4df95c3eb31b-kube-api-access-ljc5j\") pod \"2c1f814f-8269-4be1-9730-4df95c3eb31b\" (UID: \"2c1f814f-8269-4be1-9730-4df95c3eb31b\") " Jan 23 17:24:49 crc kubenswrapper[4606]: I0123 17:24:49.480624 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c1f814f-8269-4be1-9730-4df95c3eb31b-kube-api-access-ljc5j" (OuterVolumeSpecName: "kube-api-access-ljc5j") pod "2c1f814f-8269-4be1-9730-4df95c3eb31b" (UID: "2c1f814f-8269-4be1-9730-4df95c3eb31b"). InnerVolumeSpecName "kube-api-access-ljc5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:24:49 crc kubenswrapper[4606]: I0123 17:24:49.489032 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c1f814f-8269-4be1-9730-4df95c3eb31b-inventory" (OuterVolumeSpecName: "inventory") pod "2c1f814f-8269-4be1-9730-4df95c3eb31b" (UID: "2c1f814f-8269-4be1-9730-4df95c3eb31b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:24:49 crc kubenswrapper[4606]: I0123 17:24:49.489609 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c1f814f-8269-4be1-9730-4df95c3eb31b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2c1f814f-8269-4be1-9730-4df95c3eb31b" (UID: "2c1f814f-8269-4be1-9730-4df95c3eb31b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:24:49 crc kubenswrapper[4606]: I0123 17:24:49.562136 4606 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c1f814f-8269-4be1-9730-4df95c3eb31b-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 17:24:49 crc kubenswrapper[4606]: I0123 17:24:49.562173 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljc5j\" (UniqueName: \"kubernetes.io/projected/2c1f814f-8269-4be1-9730-4df95c3eb31b-kube-api-access-ljc5j\") on node \"crc\" DevicePath \"\"" Jan 23 17:24:49 crc kubenswrapper[4606]: I0123 17:24:49.562184 4606 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2c1f814f-8269-4be1-9730-4df95c3eb31b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 17:24:49 crc kubenswrapper[4606]: I0123 17:24:49.980353 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-trdvn" event={"ID":"2c1f814f-8269-4be1-9730-4df95c3eb31b","Type":"ContainerDied","Data":"ef78f6e57aebd7aa923a7b3531306ae9b4e4c9b946ba744b8b78ee971708ce19"} Jan 23 17:24:49 crc kubenswrapper[4606]: I0123 17:24:49.980405 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef78f6e57aebd7aa923a7b3531306ae9b4e4c9b946ba744b8b78ee971708ce19" Jan 23 17:24:49 crc kubenswrapper[4606]: I0123 17:24:49.980729 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-trdvn" Jan 23 17:24:50 crc kubenswrapper[4606]: I0123 17:24:50.071390 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twr6s"] Jan 23 17:24:50 crc kubenswrapper[4606]: E0123 17:24:50.071984 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c1f814f-8269-4be1-9730-4df95c3eb31b" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 23 17:24:50 crc kubenswrapper[4606]: I0123 17:24:50.072089 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c1f814f-8269-4be1-9730-4df95c3eb31b" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 23 17:24:50 crc kubenswrapper[4606]: I0123 17:24:50.072352 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c1f814f-8269-4be1-9730-4df95c3eb31b" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 23 17:24:50 crc kubenswrapper[4606]: I0123 17:24:50.073139 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twr6s" Jan 23 17:24:50 crc kubenswrapper[4606]: I0123 17:24:50.075339 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 17:24:50 crc kubenswrapper[4606]: I0123 17:24:50.075485 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 17:24:50 crc kubenswrapper[4606]: I0123 17:24:50.075707 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8tp68" Jan 23 17:24:50 crc kubenswrapper[4606]: I0123 17:24:50.076918 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 17:24:50 crc kubenswrapper[4606]: I0123 17:24:50.083418 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twr6s"] Jan 23 17:24:50 crc kubenswrapper[4606]: I0123 17:24:50.172561 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1b5063e-ec21-4a81-b0d1-af09f9ac9019-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-twr6s\" (UID: \"c1b5063e-ec21-4a81-b0d1-af09f9ac9019\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twr6s" Jan 23 17:24:50 crc kubenswrapper[4606]: I0123 17:24:50.172652 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq686\" (UniqueName: \"kubernetes.io/projected/c1b5063e-ec21-4a81-b0d1-af09f9ac9019-kube-api-access-jq686\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-twr6s\" (UID: \"c1b5063e-ec21-4a81-b0d1-af09f9ac9019\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twr6s" Jan 23 17:24:50 crc kubenswrapper[4606]: I0123 17:24:50.172788 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1b5063e-ec21-4a81-b0d1-af09f9ac9019-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-twr6s\" (UID: \"c1b5063e-ec21-4a81-b0d1-af09f9ac9019\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twr6s" Jan 23 17:24:50 crc kubenswrapper[4606]: I0123 17:24:50.273820 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1b5063e-ec21-4a81-b0d1-af09f9ac9019-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-twr6s\" (UID: \"c1b5063e-ec21-4a81-b0d1-af09f9ac9019\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twr6s" Jan 23 17:24:50 crc kubenswrapper[4606]: I0123 17:24:50.274082 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jq686\" (UniqueName: \"kubernetes.io/projected/c1b5063e-ec21-4a81-b0d1-af09f9ac9019-kube-api-access-jq686\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-twr6s\" (UID: \"c1b5063e-ec21-4a81-b0d1-af09f9ac9019\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twr6s" Jan 23 17:24:50 crc kubenswrapper[4606]: I0123 17:24:50.274231 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1b5063e-ec21-4a81-b0d1-af09f9ac9019-ssh-key-openstack-edpm-ipam\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-twr6s\" (UID: \"c1b5063e-ec21-4a81-b0d1-af09f9ac9019\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twr6s" Jan 23 17:24:50 crc kubenswrapper[4606]: I0123 17:24:50.277673 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1b5063e-ec21-4a81-b0d1-af09f9ac9019-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-twr6s\" (UID: \"c1b5063e-ec21-4a81-b0d1-af09f9ac9019\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twr6s" Jan 23 17:24:50 crc kubenswrapper[4606]: I0123 17:24:50.279107 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1b5063e-ec21-4a81-b0d1-af09f9ac9019-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-twr6s\" (UID: \"c1b5063e-ec21-4a81-b0d1-af09f9ac9019\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twr6s" Jan 23 17:24:50 crc kubenswrapper[4606]: I0123 17:24:50.294992 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq686\" (UniqueName: \"kubernetes.io/projected/c1b5063e-ec21-4a81-b0d1-af09f9ac9019-kube-api-access-jq686\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-twr6s\" (UID: \"c1b5063e-ec21-4a81-b0d1-af09f9ac9019\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twr6s" Jan 23 17:24:50 crc kubenswrapper[4606]: I0123 17:24:50.391295 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twr6s" Jan 23 17:24:50 crc kubenswrapper[4606]: I0123 17:24:50.913316 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twr6s"] Jan 23 17:24:50 crc kubenswrapper[4606]: I0123 17:24:50.989714 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twr6s" event={"ID":"c1b5063e-ec21-4a81-b0d1-af09f9ac9019","Type":"ContainerStarted","Data":"758ce9985d4491c309a9d61dacb8dd3dd3b701586295a5e1408e401ee82047ce"} Jan 23 17:24:51 crc kubenswrapper[4606]: I0123 17:24:51.998800 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twr6s" event={"ID":"c1b5063e-ec21-4a81-b0d1-af09f9ac9019","Type":"ContainerStarted","Data":"3f01e0b39f4d1a2ecef8b04519506adc6fab2fc426c966d3a24352f577706e0e"} Jan 23 17:24:52 crc kubenswrapper[4606]: I0123 17:24:52.019430 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twr6s" podStartSLOduration=1.507701652 podStartE2EDuration="2.019409147s" podCreationTimestamp="2026-01-23 17:24:50 +0000 UTC" firstStartedPulling="2026-01-23 17:24:50.921971627 +0000 UTC m=+1924.922164033" lastFinishedPulling="2026-01-23 17:24:51.433679112 +0000 UTC m=+1925.433871528" observedRunningTime="2026-01-23 17:24:52.015997923 +0000 UTC m=+1926.016190329" watchObservedRunningTime="2026-01-23 17:24:52.019409147 +0000 UTC m=+1926.019601553" Jan 23 17:24:53 crc kubenswrapper[4606]: I0123 17:24:53.366481 4606 scope.go:117] "RemoveContainer" containerID="7b51d444863ebb98e0cb740e7f913e5f2b808074bf1adeb1ef35356e37050c12" Jan 23 17:24:53 crc kubenswrapper[4606]: E0123 17:24:53.367301 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:25:03 crc kubenswrapper[4606]: I0123 17:25:03.096776 4606 generic.go:334] "Generic (PLEG): container finished" podID="c1b5063e-ec21-4a81-b0d1-af09f9ac9019" containerID="3f01e0b39f4d1a2ecef8b04519506adc6fab2fc426c966d3a24352f577706e0e" exitCode=0 Jan 23 17:25:03 crc kubenswrapper[4606]: I0123 17:25:03.096886 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twr6s" event={"ID":"c1b5063e-ec21-4a81-b0d1-af09f9ac9019","Type":"ContainerDied","Data":"3f01e0b39f4d1a2ecef8b04519506adc6fab2fc426c966d3a24352f577706e0e"} Jan 23 17:25:04 crc kubenswrapper[4606]: I0123 17:25:04.496561 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twr6s" Jan 23 17:25:04 crc kubenswrapper[4606]: I0123 17:25:04.670302 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1b5063e-ec21-4a81-b0d1-af09f9ac9019-inventory\") pod \"c1b5063e-ec21-4a81-b0d1-af09f9ac9019\" (UID: \"c1b5063e-ec21-4a81-b0d1-af09f9ac9019\") " Jan 23 17:25:04 crc kubenswrapper[4606]: I0123 17:25:04.670726 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1b5063e-ec21-4a81-b0d1-af09f9ac9019-ssh-key-openstack-edpm-ipam\") pod \"c1b5063e-ec21-4a81-b0d1-af09f9ac9019\" (UID: \"c1b5063e-ec21-4a81-b0d1-af09f9ac9019\") " Jan 23 17:25:04 crc kubenswrapper[4606]: I0123 17:25:04.670970 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jq686\" (UniqueName: \"kubernetes.io/projected/c1b5063e-ec21-4a81-b0d1-af09f9ac9019-kube-api-access-jq686\") pod \"c1b5063e-ec21-4a81-b0d1-af09f9ac9019\" (UID: \"c1b5063e-ec21-4a81-b0d1-af09f9ac9019\") " Jan 23 17:25:04 crc kubenswrapper[4606]: I0123 17:25:04.676157 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1b5063e-ec21-4a81-b0d1-af09f9ac9019-kube-api-access-jq686" (OuterVolumeSpecName: "kube-api-access-jq686") pod "c1b5063e-ec21-4a81-b0d1-af09f9ac9019" (UID: "c1b5063e-ec21-4a81-b0d1-af09f9ac9019"). InnerVolumeSpecName "kube-api-access-jq686". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:25:04 crc kubenswrapper[4606]: I0123 17:25:04.697963 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1b5063e-ec21-4a81-b0d1-af09f9ac9019-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c1b5063e-ec21-4a81-b0d1-af09f9ac9019" (UID: "c1b5063e-ec21-4a81-b0d1-af09f9ac9019"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:25:04 crc kubenswrapper[4606]: I0123 17:25:04.719176 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1b5063e-ec21-4a81-b0d1-af09f9ac9019-inventory" (OuterVolumeSpecName: "inventory") pod "c1b5063e-ec21-4a81-b0d1-af09f9ac9019" (UID: "c1b5063e-ec21-4a81-b0d1-af09f9ac9019"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:25:04 crc kubenswrapper[4606]: I0123 17:25:04.773991 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jq686\" (UniqueName: \"kubernetes.io/projected/c1b5063e-ec21-4a81-b0d1-af09f9ac9019-kube-api-access-jq686\") on node \"crc\" DevicePath \"\"" Jan 23 17:25:04 crc kubenswrapper[4606]: I0123 17:25:04.774215 4606 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1b5063e-ec21-4a81-b0d1-af09f9ac9019-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 17:25:04 crc kubenswrapper[4606]: I0123 17:25:04.774285 4606 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1b5063e-ec21-4a81-b0d1-af09f9ac9019-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.117887 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twr6s" event={"ID":"c1b5063e-ec21-4a81-b0d1-af09f9ac9019","Type":"ContainerDied","Data":"758ce9985d4491c309a9d61dacb8dd3dd3b701586295a5e1408e401ee82047ce"} Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.118255 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="758ce9985d4491c309a9d61dacb8dd3dd3b701586295a5e1408e401ee82047ce" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.117987 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-twr6s" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.223009 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv"] Jan 23 17:25:05 crc kubenswrapper[4606]: E0123 17:25:05.223454 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1b5063e-ec21-4a81-b0d1-af09f9ac9019" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.223478 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1b5063e-ec21-4a81-b0d1-af09f9ac9019" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.223697 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1b5063e-ec21-4a81-b0d1-af09f9ac9019" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.224504 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.231903 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.232197 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.232975 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.232985 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv"] Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.233294 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.233526 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.233759 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8tp68" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.233915 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.234004 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.384449 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3aa4047d-cfe6-4348-8a11-c3688c8596b4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.384488 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.384511 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmsmg\" (UniqueName: \"kubernetes.io/projected/3aa4047d-cfe6-4348-8a11-c3688c8596b4-kube-api-access-jmsmg\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.384704 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-libvirt-combined-ca-bundle\") 
pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.384750 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3aa4047d-cfe6-4348-8a11-c3688c8596b4-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.384778 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.384818 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.384845 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3aa4047d-cfe6-4348-8a11-c3688c8596b4-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.384920 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.384952 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.384991 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3aa4047d-cfe6-4348-8a11-c3688c8596b4-openstack-edpm-ipam-telemetry-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.385047 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.385251 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.385317 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.486698 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.486783 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3aa4047d-cfe6-4348-8a11-c3688c8596b4-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.486839 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.486880 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: 
I0123 17:25:05.486910 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3aa4047d-cfe6-4348-8a11-c3688c8596b4-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.486951 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.486973 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.487003 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3aa4047d-cfe6-4348-8a11-c3688c8596b4-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.487045 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.487072 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.487093 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.487158 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/3aa4047d-cfe6-4348-8a11-c3688c8596b4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.487182 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.487209 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmsmg\" (UniqueName: \"kubernetes.io/projected/3aa4047d-cfe6-4348-8a11-c3688c8596b4-kube-api-access-jmsmg\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.492568 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3aa4047d-cfe6-4348-8a11-c3688c8596b4-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.493813 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.496038 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3aa4047d-cfe6-4348-8a11-c3688c8596b4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.496703 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.496824 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.497149 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.497505 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3aa4047d-cfe6-4348-8a11-c3688c8596b4-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.497660 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.498958 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.502176 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.507716 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.508741 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.509770 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/3aa4047d-cfe6-4348-8a11-c3688c8596b4-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.525338 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmsmg\" (UniqueName: \"kubernetes.io/projected/3aa4047d-cfe6-4348-8a11-c3688c8596b4-kube-api-access-jmsmg\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:05 crc kubenswrapper[4606]: I0123 17:25:05.542209 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:06 crc kubenswrapper[4606]: I0123 17:25:06.056081 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv"] Jan 23 17:25:06 crc kubenswrapper[4606]: I0123 17:25:06.126547 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" event={"ID":"3aa4047d-cfe6-4348-8a11-c3688c8596b4","Type":"ContainerStarted","Data":"3b7b974434b47425cac27ec16f70a4c4ebcf2e7da7f2e1d23ee62656fc1aae28"} Jan 23 17:25:08 crc kubenswrapper[4606]: I0123 17:25:08.149190 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" event={"ID":"3aa4047d-cfe6-4348-8a11-c3688c8596b4","Type":"ContainerStarted","Data":"2c6df33c616a12249034c8223e1c0ccba567e73d29b8d725fbd9e88723b7ac10"} Jan 23 17:25:08 crc kubenswrapper[4606]: I0123 17:25:08.176509 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" podStartSLOduration=1.6752994989999999 podStartE2EDuration="3.176488097s" podCreationTimestamp="2026-01-23 17:25:05 +0000 UTC" firstStartedPulling="2026-01-23 17:25:06.065249701 +0000 UTC m=+1940.065442107" lastFinishedPulling="2026-01-23 17:25:07.566438299 +0000 UTC m=+1941.566630705" observedRunningTime="2026-01-23 17:25:08.168574904 +0000 UTC m=+1942.168767320" watchObservedRunningTime="2026-01-23 17:25:08.176488097 +0000 UTC m=+1942.176680503" Jan 23 17:25:08 crc kubenswrapper[4606]: I0123 17:25:08.372188 4606 scope.go:117] "RemoveContainer" containerID="7b51d444863ebb98e0cb740e7f913e5f2b808074bf1adeb1ef35356e37050c12" Jan 23 17:25:08 crc kubenswrapper[4606]: E0123 17:25:08.372698 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:25:21 crc kubenswrapper[4606]: I0123 17:25:21.366641 4606 scope.go:117] "RemoveContainer" containerID="7b51d444863ebb98e0cb740e7f913e5f2b808074bf1adeb1ef35356e37050c12" Jan 23 17:25:21 crc kubenswrapper[4606]: E0123 17:25:21.367755 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:25:35 crc kubenswrapper[4606]: I0123 17:25:35.365231 4606 scope.go:117] "RemoveContainer" containerID="7b51d444863ebb98e0cb740e7f913e5f2b808074bf1adeb1ef35356e37050c12" Jan 23 17:25:35 crc kubenswrapper[4606]: E0123 17:25:35.366152 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:25:50 crc kubenswrapper[4606]: I0123 17:25:50.366078 4606 scope.go:117] "RemoveContainer" containerID="7b51d444863ebb98e0cb740e7f913e5f2b808074bf1adeb1ef35356e37050c12" Jan 23 17:25:50 crc kubenswrapper[4606]: E0123 17:25:50.367911 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:25:51 crc kubenswrapper[4606]: I0123 17:25:51.527785 4606 generic.go:334] "Generic (PLEG): container finished" podID="3aa4047d-cfe6-4348-8a11-c3688c8596b4" containerID="2c6df33c616a12249034c8223e1c0ccba567e73d29b8d725fbd9e88723b7ac10" exitCode=0 Jan 23 17:25:51 crc kubenswrapper[4606]: I0123 17:25:51.528010 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" event={"ID":"3aa4047d-cfe6-4348-8a11-c3688c8596b4","Type":"ContainerDied","Data":"2c6df33c616a12249034c8223e1c0ccba567e73d29b8d725fbd9e88723b7ac10"} Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.019535 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.073663 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-bootstrap-combined-ca-bundle\") pod \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.073712 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-libvirt-combined-ca-bundle\") pod \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.073766 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3aa4047d-cfe6-4348-8a11-c3688c8596b4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.073794 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmsmg\" (UniqueName: \"kubernetes.io/projected/3aa4047d-cfe6-4348-8a11-c3688c8596b4-kube-api-access-jmsmg\") pod \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.073833 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3aa4047d-cfe6-4348-8a11-c3688c8596b4-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.073874 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-ovn-combined-ca-bundle\") pod \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.073910 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3aa4047d-cfe6-4348-8a11-c3688c8596b4-openstack-edpm-ipam-ovn-default-certs-0\") pod \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.073935 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3aa4047d-cfe6-4348-8a11-c3688c8596b4-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.074003 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-ssh-key-openstack-edpm-ipam\") pod 
\"3aa4047d-cfe6-4348-8a11-c3688c8596b4\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.074055 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-inventory\") pod \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.074093 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-telemetry-combined-ca-bundle\") pod \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.074134 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-repo-setup-combined-ca-bundle\") pod \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.074175 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-neutron-metadata-combined-ca-bundle\") pod \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.074236 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-nova-combined-ca-bundle\") pod \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\" (UID: \"3aa4047d-cfe6-4348-8a11-c3688c8596b4\") " Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.080827 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "3aa4047d-cfe6-4348-8a11-c3688c8596b4" (UID: "3aa4047d-cfe6-4348-8a11-c3688c8596b4"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.084750 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "3aa4047d-cfe6-4348-8a11-c3688c8596b4" (UID: "3aa4047d-cfe6-4348-8a11-c3688c8596b4"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.085724 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aa4047d-cfe6-4348-8a11-c3688c8596b4-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "3aa4047d-cfe6-4348-8a11-c3688c8596b4" (UID: "3aa4047d-cfe6-4348-8a11-c3688c8596b4"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.086209 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "3aa4047d-cfe6-4348-8a11-c3688c8596b4" (UID: "3aa4047d-cfe6-4348-8a11-c3688c8596b4"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.087051 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aa4047d-cfe6-4348-8a11-c3688c8596b4-kube-api-access-jmsmg" (OuterVolumeSpecName: "kube-api-access-jmsmg") pod "3aa4047d-cfe6-4348-8a11-c3688c8596b4" (UID: "3aa4047d-cfe6-4348-8a11-c3688c8596b4"). InnerVolumeSpecName "kube-api-access-jmsmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.087442 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aa4047d-cfe6-4348-8a11-c3688c8596b4-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "3aa4047d-cfe6-4348-8a11-c3688c8596b4" (UID: "3aa4047d-cfe6-4348-8a11-c3688c8596b4"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.088497 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "3aa4047d-cfe6-4348-8a11-c3688c8596b4" (UID: "3aa4047d-cfe6-4348-8a11-c3688c8596b4"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.090149 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aa4047d-cfe6-4348-8a11-c3688c8596b4-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "3aa4047d-cfe6-4348-8a11-c3688c8596b4" (UID: "3aa4047d-cfe6-4348-8a11-c3688c8596b4"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.090346 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aa4047d-cfe6-4348-8a11-c3688c8596b4-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "3aa4047d-cfe6-4348-8a11-c3688c8596b4" (UID: "3aa4047d-cfe6-4348-8a11-c3688c8596b4"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.090810 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "3aa4047d-cfe6-4348-8a11-c3688c8596b4" (UID: "3aa4047d-cfe6-4348-8a11-c3688c8596b4"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.104545 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "3aa4047d-cfe6-4348-8a11-c3688c8596b4" (UID: "3aa4047d-cfe6-4348-8a11-c3688c8596b4"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.106449 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "3aa4047d-cfe6-4348-8a11-c3688c8596b4" (UID: "3aa4047d-cfe6-4348-8a11-c3688c8596b4"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.112028 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3aa4047d-cfe6-4348-8a11-c3688c8596b4" (UID: "3aa4047d-cfe6-4348-8a11-c3688c8596b4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.130265 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-inventory" (OuterVolumeSpecName: "inventory") pod "3aa4047d-cfe6-4348-8a11-c3688c8596b4" (UID: "3aa4047d-cfe6-4348-8a11-c3688c8596b4"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.177399 4606 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.177465 4606 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3aa4047d-cfe6-4348-8a11-c3688c8596b4-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.177490 4606 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3aa4047d-cfe6-4348-8a11-c3688c8596b4-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.177509 4606 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.177521 4606 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.177532 4606 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.177543 4606 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.177556 4606 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.177567 4606 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.177578 4606 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.177609 4606 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa4047d-cfe6-4348-8a11-c3688c8596b4-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.177623 4606 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/3aa4047d-cfe6-4348-8a11-c3688c8596b4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.177635 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmsmg\" (UniqueName: \"kubernetes.io/projected/3aa4047d-cfe6-4348-8a11-c3688c8596b4-kube-api-access-jmsmg\") on node \"crc\" DevicePath \"\"" Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.177647 4606 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3aa4047d-cfe6-4348-8a11-c3688c8596b4-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.545445 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" event={"ID":"3aa4047d-cfe6-4348-8a11-c3688c8596b4","Type":"ContainerDied","Data":"3b7b974434b47425cac27ec16f70a4c4ebcf2e7da7f2e1d23ee62656fc1aae28"} Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.545758 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b7b974434b47425cac27ec16f70a4c4ebcf2e7da7f2e1d23ee62656fc1aae28" Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.545497 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv" Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.667703 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9969"] Jan 23 17:25:53 crc kubenswrapper[4606]: E0123 17:25:53.668113 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aa4047d-cfe6-4348-8a11-c3688c8596b4" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.668127 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aa4047d-cfe6-4348-8a11-c3688c8596b4" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.668286 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aa4047d-cfe6-4348-8a11-c3688c8596b4" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.669094 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9969"
Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.678680 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.679031 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config"
Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.679218 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8tp68"
Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.679397 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.679679 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.689764 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9969"]
Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.790251 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/638511df-28ff-4ac7-9e76-7b3ca9afb52c-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-c9969\" (UID: \"638511df-28ff-4ac7-9e76-7b3ca9afb52c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9969"
Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.790319 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/638511df-28ff-4ac7-9e76-7b3ca9afb52c-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-c9969\" (UID: \"638511df-28ff-4ac7-9e76-7b3ca9afb52c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9969"
Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.790350 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/638511df-28ff-4ac7-9e76-7b3ca9afb52c-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-c9969\" (UID: \"638511df-28ff-4ac7-9e76-7b3ca9afb52c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9969"
Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.790390 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbc4c\" (UniqueName: \"kubernetes.io/projected/638511df-28ff-4ac7-9e76-7b3ca9afb52c-kube-api-access-rbc4c\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-c9969\" (UID: \"638511df-28ff-4ac7-9e76-7b3ca9afb52c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9969"
Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.790422 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/638511df-28ff-4ac7-9e76-7b3ca9afb52c-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-c9969\" (UID: \"638511df-28ff-4ac7-9e76-7b3ca9afb52c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9969"
Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.891663 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbc4c\" (UniqueName: \"kubernetes.io/projected/638511df-28ff-4ac7-9e76-7b3ca9afb52c-kube-api-access-rbc4c\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-c9969\" (UID: \"638511df-28ff-4ac7-9e76-7b3ca9afb52c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9969"
Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.891770 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/638511df-28ff-4ac7-9e76-7b3ca9afb52c-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-c9969\" (UID: \"638511df-28ff-4ac7-9e76-7b3ca9afb52c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9969"
Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.891906 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/638511df-28ff-4ac7-9e76-7b3ca9afb52c-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-c9969\" (UID: \"638511df-28ff-4ac7-9e76-7b3ca9afb52c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9969"
Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.892004 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/638511df-28ff-4ac7-9e76-7b3ca9afb52c-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-c9969\" (UID: \"638511df-28ff-4ac7-9e76-7b3ca9afb52c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9969"
Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.892092 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/638511df-28ff-4ac7-9e76-7b3ca9afb52c-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-c9969\" (UID: \"638511df-28ff-4ac7-9e76-7b3ca9afb52c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9969"
Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.893080 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/638511df-28ff-4ac7-9e76-7b3ca9afb52c-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-c9969\" (UID: \"638511df-28ff-4ac7-9e76-7b3ca9afb52c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9969"
Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.902380 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/638511df-28ff-4ac7-9e76-7b3ca9afb52c-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-c9969\" (UID: \"638511df-28ff-4ac7-9e76-7b3ca9afb52c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9969"
Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.902807 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/638511df-28ff-4ac7-9e76-7b3ca9afb52c-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-c9969\" (UID: \"638511df-28ff-4ac7-9e76-7b3ca9afb52c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9969"
Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.903111 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/638511df-28ff-4ac7-9e76-7b3ca9afb52c-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-c9969\" (UID: \"638511df-28ff-4ac7-9e76-7b3ca9afb52c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9969"
\"kubernetes.io/secret/638511df-28ff-4ac7-9e76-7b3ca9afb52c-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-c9969\" (UID: \"638511df-28ff-4ac7-9e76-7b3ca9afb52c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9969" Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.914199 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbc4c\" (UniqueName: \"kubernetes.io/projected/638511df-28ff-4ac7-9e76-7b3ca9afb52c-kube-api-access-rbc4c\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-c9969\" (UID: \"638511df-28ff-4ac7-9e76-7b3ca9afb52c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9969" Jan 23 17:25:53 crc kubenswrapper[4606]: I0123 17:25:53.988854 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9969" Jan 23 17:25:54 crc kubenswrapper[4606]: I0123 17:25:54.542700 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9969"] Jan 23 17:25:55 crc kubenswrapper[4606]: I0123 17:25:55.562349 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9969" event={"ID":"638511df-28ff-4ac7-9e76-7b3ca9afb52c","Type":"ContainerStarted","Data":"1e6fc73138bc64ad71d8bc606399d4bf4dbd641589c2c2cda77a49da0e9be668"} Jan 23 17:25:55 crc kubenswrapper[4606]: I0123 17:25:55.562710 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9969" event={"ID":"638511df-28ff-4ac7-9e76-7b3ca9afb52c","Type":"ContainerStarted","Data":"02238a330749c39bc5118530e85b7b506499e5f83167898ec285fcef7522722f"} Jan 23 17:25:55 crc kubenswrapper[4606]: I0123 17:25:55.590890 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9969" podStartSLOduration=2.178972973 podStartE2EDuration="2.59086571s" podCreationTimestamp="2026-01-23 17:25:53 +0000 UTC" firstStartedPulling="2026-01-23 17:25:54.55683605 +0000 UTC m=+1988.557028456" lastFinishedPulling="2026-01-23 17:25:54.968728787 +0000 UTC m=+1988.968921193" observedRunningTime="2026-01-23 17:25:55.577274209 +0000 UTC m=+1989.577466625" watchObservedRunningTime="2026-01-23 17:25:55.59086571 +0000 UTC m=+1989.591058136" Jan 23 17:26:05 crc kubenswrapper[4606]: I0123 17:26:05.366380 4606 scope.go:117] "RemoveContainer" containerID="7b51d444863ebb98e0cb740e7f913e5f2b808074bf1adeb1ef35356e37050c12" Jan 23 17:26:06 crc kubenswrapper[4606]: I0123 17:26:06.662940 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerStarted","Data":"ca4287756c52165100cd31da3f9d80fd94d7fd43de64342f939f15aaf307b775"} Jan 23 17:27:13 crc kubenswrapper[4606]: I0123 17:27:13.260808 4606 generic.go:334] "Generic (PLEG): container finished" podID="638511df-28ff-4ac7-9e76-7b3ca9afb52c" containerID="1e6fc73138bc64ad71d8bc606399d4bf4dbd641589c2c2cda77a49da0e9be668" exitCode=0 Jan 23 17:27:13 crc kubenswrapper[4606]: I0123 17:27:13.260876 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9969" event={"ID":"638511df-28ff-4ac7-9e76-7b3ca9afb52c","Type":"ContainerDied","Data":"1e6fc73138bc64ad71d8bc606399d4bf4dbd641589c2c2cda77a49da0e9be668"} Jan 23 17:27:14 crc kubenswrapper[4606]: I0123 17:27:14.751087 4606 util.go:48] "No 
Jan 23 17:27:14 crc kubenswrapper[4606]: I0123 17:27:14.918546 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/638511df-28ff-4ac7-9e76-7b3ca9afb52c-ssh-key-openstack-edpm-ipam\") pod \"638511df-28ff-4ac7-9e76-7b3ca9afb52c\" (UID: \"638511df-28ff-4ac7-9e76-7b3ca9afb52c\") "
Jan 23 17:27:14 crc kubenswrapper[4606]: I0123 17:27:14.919548 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbc4c\" (UniqueName: \"kubernetes.io/projected/638511df-28ff-4ac7-9e76-7b3ca9afb52c-kube-api-access-rbc4c\") pod \"638511df-28ff-4ac7-9e76-7b3ca9afb52c\" (UID: \"638511df-28ff-4ac7-9e76-7b3ca9afb52c\") "
Jan 23 17:27:14 crc kubenswrapper[4606]: I0123 17:27:14.919717 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/638511df-28ff-4ac7-9e76-7b3ca9afb52c-inventory\") pod \"638511df-28ff-4ac7-9e76-7b3ca9afb52c\" (UID: \"638511df-28ff-4ac7-9e76-7b3ca9afb52c\") "
Jan 23 17:27:14 crc kubenswrapper[4606]: I0123 17:27:14.919821 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/638511df-28ff-4ac7-9e76-7b3ca9afb52c-ovncontroller-config-0\") pod \"638511df-28ff-4ac7-9e76-7b3ca9afb52c\" (UID: \"638511df-28ff-4ac7-9e76-7b3ca9afb52c\") "
Jan 23 17:27:14 crc kubenswrapper[4606]: I0123 17:27:14.919866 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/638511df-28ff-4ac7-9e76-7b3ca9afb52c-ovn-combined-ca-bundle\") pod \"638511df-28ff-4ac7-9e76-7b3ca9afb52c\" (UID: \"638511df-28ff-4ac7-9e76-7b3ca9afb52c\") "
Jan 23 17:27:14 crc kubenswrapper[4606]: I0123 17:27:14.925941 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/638511df-28ff-4ac7-9e76-7b3ca9afb52c-kube-api-access-rbc4c" (OuterVolumeSpecName: "kube-api-access-rbc4c") pod "638511df-28ff-4ac7-9e76-7b3ca9afb52c" (UID: "638511df-28ff-4ac7-9e76-7b3ca9afb52c"). InnerVolumeSpecName "kube-api-access-rbc4c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 17:27:14 crc kubenswrapper[4606]: I0123 17:27:14.927166 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/638511df-28ff-4ac7-9e76-7b3ca9afb52c-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "638511df-28ff-4ac7-9e76-7b3ca9afb52c" (UID: "638511df-28ff-4ac7-9e76-7b3ca9afb52c"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 17:27:14 crc kubenswrapper[4606]: I0123 17:27:14.955736 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/638511df-28ff-4ac7-9e76-7b3ca9afb52c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "638511df-28ff-4ac7-9e76-7b3ca9afb52c" (UID: "638511df-28ff-4ac7-9e76-7b3ca9afb52c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 17:27:14 crc kubenswrapper[4606]: I0123 17:27:14.955822 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/638511df-28ff-4ac7-9e76-7b3ca9afb52c-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "638511df-28ff-4ac7-9e76-7b3ca9afb52c" (UID: "638511df-28ff-4ac7-9e76-7b3ca9afb52c"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 17:27:14 crc kubenswrapper[4606]: I0123 17:27:14.960032 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/638511df-28ff-4ac7-9e76-7b3ca9afb52c-inventory" (OuterVolumeSpecName: "inventory") pod "638511df-28ff-4ac7-9e76-7b3ca9afb52c" (UID: "638511df-28ff-4ac7-9e76-7b3ca9afb52c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 17:27:15 crc kubenswrapper[4606]: I0123 17:27:15.022503 4606 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/638511df-28ff-4ac7-9e76-7b3ca9afb52c-ovncontroller-config-0\") on node \"crc\" DevicePath \"\""
Jan 23 17:27:15 crc kubenswrapper[4606]: I0123 17:27:15.022766 4606 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/638511df-28ff-4ac7-9e76-7b3ca9afb52c-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 17:27:15 crc kubenswrapper[4606]: I0123 17:27:15.022826 4606 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/638511df-28ff-4ac7-9e76-7b3ca9afb52c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 23 17:27:15 crc kubenswrapper[4606]: I0123 17:27:15.022878 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbc4c\" (UniqueName: \"kubernetes.io/projected/638511df-28ff-4ac7-9e76-7b3ca9afb52c-kube-api-access-rbc4c\") on node \"crc\" DevicePath \"\""
Jan 23 17:27:15 crc kubenswrapper[4606]: I0123 17:27:15.022929 4606 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/638511df-28ff-4ac7-9e76-7b3ca9afb52c-inventory\") on node \"crc\" DevicePath \"\""
Jan 23 17:27:15 crc kubenswrapper[4606]: I0123 17:27:15.279178 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9969" event={"ID":"638511df-28ff-4ac7-9e76-7b3ca9afb52c","Type":"ContainerDied","Data":"02238a330749c39bc5118530e85b7b506499e5f83167898ec285fcef7522722f"}
Jan 23 17:27:15 crc kubenswrapper[4606]: I0123 17:27:15.279213 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02238a330749c39bc5118530e85b7b506499e5f83167898ec285fcef7522722f"
Jan 23 17:27:15 crc kubenswrapper[4606]: I0123 17:27:15.279526 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-c9969"
Jan 23 17:27:15 crc kubenswrapper[4606]: I0123 17:27:15.472471 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf"]
Jan 23 17:27:15 crc kubenswrapper[4606]: E0123 17:27:15.472957 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="638511df-28ff-4ac7-9e76-7b3ca9afb52c" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Jan 23 17:27:15 crc kubenswrapper[4606]: I0123 17:27:15.472976 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="638511df-28ff-4ac7-9e76-7b3ca9afb52c" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Jan 23 17:27:15 crc kubenswrapper[4606]: I0123 17:27:15.473191 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="638511df-28ff-4ac7-9e76-7b3ca9afb52c" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Jan 23 17:27:15 crc kubenswrapper[4606]: I0123 17:27:15.473890 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf"
Jan 23 17:27:15 crc kubenswrapper[4606]: I0123 17:27:15.491781 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 23 17:27:15 crc kubenswrapper[4606]: I0123 17:27:15.492044 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 23 17:27:15 crc kubenswrapper[4606]: I0123 17:27:15.492078 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config"
Jan 23 17:27:15 crc kubenswrapper[4606]: I0123 17:27:15.494075 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 23 17:27:15 crc kubenswrapper[4606]: I0123 17:27:15.494687 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config"
Jan 23 17:27:15 crc kubenswrapper[4606]: I0123 17:27:15.495417 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8tp68"
Jan 23 17:27:15 crc kubenswrapper[4606]: I0123 17:27:15.500704 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf"]
Jan 23 17:27:15 crc kubenswrapper[4606]: I0123 17:27:15.636039 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0b22e008-ecea-4e70-81cf-a7165562fd7e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf\" (UID: \"0b22e008-ecea-4e70-81cf-a7165562fd7e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf"
Jan 23 17:27:15 crc kubenswrapper[4606]: I0123 17:27:15.636419 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b22e008-ecea-4e70-81cf-a7165562fd7e-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf\" (UID: \"0b22e008-ecea-4e70-81cf-a7165562fd7e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf"
Jan 23 17:27:15 crc kubenswrapper[4606]: I0123 17:27:15.636556 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0b22e008-ecea-4e70-81cf-a7165562fd7e-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf\" (UID: \"0b22e008-ecea-4e70-81cf-a7165562fd7e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf"
Jan 23 17:27:15 crc kubenswrapper[4606]: I0123 17:27:15.636738 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b22e008-ecea-4e70-81cf-a7165562fd7e-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf\" (UID: \"0b22e008-ecea-4e70-81cf-a7165562fd7e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf"
Jan 23 17:27:15 crc kubenswrapper[4606]: I0123 17:27:15.636908 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blwzm\" (UniqueName: \"kubernetes.io/projected/0b22e008-ecea-4e70-81cf-a7165562fd7e-kube-api-access-blwzm\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf\" (UID: \"0b22e008-ecea-4e70-81cf-a7165562fd7e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf"
Jan 23 17:27:15 crc kubenswrapper[4606]: I0123 17:27:15.637080 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b22e008-ecea-4e70-81cf-a7165562fd7e-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf\" (UID: \"0b22e008-ecea-4e70-81cf-a7165562fd7e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf"
Jan 23 17:27:15 crc kubenswrapper[4606]: I0123 17:27:15.738921 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0b22e008-ecea-4e70-81cf-a7165562fd7e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf\" (UID: \"0b22e008-ecea-4e70-81cf-a7165562fd7e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf"
Jan 23 17:27:15 crc kubenswrapper[4606]: I0123 17:27:15.738984 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b22e008-ecea-4e70-81cf-a7165562fd7e-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf\" (UID: \"0b22e008-ecea-4e70-81cf-a7165562fd7e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf"
Jan 23 17:27:15 crc kubenswrapper[4606]: I0123 17:27:15.739025 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0b22e008-ecea-4e70-81cf-a7165562fd7e-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf\" (UID: \"0b22e008-ecea-4e70-81cf-a7165562fd7e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf"
Jan 23 17:27:15 crc kubenswrapper[4606]: I0123 17:27:15.739069 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b22e008-ecea-4e70-81cf-a7165562fd7e-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf\" (UID: \"0b22e008-ecea-4e70-81cf-a7165562fd7e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf"
Jan 23 17:27:15 crc kubenswrapper[4606]: I0123 17:27:15.739115 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blwzm\" (UniqueName: \"kubernetes.io/projected/0b22e008-ecea-4e70-81cf-a7165562fd7e-kube-api-access-blwzm\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf\" (UID: \"0b22e008-ecea-4e70-81cf-a7165562fd7e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf"
Jan 23 17:27:15 crc kubenswrapper[4606]: I0123 17:27:15.739151 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b22e008-ecea-4e70-81cf-a7165562fd7e-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf\" (UID: \"0b22e008-ecea-4e70-81cf-a7165562fd7e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf"
Jan 23 17:27:15 crc kubenswrapper[4606]: I0123 17:27:15.743481 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b22e008-ecea-4e70-81cf-a7165562fd7e-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf\" (UID: \"0b22e008-ecea-4e70-81cf-a7165562fd7e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf"
Jan 23 17:27:15 crc kubenswrapper[4606]: I0123 17:27:15.743488 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0b22e008-ecea-4e70-81cf-a7165562fd7e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf\" (UID: \"0b22e008-ecea-4e70-81cf-a7165562fd7e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf"
Jan 23 17:27:15 crc kubenswrapper[4606]: I0123 17:27:15.743659 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b22e008-ecea-4e70-81cf-a7165562fd7e-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf\" (UID: \"0b22e008-ecea-4e70-81cf-a7165562fd7e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf"
Jan 23 17:27:15 crc kubenswrapper[4606]: I0123 17:27:15.744270 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0b22e008-ecea-4e70-81cf-a7165562fd7e-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf\" (UID: \"0b22e008-ecea-4e70-81cf-a7165562fd7e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf"
Jan 23 17:27:15 crc kubenswrapper[4606]: I0123 17:27:15.744420 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b22e008-ecea-4e70-81cf-a7165562fd7e-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf\" (UID: \"0b22e008-ecea-4e70-81cf-a7165562fd7e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf"
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf" Jan 23 17:27:15 crc kubenswrapper[4606]: I0123 17:27:15.757973 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blwzm\" (UniqueName: \"kubernetes.io/projected/0b22e008-ecea-4e70-81cf-a7165562fd7e-kube-api-access-blwzm\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf\" (UID: \"0b22e008-ecea-4e70-81cf-a7165562fd7e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf" Jan 23 17:27:15 crc kubenswrapper[4606]: I0123 17:27:15.792088 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf" Jan 23 17:27:16 crc kubenswrapper[4606]: I0123 17:27:16.307247 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf"] Jan 23 17:27:17 crc kubenswrapper[4606]: I0123 17:27:17.296569 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf" event={"ID":"0b22e008-ecea-4e70-81cf-a7165562fd7e","Type":"ContainerStarted","Data":"c2b16d0cf287f166fb5b69e3385a76120c13d94d3025079af1eb3d72d95acb4f"} Jan 23 17:27:17 crc kubenswrapper[4606]: I0123 17:27:17.297211 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf" event={"ID":"0b22e008-ecea-4e70-81cf-a7165562fd7e","Type":"ContainerStarted","Data":"6cbefed4974f11dabfa2b24a8b6e7dc1f96d8f3b04895def80c3a5ab684b2e57"} Jan 23 17:27:17 crc kubenswrapper[4606]: I0123 17:27:17.319543 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf" podStartSLOduration=1.824770655 podStartE2EDuration="2.319524786s" podCreationTimestamp="2026-01-23 17:27:15 +0000 UTC" firstStartedPulling="2026-01-23 17:27:16.316926933 +0000 UTC m=+2070.317119339" lastFinishedPulling="2026-01-23 17:27:16.811681064 +0000 UTC m=+2070.811873470" observedRunningTime="2026-01-23 17:27:17.312726049 +0000 UTC m=+2071.312918475" watchObservedRunningTime="2026-01-23 17:27:17.319524786 +0000 UTC m=+2071.319717192" Jan 23 17:28:16 crc kubenswrapper[4606]: I0123 17:28:16.805525 4606 generic.go:334] "Generic (PLEG): container finished" podID="0b22e008-ecea-4e70-81cf-a7165562fd7e" containerID="c2b16d0cf287f166fb5b69e3385a76120c13d94d3025079af1eb3d72d95acb4f" exitCode=0 Jan 23 17:28:16 crc kubenswrapper[4606]: I0123 17:28:16.805630 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf" event={"ID":"0b22e008-ecea-4e70-81cf-a7165562fd7e","Type":"ContainerDied","Data":"c2b16d0cf287f166fb5b69e3385a76120c13d94d3025079af1eb3d72d95acb4f"} Jan 23 17:28:18 crc kubenswrapper[4606]: I0123 17:28:18.273913 4606 util.go:48] "No ready sandbox for pod can be found. 
Jan 23 17:28:18 crc kubenswrapper[4606]: I0123 17:28:18.463132 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0b22e008-ecea-4e70-81cf-a7165562fd7e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"0b22e008-ecea-4e70-81cf-a7165562fd7e\" (UID: \"0b22e008-ecea-4e70-81cf-a7165562fd7e\") "
Jan 23 17:28:18 crc kubenswrapper[4606]: I0123 17:28:18.463644 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b22e008-ecea-4e70-81cf-a7165562fd7e-ssh-key-openstack-edpm-ipam\") pod \"0b22e008-ecea-4e70-81cf-a7165562fd7e\" (UID: \"0b22e008-ecea-4e70-81cf-a7165562fd7e\") "
Jan 23 17:28:18 crc kubenswrapper[4606]: I0123 17:28:18.463880 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0b22e008-ecea-4e70-81cf-a7165562fd7e-nova-metadata-neutron-config-0\") pod \"0b22e008-ecea-4e70-81cf-a7165562fd7e\" (UID: \"0b22e008-ecea-4e70-81cf-a7165562fd7e\") "
Jan 23 17:28:18 crc kubenswrapper[4606]: I0123 17:28:18.464485 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b22e008-ecea-4e70-81cf-a7165562fd7e-neutron-metadata-combined-ca-bundle\") pod \"0b22e008-ecea-4e70-81cf-a7165562fd7e\" (UID: \"0b22e008-ecea-4e70-81cf-a7165562fd7e\") "
Jan 23 17:28:18 crc kubenswrapper[4606]: I0123 17:28:18.464661 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blwzm\" (UniqueName: \"kubernetes.io/projected/0b22e008-ecea-4e70-81cf-a7165562fd7e-kube-api-access-blwzm\") pod \"0b22e008-ecea-4e70-81cf-a7165562fd7e\" (UID: \"0b22e008-ecea-4e70-81cf-a7165562fd7e\") "
Jan 23 17:28:18 crc kubenswrapper[4606]: I0123 17:28:18.464823 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b22e008-ecea-4e70-81cf-a7165562fd7e-inventory\") pod \"0b22e008-ecea-4e70-81cf-a7165562fd7e\" (UID: \"0b22e008-ecea-4e70-81cf-a7165562fd7e\") "
Jan 23 17:28:18 crc kubenswrapper[4606]: I0123 17:28:18.470942 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b22e008-ecea-4e70-81cf-a7165562fd7e-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "0b22e008-ecea-4e70-81cf-a7165562fd7e" (UID: "0b22e008-ecea-4e70-81cf-a7165562fd7e"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 17:28:18 crc kubenswrapper[4606]: I0123 17:28:18.472959 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b22e008-ecea-4e70-81cf-a7165562fd7e-kube-api-access-blwzm" (OuterVolumeSpecName: "kube-api-access-blwzm") pod "0b22e008-ecea-4e70-81cf-a7165562fd7e" (UID: "0b22e008-ecea-4e70-81cf-a7165562fd7e"). InnerVolumeSpecName "kube-api-access-blwzm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 17:28:18 crc kubenswrapper[4606]: I0123 17:28:18.492415 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b22e008-ecea-4e70-81cf-a7165562fd7e-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "0b22e008-ecea-4e70-81cf-a7165562fd7e" (UID: "0b22e008-ecea-4e70-81cf-a7165562fd7e"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 17:28:18 crc kubenswrapper[4606]: I0123 17:28:18.493333 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b22e008-ecea-4e70-81cf-a7165562fd7e-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "0b22e008-ecea-4e70-81cf-a7165562fd7e" (UID: "0b22e008-ecea-4e70-81cf-a7165562fd7e"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 17:28:18 crc kubenswrapper[4606]: I0123 17:28:18.494075 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b22e008-ecea-4e70-81cf-a7165562fd7e-inventory" (OuterVolumeSpecName: "inventory") pod "0b22e008-ecea-4e70-81cf-a7165562fd7e" (UID: "0b22e008-ecea-4e70-81cf-a7165562fd7e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 17:28:18 crc kubenswrapper[4606]: I0123 17:28:18.497060 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b22e008-ecea-4e70-81cf-a7165562fd7e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0b22e008-ecea-4e70-81cf-a7165562fd7e" (UID: "0b22e008-ecea-4e70-81cf-a7165562fd7e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 17:28:18 crc kubenswrapper[4606]: I0123 17:28:18.566481 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blwzm\" (UniqueName: \"kubernetes.io/projected/0b22e008-ecea-4e70-81cf-a7165562fd7e-kube-api-access-blwzm\") on node \"crc\" DevicePath \"\""
Jan 23 17:28:18 crc kubenswrapper[4606]: I0123 17:28:18.566518 4606 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b22e008-ecea-4e70-81cf-a7165562fd7e-inventory\") on node \"crc\" DevicePath \"\""
Jan 23 17:28:18 crc kubenswrapper[4606]: I0123 17:28:18.566529 4606 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0b22e008-ecea-4e70-81cf-a7165562fd7e-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\""
Jan 23 17:28:18 crc kubenswrapper[4606]: I0123 17:28:18.566548 4606 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b22e008-ecea-4e70-81cf-a7165562fd7e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 23 17:28:18 crc kubenswrapper[4606]: I0123 17:28:18.566558 4606 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0b22e008-ecea-4e70-81cf-a7165562fd7e-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\""
Jan 23 17:28:18 crc kubenswrapper[4606]: I0123 17:28:18.566569 4606 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b22e008-ecea-4e70-81cf-a7165562fd7e-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 17:28:18 crc kubenswrapper[4606]: I0123 17:28:18.824391 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf" event={"ID":"0b22e008-ecea-4e70-81cf-a7165562fd7e","Type":"ContainerDied","Data":"6cbefed4974f11dabfa2b24a8b6e7dc1f96d8f3b04895def80c3a5ab684b2e57"}
Jan 23 17:28:18 crc kubenswrapper[4606]: I0123 17:28:18.824704 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cbefed4974f11dabfa2b24a8b6e7dc1f96d8f3b04895def80c3a5ab684b2e57"
Jan 23 17:28:18 crc kubenswrapper[4606]: I0123 17:28:18.824462 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf"
Jan 23 17:28:18 crc kubenswrapper[4606]: I0123 17:28:18.911751 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm"]
Jan 23 17:28:18 crc kubenswrapper[4606]: E0123 17:28:18.912147 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b22e008-ecea-4e70-81cf-a7165562fd7e" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Jan 23 17:28:18 crc kubenswrapper[4606]: I0123 17:28:18.912170 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b22e008-ecea-4e70-81cf-a7165562fd7e" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Jan 23 17:28:18 crc kubenswrapper[4606]: I0123 17:28:18.912414 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b22e008-ecea-4e70-81cf-a7165562fd7e" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Jan 23 17:28:18 crc kubenswrapper[4606]: I0123 17:28:18.914185 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm"
Jan 23 17:28:18 crc kubenswrapper[4606]: I0123 17:28:18.917471 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret"
Jan 23 17:28:18 crc kubenswrapper[4606]: I0123 17:28:18.917694 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 23 17:28:18 crc kubenswrapper[4606]: I0123 17:28:18.917827 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 23 17:28:18 crc kubenswrapper[4606]: I0123 17:28:18.920174 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 23 17:28:18 crc kubenswrapper[4606]: I0123 17:28:18.921853 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8tp68"
Jan 23 17:28:18 crc kubenswrapper[4606]: I0123 17:28:18.939549 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm"]
Jan 23 17:28:19 crc kubenswrapper[4606]: I0123 17:28:19.076424 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4698\" (UniqueName: \"kubernetes.io/projected/59e9c932-3b43-4d74-b776-c34cfe6b2a78-kube-api-access-r4698\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm\" (UID: \"59e9c932-3b43-4d74-b776-c34cfe6b2a78\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm"
Jan 23 17:28:19 crc kubenswrapper[4606]: I0123 17:28:19.076501 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59e9c932-3b43-4d74-b776-c34cfe6b2a78-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm\" (UID: \"59e9c932-3b43-4d74-b776-c34cfe6b2a78\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm"
Jan 23 17:28:19 crc kubenswrapper[4606]: I0123 17:28:19.076606 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59e9c932-3b43-4d74-b776-c34cfe6b2a78-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm\" (UID: \"59e9c932-3b43-4d74-b776-c34cfe6b2a78\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm"
Jan 23 17:28:19 crc kubenswrapper[4606]: I0123 17:28:19.076644 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/59e9c932-3b43-4d74-b776-c34cfe6b2a78-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm\" (UID: \"59e9c932-3b43-4d74-b776-c34cfe6b2a78\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm"
Jan 23 17:28:19 crc kubenswrapper[4606]: I0123 17:28:19.076704 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/59e9c932-3b43-4d74-b776-c34cfe6b2a78-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm\" (UID: \"59e9c932-3b43-4d74-b776-c34cfe6b2a78\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm"
Jan 23 17:28:19 crc kubenswrapper[4606]: I0123 17:28:19.178971 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4698\" (UniqueName: \"kubernetes.io/projected/59e9c932-3b43-4d74-b776-c34cfe6b2a78-kube-api-access-r4698\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm\" (UID: \"59e9c932-3b43-4d74-b776-c34cfe6b2a78\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm"
Jan 23 17:28:19 crc kubenswrapper[4606]: I0123 17:28:19.179027 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59e9c932-3b43-4d74-b776-c34cfe6b2a78-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm\" (UID: \"59e9c932-3b43-4d74-b776-c34cfe6b2a78\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm"
Jan 23 17:28:19 crc kubenswrapper[4606]: I0123 17:28:19.179066 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59e9c932-3b43-4d74-b776-c34cfe6b2a78-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm\" (UID: \"59e9c932-3b43-4d74-b776-c34cfe6b2a78\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm"
Jan 23 17:28:19 crc kubenswrapper[4606]: I0123 17:28:19.179093 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/59e9c932-3b43-4d74-b776-c34cfe6b2a78-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm\" (UID: \"59e9c932-3b43-4d74-b776-c34cfe6b2a78\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm"
Jan 23 17:28:19 crc kubenswrapper[4606]: I0123 17:28:19.179135 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/59e9c932-3b43-4d74-b776-c34cfe6b2a78-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm\" (UID: \"59e9c932-3b43-4d74-b776-c34cfe6b2a78\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm"
Jan 23 17:28:19 crc kubenswrapper[4606]: I0123 17:28:19.183189 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/59e9c932-3b43-4d74-b776-c34cfe6b2a78-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm\" (UID: \"59e9c932-3b43-4d74-b776-c34cfe6b2a78\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm"
Jan 23 17:28:19 crc kubenswrapper[4606]: I0123 17:28:19.183338 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59e9c932-3b43-4d74-b776-c34cfe6b2a78-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm\" (UID: \"59e9c932-3b43-4d74-b776-c34cfe6b2a78\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm"
Jan 23 17:28:19 crc kubenswrapper[4606]: I0123 17:28:19.183745 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/59e9c932-3b43-4d74-b776-c34cfe6b2a78-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm\" (UID: \"59e9c932-3b43-4d74-b776-c34cfe6b2a78\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm"
Jan 23 17:28:19 crc kubenswrapper[4606]: I0123 17:28:19.184643 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59e9c932-3b43-4d74-b776-c34cfe6b2a78-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm\" (UID: \"59e9c932-3b43-4d74-b776-c34cfe6b2a78\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm"
Jan 23 17:28:19 crc kubenswrapper[4606]: I0123 17:28:19.200060 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4698\" (UniqueName: \"kubernetes.io/projected/59e9c932-3b43-4d74-b776-c34cfe6b2a78-kube-api-access-r4698\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm\" (UID: \"59e9c932-3b43-4d74-b776-c34cfe6b2a78\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm"
Jan 23 17:28:19 crc kubenswrapper[4606]: I0123 17:28:19.237989 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm"
Jan 23 17:28:19 crc kubenswrapper[4606]: W0123 17:28:19.763265 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59e9c932_3b43_4d74_b776_c34cfe6b2a78.slice/crio-43800d8432c2c9e084e2b4d4b1ce99345ee3fe502678cda4d837835e86da9d72 WatchSource:0}: Error finding container 43800d8432c2c9e084e2b4d4b1ce99345ee3fe502678cda4d837835e86da9d72: Status 404 returned error can't find the container with id 43800d8432c2c9e084e2b4d4b1ce99345ee3fe502678cda4d837835e86da9d72
Jan 23 17:28:19 crc kubenswrapper[4606]: I0123 17:28:19.763848 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm"]
Jan 23 17:28:19 crc kubenswrapper[4606]: I0123 17:28:19.835384 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm" event={"ID":"59e9c932-3b43-4d74-b776-c34cfe6b2a78","Type":"ContainerStarted","Data":"43800d8432c2c9e084e2b4d4b1ce99345ee3fe502678cda4d837835e86da9d72"}
Jan 23 17:28:21 crc kubenswrapper[4606]: I0123 17:28:21.862157 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm" event={"ID":"59e9c932-3b43-4d74-b776-c34cfe6b2a78","Type":"ContainerStarted","Data":"12f5a36e0531b01d08a8929980f3acbffd5fd6e8f5358506a3416c3344077045"}
Jan 23 17:28:21 crc kubenswrapper[4606]: I0123 17:28:21.885309 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm" podStartSLOduration=2.911065307 podStartE2EDuration="3.885293418s" podCreationTimestamp="2026-01-23 17:28:18 +0000 UTC" firstStartedPulling="2026-01-23 17:28:19.76875 +0000 UTC m=+2133.768942406" lastFinishedPulling="2026-01-23 17:28:20.742978111 +0000 UTC m=+2134.743170517" observedRunningTime="2026-01-23 17:28:21.883769321 +0000 UTC m=+2135.883961727" watchObservedRunningTime="2026-01-23 17:28:21.885293418 +0000 UTC m=+2135.885485824"
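The 404 warning above is a benign race: cAdvisor tries to watch the container's cgroup while CRI-O is still setting it up. The systemd slice name in the event embeds the pod UID with dashes replaced by underscores; a small Go illustration of that mapping, derived from the path in the log rather than from kubelet source:

package main

import (
	"fmt"
	"strings"
)

// besteffortSlice reproduces the cgroup slice path seen in the warning
// above for a BestEffort pod: dashes in the pod UID become underscores.
func besteffortSlice(podUID string) string {
	escaped := strings.ReplaceAll(podUID, "-", "_")
	return fmt.Sprintf("/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod%s.slice", escaped)
}

func main() {
	fmt.Println(besteffortSlice("59e9c932-3b43-4d74-b776-c34cfe6b2a78"))
	// /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59e9c932_3b43_4d74_b776_c34cfe6b2a78.slice
}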
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:29:29 crc kubenswrapper[4606]: I0123 17:29:29.933977 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:29:29 crc kubenswrapper[4606]: I0123 17:29:29.934765 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:29:29 crc kubenswrapper[4606]: I0123 17:29:29.934839 4606 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" Jan 23 17:29:29 crc kubenswrapper[4606]: I0123 17:29:29.935900 4606 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ca4287756c52165100cd31da3f9d80fd94d7fd43de64342f939f15aaf307b775"} pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 17:29:29 crc kubenswrapper[4606]: I0123 17:29:29.935984 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" containerID="cri-o://ca4287756c52165100cd31da3f9d80fd94d7fd43de64342f939f15aaf307b775" gracePeriod=600 Jan 23 17:29:30 crc kubenswrapper[4606]: I0123 17:29:30.521652 4606 generic.go:334] "Generic (PLEG): container finished" podID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerID="ca4287756c52165100cd31da3f9d80fd94d7fd43de64342f939f15aaf307b775" exitCode=0 Jan 23 17:29:30 crc kubenswrapper[4606]: I0123 17:29:30.522193 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerDied","Data":"ca4287756c52165100cd31da3f9d80fd94d7fd43de64342f939f15aaf307b775"} Jan 23 17:29:30 crc kubenswrapper[4606]: I0123 17:29:30.522282 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerStarted","Data":"ae71f5feb61ac0fdc01d5557db5e04028246a8e932ab5e9739ba55dfcdd38258"} Jan 23 17:29:30 crc kubenswrapper[4606]: I0123 17:29:30.522355 4606 scope.go:117] "RemoveContainer" containerID="7b51d444863ebb98e0cb740e7f913e5f2b808074bf1adeb1ef35356e37050c12" Jan 23 17:29:48 crc kubenswrapper[4606]: I0123 17:29:48.407481 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hvgkd"] Jan 23 17:29:48 crc kubenswrapper[4606]: I0123 17:29:48.410854 4606 util.go:30] "No sandbox for pod can be found. 
Jan 23 17:29:48 crc kubenswrapper[4606]: I0123 17:29:48.419666 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hvgkd"]
Jan 23 17:29:48 crc kubenswrapper[4606]: I0123 17:29:48.603674 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ec13bfe-bdce-495b-8be6-9269d02f058b-catalog-content\") pod \"redhat-operators-hvgkd\" (UID: \"8ec13bfe-bdce-495b-8be6-9269d02f058b\") " pod="openshift-marketplace/redhat-operators-hvgkd"
Jan 23 17:29:48 crc kubenswrapper[4606]: I0123 17:29:48.604672 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fwb8\" (UniqueName: \"kubernetes.io/projected/8ec13bfe-bdce-495b-8be6-9269d02f058b-kube-api-access-5fwb8\") pod \"redhat-operators-hvgkd\" (UID: \"8ec13bfe-bdce-495b-8be6-9269d02f058b\") " pod="openshift-marketplace/redhat-operators-hvgkd"
Jan 23 17:29:48 crc kubenswrapper[4606]: I0123 17:29:48.604728 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ec13bfe-bdce-495b-8be6-9269d02f058b-utilities\") pod \"redhat-operators-hvgkd\" (UID: \"8ec13bfe-bdce-495b-8be6-9269d02f058b\") " pod="openshift-marketplace/redhat-operators-hvgkd"
Jan 23 17:29:48 crc kubenswrapper[4606]: I0123 17:29:48.706895 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fwb8\" (UniqueName: \"kubernetes.io/projected/8ec13bfe-bdce-495b-8be6-9269d02f058b-kube-api-access-5fwb8\") pod \"redhat-operators-hvgkd\" (UID: \"8ec13bfe-bdce-495b-8be6-9269d02f058b\") " pod="openshift-marketplace/redhat-operators-hvgkd"
Jan 23 17:29:48 crc kubenswrapper[4606]: I0123 17:29:48.707256 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ec13bfe-bdce-495b-8be6-9269d02f058b-utilities\") pod \"redhat-operators-hvgkd\" (UID: \"8ec13bfe-bdce-495b-8be6-9269d02f058b\") " pod="openshift-marketplace/redhat-operators-hvgkd"
Jan 23 17:29:48 crc kubenswrapper[4606]: I0123 17:29:48.707300 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ec13bfe-bdce-495b-8be6-9269d02f058b-catalog-content\") pod \"redhat-operators-hvgkd\" (UID: \"8ec13bfe-bdce-495b-8be6-9269d02f058b\") " pod="openshift-marketplace/redhat-operators-hvgkd"
Jan 23 17:29:48 crc kubenswrapper[4606]: I0123 17:29:48.707862 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ec13bfe-bdce-495b-8be6-9269d02f058b-utilities\") pod \"redhat-operators-hvgkd\" (UID: \"8ec13bfe-bdce-495b-8be6-9269d02f058b\") " pod="openshift-marketplace/redhat-operators-hvgkd"
Jan 23 17:29:48 crc kubenswrapper[4606]: I0123 17:29:48.707901 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ec13bfe-bdce-495b-8be6-9269d02f058b-catalog-content\") pod \"redhat-operators-hvgkd\" (UID: \"8ec13bfe-bdce-495b-8be6-9269d02f058b\") " pod="openshift-marketplace/redhat-operators-hvgkd"
Jan 23 17:29:48 crc kubenswrapper[4606]: I0123 17:29:48.725828 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fwb8\" (UniqueName: \"kubernetes.io/projected/8ec13bfe-bdce-495b-8be6-9269d02f058b-kube-api-access-5fwb8\") pod \"redhat-operators-hvgkd\" (UID: \"8ec13bfe-bdce-495b-8be6-9269d02f058b\") " pod="openshift-marketplace/redhat-operators-hvgkd"
Jan 23 17:29:48 crc kubenswrapper[4606]: I0123 17:29:48.736851 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hvgkd"
Jan 23 17:29:49 crc kubenswrapper[4606]: I0123 17:29:49.309056 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hvgkd"]
Jan 23 17:29:49 crc kubenswrapper[4606]: I0123 17:29:49.689988 4606 generic.go:334] "Generic (PLEG): container finished" podID="8ec13bfe-bdce-495b-8be6-9269d02f058b" containerID="4929362670ff859f1305171581b6c8f327a2ad3201fba6d86db4a445d14860ce" exitCode=0
Jan 23 17:29:49 crc kubenswrapper[4606]: I0123 17:29:49.690185 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hvgkd" event={"ID":"8ec13bfe-bdce-495b-8be6-9269d02f058b","Type":"ContainerDied","Data":"4929362670ff859f1305171581b6c8f327a2ad3201fba6d86db4a445d14860ce"}
Jan 23 17:29:49 crc kubenswrapper[4606]: I0123 17:29:49.690218 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hvgkd" event={"ID":"8ec13bfe-bdce-495b-8be6-9269d02f058b","Type":"ContainerStarted","Data":"e2eba12eb6c45f299ed23886c1c14219a9665048ad3af709cf00e08f79ca5932"}
Jan 23 17:29:49 crc kubenswrapper[4606]: I0123 17:29:49.692381 4606 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 23 17:29:50 crc kubenswrapper[4606]: I0123 17:29:50.703844 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hvgkd" event={"ID":"8ec13bfe-bdce-495b-8be6-9269d02f058b","Type":"ContainerStarted","Data":"6bd41e3848a02dcd168cc24c5a2496cbaedc8e12934ff9e52e20f3e1720db312"}
Jan 23 17:29:53 crc kubenswrapper[4606]: I0123 17:29:53.979090 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-95dw5"]
Jan 23 17:29:53 crc kubenswrapper[4606]: I0123 17:29:53.983016 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-95dw5"
Jan 23 17:29:54 crc kubenswrapper[4606]: I0123 17:29:54.013743 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/916df325-045b-4100-bea6-698b1fd85522-utilities\") pod \"community-operators-95dw5\" (UID: \"916df325-045b-4100-bea6-698b1fd85522\") " pod="openshift-marketplace/community-operators-95dw5"
Jan 23 17:29:54 crc kubenswrapper[4606]: I0123 17:29:54.013825 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djcvl\" (UniqueName: \"kubernetes.io/projected/916df325-045b-4100-bea6-698b1fd85522-kube-api-access-djcvl\") pod \"community-operators-95dw5\" (UID: \"916df325-045b-4100-bea6-698b1fd85522\") " pod="openshift-marketplace/community-operators-95dw5"
Jan 23 17:29:54 crc kubenswrapper[4606]: I0123 17:29:54.013858 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/916df325-045b-4100-bea6-698b1fd85522-catalog-content\") pod \"community-operators-95dw5\" (UID: \"916df325-045b-4100-bea6-698b1fd85522\") " pod="openshift-marketplace/community-operators-95dw5"
Jan 23 17:29:54 crc kubenswrapper[4606]: I0123 17:29:54.018131 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-95dw5"]
Jan 23 17:29:54 crc kubenswrapper[4606]: I0123 17:29:54.115121 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/916df325-045b-4100-bea6-698b1fd85522-utilities\") pod \"community-operators-95dw5\" (UID: \"916df325-045b-4100-bea6-698b1fd85522\") " pod="openshift-marketplace/community-operators-95dw5"
Jan 23 17:29:54 crc kubenswrapper[4606]: I0123 17:29:54.115193 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djcvl\" (UniqueName: \"kubernetes.io/projected/916df325-045b-4100-bea6-698b1fd85522-kube-api-access-djcvl\") pod \"community-operators-95dw5\" (UID: \"916df325-045b-4100-bea6-698b1fd85522\") " pod="openshift-marketplace/community-operators-95dw5"
Jan 23 17:29:54 crc kubenswrapper[4606]: I0123 17:29:54.115217 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/916df325-045b-4100-bea6-698b1fd85522-catalog-content\") pod \"community-operators-95dw5\" (UID: \"916df325-045b-4100-bea6-698b1fd85522\") " pod="openshift-marketplace/community-operators-95dw5"
Jan 23 17:29:54 crc kubenswrapper[4606]: I0123 17:29:54.115697 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/916df325-045b-4100-bea6-698b1fd85522-utilities\") pod \"community-operators-95dw5\" (UID: \"916df325-045b-4100-bea6-698b1fd85522\") " pod="openshift-marketplace/community-operators-95dw5"
Jan 23 17:29:54 crc kubenswrapper[4606]: I0123 17:29:54.115765 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/916df325-045b-4100-bea6-698b1fd85522-catalog-content\") pod \"community-operators-95dw5\" (UID: \"916df325-045b-4100-bea6-698b1fd85522\") " pod="openshift-marketplace/community-operators-95dw5"
Jan 23 17:29:54 crc kubenswrapper[4606]: I0123 17:29:54.617162 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-northd-0" podUID="965dca6f-f73d-4ee0-a511-91793448cc67" containerName="ovn-northd" probeResult="failure" output="command timed out"
Jan 23 17:29:54 crc kubenswrapper[4606]: I0123 17:29:54.617533 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="965dca6f-f73d-4ee0-a511-91793448cc67" containerName="ovn-northd" probeResult="failure" output="command timed out"
Jan 23 17:29:55 crc kubenswrapper[4606]: I0123 17:29:55.324095 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djcvl\" (UniqueName: \"kubernetes.io/projected/916df325-045b-4100-bea6-698b1fd85522-kube-api-access-djcvl\") pod \"community-operators-95dw5\" (UID: \"916df325-045b-4100-bea6-698b1fd85522\") " pod="openshift-marketplace/community-operators-95dw5"
Jan 23 17:29:55 crc kubenswrapper[4606]: I0123 17:29:55.511429 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-95dw5"
Jan 23 17:29:56 crc kubenswrapper[4606]: I0123 17:29:56.017012 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-95dw5"]
Jan 23 17:29:56 crc kubenswrapper[4606]: I0123 17:29:56.757970 4606 generic.go:334] "Generic (PLEG): container finished" podID="8ec13bfe-bdce-495b-8be6-9269d02f058b" containerID="6bd41e3848a02dcd168cc24c5a2496cbaedc8e12934ff9e52e20f3e1720db312" exitCode=0
Jan 23 17:29:56 crc kubenswrapper[4606]: I0123 17:29:56.758058 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hvgkd" event={"ID":"8ec13bfe-bdce-495b-8be6-9269d02f058b","Type":"ContainerDied","Data":"6bd41e3848a02dcd168cc24c5a2496cbaedc8e12934ff9e52e20f3e1720db312"}
Jan 23 17:29:56 crc kubenswrapper[4606]: I0123 17:29:56.759631 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-95dw5" event={"ID":"916df325-045b-4100-bea6-698b1fd85522","Type":"ContainerStarted","Data":"652210faa23d430f23059632c28837106d42a80742119ebdac5a3cc380210fc2"}
Jan 23 17:29:58 crc kubenswrapper[4606]: I0123 17:29:58.779205 4606 generic.go:334] "Generic (PLEG): container finished" podID="916df325-045b-4100-bea6-698b1fd85522" containerID="9b89d13104ef10674c1ff232ea74fa5c2810e53c72b169c8bab0a4e2d403f33b" exitCode=0
Jan 23 17:29:58 crc kubenswrapper[4606]: I0123 17:29:58.779272 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-95dw5" event={"ID":"916df325-045b-4100-bea6-698b1fd85522","Type":"ContainerDied","Data":"9b89d13104ef10674c1ff232ea74fa5c2810e53c72b169c8bab0a4e2d403f33b"}
Jan 23 17:29:59 crc kubenswrapper[4606]: I0123 17:29:59.794390 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hvgkd" event={"ID":"8ec13bfe-bdce-495b-8be6-9269d02f058b","Type":"ContainerStarted","Data":"1e7f24a63bce23dc885ead5faf43e9f19f6bd211ef5d0acb3f8484dc49afcbbf"}
Jan 23 17:29:59 crc kubenswrapper[4606]: I0123 17:29:59.818626 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hvgkd" podStartSLOduration=2.595375951 podStartE2EDuration="11.818606183s" podCreationTimestamp="2026-01-23 17:29:48 +0000 UTC" firstStartedPulling="2026-01-23 17:29:49.692177896 +0000 UTC m=+2223.692370302" lastFinishedPulling="2026-01-23 17:29:58.915408128 +0000 UTC m=+2232.915600534" observedRunningTime="2026-01-23 17:29:59.813663033 +0000 UTC m=+2233.813855439"
watchObservedRunningTime="2026-01-23 17:29:59.818606183 +0000 UTC m=+2233.818798609" Jan 23 17:30:00 crc kubenswrapper[4606]: I0123 17:30:00.159432 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486490-xmm4j"] Jan 23 17:30:00 crc kubenswrapper[4606]: I0123 17:30:00.160871 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486490-xmm4j" Jan 23 17:30:00 crc kubenswrapper[4606]: I0123 17:30:00.163115 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 17:30:00 crc kubenswrapper[4606]: I0123 17:30:00.163670 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 17:30:00 crc kubenswrapper[4606]: I0123 17:30:00.179397 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486490-xmm4j"] Jan 23 17:30:00 crc kubenswrapper[4606]: I0123 17:30:00.249913 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6242\" (UniqueName: \"kubernetes.io/projected/16c44897-72c2-4d28-929c-1505fe503c88-kube-api-access-b6242\") pod \"collect-profiles-29486490-xmm4j\" (UID: \"16c44897-72c2-4d28-929c-1505fe503c88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486490-xmm4j" Jan 23 17:30:00 crc kubenswrapper[4606]: I0123 17:30:00.249959 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16c44897-72c2-4d28-929c-1505fe503c88-config-volume\") pod \"collect-profiles-29486490-xmm4j\" (UID: \"16c44897-72c2-4d28-929c-1505fe503c88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486490-xmm4j" Jan 23 17:30:00 crc kubenswrapper[4606]: I0123 17:30:00.250366 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16c44897-72c2-4d28-929c-1505fe503c88-secret-volume\") pod \"collect-profiles-29486490-xmm4j\" (UID: \"16c44897-72c2-4d28-929c-1505fe503c88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486490-xmm4j" Jan 23 17:30:00 crc kubenswrapper[4606]: I0123 17:30:00.352886 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16c44897-72c2-4d28-929c-1505fe503c88-secret-volume\") pod \"collect-profiles-29486490-xmm4j\" (UID: \"16c44897-72c2-4d28-929c-1505fe503c88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486490-xmm4j" Jan 23 17:30:00 crc kubenswrapper[4606]: I0123 17:30:00.353023 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6242\" (UniqueName: \"kubernetes.io/projected/16c44897-72c2-4d28-929c-1505fe503c88-kube-api-access-b6242\") pod \"collect-profiles-29486490-xmm4j\" (UID: \"16c44897-72c2-4d28-929c-1505fe503c88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486490-xmm4j" Jan 23 17:30:00 crc kubenswrapper[4606]: I0123 17:30:00.353053 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16c44897-72c2-4d28-929c-1505fe503c88-config-volume\") pod 
\"collect-profiles-29486490-xmm4j\" (UID: \"16c44897-72c2-4d28-929c-1505fe503c88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486490-xmm4j" Jan 23 17:30:00 crc kubenswrapper[4606]: I0123 17:30:00.353983 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16c44897-72c2-4d28-929c-1505fe503c88-config-volume\") pod \"collect-profiles-29486490-xmm4j\" (UID: \"16c44897-72c2-4d28-929c-1505fe503c88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486490-xmm4j" Jan 23 17:30:00 crc kubenswrapper[4606]: I0123 17:30:00.358479 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16c44897-72c2-4d28-929c-1505fe503c88-secret-volume\") pod \"collect-profiles-29486490-xmm4j\" (UID: \"16c44897-72c2-4d28-929c-1505fe503c88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486490-xmm4j" Jan 23 17:30:00 crc kubenswrapper[4606]: I0123 17:30:00.381775 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6242\" (UniqueName: \"kubernetes.io/projected/16c44897-72c2-4d28-929c-1505fe503c88-kube-api-access-b6242\") pod \"collect-profiles-29486490-xmm4j\" (UID: \"16c44897-72c2-4d28-929c-1505fe503c88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486490-xmm4j" Jan 23 17:30:00 crc kubenswrapper[4606]: I0123 17:30:00.482071 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486490-xmm4j" Jan 23 17:30:01 crc kubenswrapper[4606]: I0123 17:30:01.639729 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486490-xmm4j"] Jan 23 17:30:01 crc kubenswrapper[4606]: I0123 17:30:01.815558 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486490-xmm4j" event={"ID":"16c44897-72c2-4d28-929c-1505fe503c88","Type":"ContainerStarted","Data":"a2ebd0377570be2f5c8cdaf00f6caa629d49e437893c6b7aca473550bfdb9fae"} Jan 23 17:30:01 crc kubenswrapper[4606]: I0123 17:30:01.818131 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-95dw5" event={"ID":"916df325-045b-4100-bea6-698b1fd85522","Type":"ContainerStarted","Data":"f78912019b18f7a90d54be3cc9230413c6248a6429d83134bba89feb81bfc977"} Jan 23 17:30:02 crc kubenswrapper[4606]: I0123 17:30:02.842530 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486490-xmm4j" event={"ID":"16c44897-72c2-4d28-929c-1505fe503c88","Type":"ContainerStarted","Data":"92bb27ba4501158f9e1576e1259319a843cee1252f32a1814c18f942b05140ba"} Jan 23 17:30:02 crc kubenswrapper[4606]: I0123 17:30:02.863477 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29486490-xmm4j" podStartSLOduration=2.863454905 podStartE2EDuration="2.863454905s" podCreationTimestamp="2026-01-23 17:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:30:02.853981835 +0000 UTC m=+2236.854174261" watchObservedRunningTime="2026-01-23 17:30:02.863454905 +0000 UTC m=+2236.863647321" Jan 23 17:30:03 crc kubenswrapper[4606]: I0123 17:30:03.850646 4606 generic.go:334] "Generic (PLEG): container finished" 
podID="16c44897-72c2-4d28-929c-1505fe503c88" containerID="92bb27ba4501158f9e1576e1259319a843cee1252f32a1814c18f942b05140ba" exitCode=0 Jan 23 17:30:03 crc kubenswrapper[4606]: I0123 17:30:03.850689 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486490-xmm4j" event={"ID":"16c44897-72c2-4d28-929c-1505fe503c88","Type":"ContainerDied","Data":"92bb27ba4501158f9e1576e1259319a843cee1252f32a1814c18f942b05140ba"} Jan 23 17:30:04 crc kubenswrapper[4606]: I0123 17:30:04.861930 4606 generic.go:334] "Generic (PLEG): container finished" podID="916df325-045b-4100-bea6-698b1fd85522" containerID="f78912019b18f7a90d54be3cc9230413c6248a6429d83134bba89feb81bfc977" exitCode=0 Jan 23 17:30:04 crc kubenswrapper[4606]: I0123 17:30:04.862379 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-95dw5" event={"ID":"916df325-045b-4100-bea6-698b1fd85522","Type":"ContainerDied","Data":"f78912019b18f7a90d54be3cc9230413c6248a6429d83134bba89feb81bfc977"} Jan 23 17:30:05 crc kubenswrapper[4606]: I0123 17:30:05.256544 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486490-xmm4j" Jan 23 17:30:05 crc kubenswrapper[4606]: I0123 17:30:05.355666 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6242\" (UniqueName: \"kubernetes.io/projected/16c44897-72c2-4d28-929c-1505fe503c88-kube-api-access-b6242\") pod \"16c44897-72c2-4d28-929c-1505fe503c88\" (UID: \"16c44897-72c2-4d28-929c-1505fe503c88\") " Jan 23 17:30:05 crc kubenswrapper[4606]: I0123 17:30:05.355788 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16c44897-72c2-4d28-929c-1505fe503c88-secret-volume\") pod \"16c44897-72c2-4d28-929c-1505fe503c88\" (UID: \"16c44897-72c2-4d28-929c-1505fe503c88\") " Jan 23 17:30:05 crc kubenswrapper[4606]: I0123 17:30:05.355957 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16c44897-72c2-4d28-929c-1505fe503c88-config-volume\") pod \"16c44897-72c2-4d28-929c-1505fe503c88\" (UID: \"16c44897-72c2-4d28-929c-1505fe503c88\") " Jan 23 17:30:05 crc kubenswrapper[4606]: I0123 17:30:05.357106 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16c44897-72c2-4d28-929c-1505fe503c88-config-volume" (OuterVolumeSpecName: "config-volume") pod "16c44897-72c2-4d28-929c-1505fe503c88" (UID: "16c44897-72c2-4d28-929c-1505fe503c88"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:30:05 crc kubenswrapper[4606]: I0123 17:30:05.362712 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16c44897-72c2-4d28-929c-1505fe503c88-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "16c44897-72c2-4d28-929c-1505fe503c88" (UID: "16c44897-72c2-4d28-929c-1505fe503c88"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:30:05 crc kubenswrapper[4606]: I0123 17:30:05.366647 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16c44897-72c2-4d28-929c-1505fe503c88-kube-api-access-b6242" (OuterVolumeSpecName: "kube-api-access-b6242") pod "16c44897-72c2-4d28-929c-1505fe503c88" (UID: "16c44897-72c2-4d28-929c-1505fe503c88"). InnerVolumeSpecName "kube-api-access-b6242". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:30:05 crc kubenswrapper[4606]: I0123 17:30:05.458547 4606 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16c44897-72c2-4d28-929c-1505fe503c88-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 17:30:05 crc kubenswrapper[4606]: I0123 17:30:05.458906 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6242\" (UniqueName: \"kubernetes.io/projected/16c44897-72c2-4d28-929c-1505fe503c88-kube-api-access-b6242\") on node \"crc\" DevicePath \"\"" Jan 23 17:30:05 crc kubenswrapper[4606]: I0123 17:30:05.458968 4606 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16c44897-72c2-4d28-929c-1505fe503c88-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 17:30:05 crc kubenswrapper[4606]: I0123 17:30:05.872388 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486490-xmm4j" event={"ID":"16c44897-72c2-4d28-929c-1505fe503c88","Type":"ContainerDied","Data":"a2ebd0377570be2f5c8cdaf00f6caa629d49e437893c6b7aca473550bfdb9fae"} Jan 23 17:30:05 crc kubenswrapper[4606]: I0123 17:30:05.872423 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2ebd0377570be2f5c8cdaf00f6caa629d49e437893c6b7aca473550bfdb9fae" Jan 23 17:30:05 crc kubenswrapper[4606]: I0123 17:30:05.872455 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486490-xmm4j" Jan 23 17:30:05 crc kubenswrapper[4606]: I0123 17:30:05.941331 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486445-rsdm2"] Jan 23 17:30:05 crc kubenswrapper[4606]: I0123 17:30:05.949111 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486445-rsdm2"] Jan 23 17:30:06 crc kubenswrapper[4606]: I0123 17:30:06.406774 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="559500a2-4399-42d0-832b-719429dd6fd9" path="/var/lib/kubelet/pods/559500a2-4399-42d0-832b-719429dd6fd9/volumes" Jan 23 17:30:08 crc kubenswrapper[4606]: I0123 17:30:08.737055 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hvgkd" Jan 23 17:30:08 crc kubenswrapper[4606]: I0123 17:30:08.737807 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hvgkd" Jan 23 17:30:08 crc kubenswrapper[4606]: I0123 17:30:08.906043 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-95dw5" event={"ID":"916df325-045b-4100-bea6-698b1fd85522","Type":"ContainerStarted","Data":"380684d4aae0e1bae009aea68a093778593b5aa43a811d8d22625bf8a3a9d8e9"} Jan 23 17:30:08 crc kubenswrapper[4606]: I0123 17:30:08.942799 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-95dw5" podStartSLOduration=6.820363786 podStartE2EDuration="15.942774468s" podCreationTimestamp="2026-01-23 17:29:53 +0000 UTC" firstStartedPulling="2026-01-23 17:29:58.914538568 +0000 UTC m=+2232.914730974" lastFinishedPulling="2026-01-23 17:30:08.03694925 +0000 UTC m=+2242.037141656" observedRunningTime="2026-01-23 17:30:08.930033268 +0000 UTC m=+2242.930225704" watchObservedRunningTime="2026-01-23 17:30:08.942774468 +0000 UTC m=+2242.942966894" Jan 23 17:30:09 crc kubenswrapper[4606]: I0123 17:30:09.796014 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hvgkd" podUID="8ec13bfe-bdce-495b-8be6-9269d02f058b" containerName="registry-server" probeResult="failure" output=< Jan 23 17:30:09 crc kubenswrapper[4606]: timeout: failed to connect service ":50051" within 1s Jan 23 17:30:09 crc kubenswrapper[4606]: > Jan 23 17:30:15 crc kubenswrapper[4606]: I0123 17:30:15.512370 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-95dw5" Jan 23 17:30:15 crc kubenswrapper[4606]: I0123 17:30:15.513897 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-95dw5" Jan 23 17:30:15 crc kubenswrapper[4606]: I0123 17:30:15.598288 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-95dw5" Jan 23 17:30:16 crc kubenswrapper[4606]: I0123 17:30:16.017036 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-95dw5" Jan 23 17:30:16 crc kubenswrapper[4606]: I0123 17:30:16.066602 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-95dw5"] Jan 23 17:30:17 crc kubenswrapper[4606]: I0123 17:30:17.976174 4606 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-marketplace/community-operators-95dw5" podUID="916df325-045b-4100-bea6-698b1fd85522" containerName="registry-server" containerID="cri-o://380684d4aae0e1bae009aea68a093778593b5aa43a811d8d22625bf8a3a9d8e9" gracePeriod=2 Jan 23 17:30:18 crc kubenswrapper[4606]: I0123 17:30:18.440797 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-95dw5" Jan 23 17:30:18 crc kubenswrapper[4606]: I0123 17:30:18.532558 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/916df325-045b-4100-bea6-698b1fd85522-utilities\") pod \"916df325-045b-4100-bea6-698b1fd85522\" (UID: \"916df325-045b-4100-bea6-698b1fd85522\") " Jan 23 17:30:18 crc kubenswrapper[4606]: I0123 17:30:18.532639 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djcvl\" (UniqueName: \"kubernetes.io/projected/916df325-045b-4100-bea6-698b1fd85522-kube-api-access-djcvl\") pod \"916df325-045b-4100-bea6-698b1fd85522\" (UID: \"916df325-045b-4100-bea6-698b1fd85522\") " Jan 23 17:30:18 crc kubenswrapper[4606]: I0123 17:30:18.532729 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/916df325-045b-4100-bea6-698b1fd85522-catalog-content\") pod \"916df325-045b-4100-bea6-698b1fd85522\" (UID: \"916df325-045b-4100-bea6-698b1fd85522\") " Jan 23 17:30:18 crc kubenswrapper[4606]: I0123 17:30:18.533875 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/916df325-045b-4100-bea6-698b1fd85522-utilities" (OuterVolumeSpecName: "utilities") pod "916df325-045b-4100-bea6-698b1fd85522" (UID: "916df325-045b-4100-bea6-698b1fd85522"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:30:18 crc kubenswrapper[4606]: I0123 17:30:18.538478 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/916df325-045b-4100-bea6-698b1fd85522-kube-api-access-djcvl" (OuterVolumeSpecName: "kube-api-access-djcvl") pod "916df325-045b-4100-bea6-698b1fd85522" (UID: "916df325-045b-4100-bea6-698b1fd85522"). InnerVolumeSpecName "kube-api-access-djcvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:30:18 crc kubenswrapper[4606]: I0123 17:30:18.582363 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/916df325-045b-4100-bea6-698b1fd85522-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "916df325-045b-4100-bea6-698b1fd85522" (UID: "916df325-045b-4100-bea6-698b1fd85522"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:30:18 crc kubenswrapper[4606]: I0123 17:30:18.634172 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/916df325-045b-4100-bea6-698b1fd85522-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 17:30:18 crc kubenswrapper[4606]: I0123 17:30:18.634202 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djcvl\" (UniqueName: \"kubernetes.io/projected/916df325-045b-4100-bea6-698b1fd85522-kube-api-access-djcvl\") on node \"crc\" DevicePath \"\"" Jan 23 17:30:18 crc kubenswrapper[4606]: I0123 17:30:18.634213 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/916df325-045b-4100-bea6-698b1fd85522-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 17:30:18 crc kubenswrapper[4606]: I0123 17:30:18.784074 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hvgkd" Jan 23 17:30:18 crc kubenswrapper[4606]: I0123 17:30:18.835016 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hvgkd" Jan 23 17:30:18 crc kubenswrapper[4606]: I0123 17:30:18.986429 4606 generic.go:334] "Generic (PLEG): container finished" podID="916df325-045b-4100-bea6-698b1fd85522" containerID="380684d4aae0e1bae009aea68a093778593b5aa43a811d8d22625bf8a3a9d8e9" exitCode=0 Jan 23 17:30:18 crc kubenswrapper[4606]: I0123 17:30:18.986527 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-95dw5" event={"ID":"916df325-045b-4100-bea6-698b1fd85522","Type":"ContainerDied","Data":"380684d4aae0e1bae009aea68a093778593b5aa43a811d8d22625bf8a3a9d8e9"} Jan 23 17:30:18 crc kubenswrapper[4606]: I0123 17:30:18.986642 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-95dw5" event={"ID":"916df325-045b-4100-bea6-698b1fd85522","Type":"ContainerDied","Data":"652210faa23d430f23059632c28837106d42a80742119ebdac5a3cc380210fc2"} Jan 23 17:30:18 crc kubenswrapper[4606]: I0123 17:30:18.986667 4606 scope.go:117] "RemoveContainer" containerID="380684d4aae0e1bae009aea68a093778593b5aa43a811d8d22625bf8a3a9d8e9" Jan 23 17:30:18 crc kubenswrapper[4606]: I0123 17:30:18.988537 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-95dw5" Jan 23 17:30:19 crc kubenswrapper[4606]: I0123 17:30:19.032909 4606 scope.go:117] "RemoveContainer" containerID="f78912019b18f7a90d54be3cc9230413c6248a6429d83134bba89feb81bfc977" Jan 23 17:30:19 crc kubenswrapper[4606]: I0123 17:30:19.033328 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-95dw5"] Jan 23 17:30:19 crc kubenswrapper[4606]: I0123 17:30:19.046321 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-95dw5"] Jan 23 17:30:19 crc kubenswrapper[4606]: I0123 17:30:19.061334 4606 scope.go:117] "RemoveContainer" containerID="9b89d13104ef10674c1ff232ea74fa5c2810e53c72b169c8bab0a4e2d403f33b" Jan 23 17:30:19 crc kubenswrapper[4606]: I0123 17:30:19.100845 4606 scope.go:117] "RemoveContainer" containerID="380684d4aae0e1bae009aea68a093778593b5aa43a811d8d22625bf8a3a9d8e9" Jan 23 17:30:19 crc kubenswrapper[4606]: E0123 17:30:19.101363 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"380684d4aae0e1bae009aea68a093778593b5aa43a811d8d22625bf8a3a9d8e9\": container with ID starting with 380684d4aae0e1bae009aea68a093778593b5aa43a811d8d22625bf8a3a9d8e9 not found: ID does not exist" containerID="380684d4aae0e1bae009aea68a093778593b5aa43a811d8d22625bf8a3a9d8e9" Jan 23 17:30:19 crc kubenswrapper[4606]: I0123 17:30:19.101414 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"380684d4aae0e1bae009aea68a093778593b5aa43a811d8d22625bf8a3a9d8e9"} err="failed to get container status \"380684d4aae0e1bae009aea68a093778593b5aa43a811d8d22625bf8a3a9d8e9\": rpc error: code = NotFound desc = could not find container \"380684d4aae0e1bae009aea68a093778593b5aa43a811d8d22625bf8a3a9d8e9\": container with ID starting with 380684d4aae0e1bae009aea68a093778593b5aa43a811d8d22625bf8a3a9d8e9 not found: ID does not exist" Jan 23 17:30:19 crc kubenswrapper[4606]: I0123 17:30:19.101463 4606 scope.go:117] "RemoveContainer" containerID="f78912019b18f7a90d54be3cc9230413c6248a6429d83134bba89feb81bfc977" Jan 23 17:30:19 crc kubenswrapper[4606]: E0123 17:30:19.101905 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f78912019b18f7a90d54be3cc9230413c6248a6429d83134bba89feb81bfc977\": container with ID starting with f78912019b18f7a90d54be3cc9230413c6248a6429d83134bba89feb81bfc977 not found: ID does not exist" containerID="f78912019b18f7a90d54be3cc9230413c6248a6429d83134bba89feb81bfc977" Jan 23 17:30:19 crc kubenswrapper[4606]: I0123 17:30:19.101951 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f78912019b18f7a90d54be3cc9230413c6248a6429d83134bba89feb81bfc977"} err="failed to get container status \"f78912019b18f7a90d54be3cc9230413c6248a6429d83134bba89feb81bfc977\": rpc error: code = NotFound desc = could not find container \"f78912019b18f7a90d54be3cc9230413c6248a6429d83134bba89feb81bfc977\": container with ID starting with f78912019b18f7a90d54be3cc9230413c6248a6429d83134bba89feb81bfc977 not found: ID does not exist" Jan 23 17:30:19 crc kubenswrapper[4606]: I0123 17:30:19.101984 4606 scope.go:117] "RemoveContainer" containerID="9b89d13104ef10674c1ff232ea74fa5c2810e53c72b169c8bab0a4e2d403f33b" Jan 23 17:30:19 crc kubenswrapper[4606]: E0123 17:30:19.102372 4606 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9b89d13104ef10674c1ff232ea74fa5c2810e53c72b169c8bab0a4e2d403f33b\": container with ID starting with 9b89d13104ef10674c1ff232ea74fa5c2810e53c72b169c8bab0a4e2d403f33b not found: ID does not exist" containerID="9b89d13104ef10674c1ff232ea74fa5c2810e53c72b169c8bab0a4e2d403f33b" Jan 23 17:30:19 crc kubenswrapper[4606]: I0123 17:30:19.102413 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b89d13104ef10674c1ff232ea74fa5c2810e53c72b169c8bab0a4e2d403f33b"} err="failed to get container status \"9b89d13104ef10674c1ff232ea74fa5c2810e53c72b169c8bab0a4e2d403f33b\": rpc error: code = NotFound desc = could not find container \"9b89d13104ef10674c1ff232ea74fa5c2810e53c72b169c8bab0a4e2d403f33b\": container with ID starting with 9b89d13104ef10674c1ff232ea74fa5c2810e53c72b169c8bab0a4e2d403f33b not found: ID does not exist" Jan 23 17:30:20 crc kubenswrapper[4606]: I0123 17:30:20.378314 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="916df325-045b-4100-bea6-698b1fd85522" path="/var/lib/kubelet/pods/916df325-045b-4100-bea6-698b1fd85522/volumes" Jan 23 17:30:21 crc kubenswrapper[4606]: I0123 17:30:21.040489 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hvgkd"] Jan 23 17:30:21 crc kubenswrapper[4606]: I0123 17:30:21.041084 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hvgkd" podUID="8ec13bfe-bdce-495b-8be6-9269d02f058b" containerName="registry-server" containerID="cri-o://1e7f24a63bce23dc885ead5faf43e9f19f6bd211ef5d0acb3f8484dc49afcbbf" gracePeriod=2 Jan 23 17:30:22 crc kubenswrapper[4606]: I0123 17:30:22.019716 4606 generic.go:334] "Generic (PLEG): container finished" podID="8ec13bfe-bdce-495b-8be6-9269d02f058b" containerID="1e7f24a63bce23dc885ead5faf43e9f19f6bd211ef5d0acb3f8484dc49afcbbf" exitCode=0 Jan 23 17:30:22 crc kubenswrapper[4606]: I0123 17:30:22.019787 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hvgkd" event={"ID":"8ec13bfe-bdce-495b-8be6-9269d02f058b","Type":"ContainerDied","Data":"1e7f24a63bce23dc885ead5faf43e9f19f6bd211ef5d0acb3f8484dc49afcbbf"} Jan 23 17:30:22 crc kubenswrapper[4606]: I0123 17:30:22.019989 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hvgkd" event={"ID":"8ec13bfe-bdce-495b-8be6-9269d02f058b","Type":"ContainerDied","Data":"e2eba12eb6c45f299ed23886c1c14219a9665048ad3af709cf00e08f79ca5932"} Jan 23 17:30:22 crc kubenswrapper[4606]: I0123 17:30:22.020001 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2eba12eb6c45f299ed23886c1c14219a9665048ad3af709cf00e08f79ca5932" Jan 23 17:30:22 crc kubenswrapper[4606]: I0123 17:30:22.046022 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hvgkd" Jan 23 17:30:22 crc kubenswrapper[4606]: I0123 17:30:22.199405 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ec13bfe-bdce-495b-8be6-9269d02f058b-catalog-content\") pod \"8ec13bfe-bdce-495b-8be6-9269d02f058b\" (UID: \"8ec13bfe-bdce-495b-8be6-9269d02f058b\") " Jan 23 17:30:22 crc kubenswrapper[4606]: I0123 17:30:22.200187 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fwb8\" (UniqueName: \"kubernetes.io/projected/8ec13bfe-bdce-495b-8be6-9269d02f058b-kube-api-access-5fwb8\") pod \"8ec13bfe-bdce-495b-8be6-9269d02f058b\" (UID: \"8ec13bfe-bdce-495b-8be6-9269d02f058b\") " Jan 23 17:30:22 crc kubenswrapper[4606]: I0123 17:30:22.200311 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ec13bfe-bdce-495b-8be6-9269d02f058b-utilities\") pod \"8ec13bfe-bdce-495b-8be6-9269d02f058b\" (UID: \"8ec13bfe-bdce-495b-8be6-9269d02f058b\") " Jan 23 17:30:22 crc kubenswrapper[4606]: I0123 17:30:22.200898 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ec13bfe-bdce-495b-8be6-9269d02f058b-utilities" (OuterVolumeSpecName: "utilities") pod "8ec13bfe-bdce-495b-8be6-9269d02f058b" (UID: "8ec13bfe-bdce-495b-8be6-9269d02f058b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:30:22 crc kubenswrapper[4606]: I0123 17:30:22.212217 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ec13bfe-bdce-495b-8be6-9269d02f058b-kube-api-access-5fwb8" (OuterVolumeSpecName: "kube-api-access-5fwb8") pod "8ec13bfe-bdce-495b-8be6-9269d02f058b" (UID: "8ec13bfe-bdce-495b-8be6-9269d02f058b"). InnerVolumeSpecName "kube-api-access-5fwb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:30:22 crc kubenswrapper[4606]: I0123 17:30:22.302409 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fwb8\" (UniqueName: \"kubernetes.io/projected/8ec13bfe-bdce-495b-8be6-9269d02f058b-kube-api-access-5fwb8\") on node \"crc\" DevicePath \"\"" Jan 23 17:30:22 crc kubenswrapper[4606]: I0123 17:30:22.302444 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ec13bfe-bdce-495b-8be6-9269d02f058b-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 17:30:22 crc kubenswrapper[4606]: I0123 17:30:22.325984 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ec13bfe-bdce-495b-8be6-9269d02f058b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ec13bfe-bdce-495b-8be6-9269d02f058b" (UID: "8ec13bfe-bdce-495b-8be6-9269d02f058b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:30:22 crc kubenswrapper[4606]: I0123 17:30:22.404860 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ec13bfe-bdce-495b-8be6-9269d02f058b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 17:30:23 crc kubenswrapper[4606]: I0123 17:30:23.028317 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hvgkd" Jan 23 17:30:23 crc kubenswrapper[4606]: I0123 17:30:23.049349 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hvgkd"] Jan 23 17:30:23 crc kubenswrapper[4606]: I0123 17:30:23.056036 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hvgkd"] Jan 23 17:30:24 crc kubenswrapper[4606]: I0123 17:30:24.380144 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ec13bfe-bdce-495b-8be6-9269d02f058b" path="/var/lib/kubelet/pods/8ec13bfe-bdce-495b-8be6-9269d02f058b/volumes" Jan 23 17:31:02 crc kubenswrapper[4606]: I0123 17:31:02.737847 4606 scope.go:117] "RemoveContainer" containerID="6502ed4d3859f78d0d11a9d95677491906bf67dc59622bd993c57a31a2293cce" Jan 23 17:31:49 crc kubenswrapper[4606]: I0123 17:31:49.326898 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hwbcz"] Jan 23 17:31:49 crc kubenswrapper[4606]: E0123 17:31:49.327928 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ec13bfe-bdce-495b-8be6-9269d02f058b" containerName="extract-content" Jan 23 17:31:49 crc kubenswrapper[4606]: I0123 17:31:49.327947 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ec13bfe-bdce-495b-8be6-9269d02f058b" containerName="extract-content" Jan 23 17:31:49 crc kubenswrapper[4606]: E0123 17:31:49.327970 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="916df325-045b-4100-bea6-698b1fd85522" containerName="extract-utilities" Jan 23 17:31:49 crc kubenswrapper[4606]: I0123 17:31:49.327978 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="916df325-045b-4100-bea6-698b1fd85522" containerName="extract-utilities" Jan 23 17:31:49 crc kubenswrapper[4606]: E0123 17:31:49.327991 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16c44897-72c2-4d28-929c-1505fe503c88" containerName="collect-profiles" Jan 23 17:31:49 crc kubenswrapper[4606]: I0123 17:31:49.328001 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="16c44897-72c2-4d28-929c-1505fe503c88" containerName="collect-profiles" Jan 23 17:31:49 crc kubenswrapper[4606]: E0123 17:31:49.328015 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="916df325-045b-4100-bea6-698b1fd85522" containerName="registry-server" Jan 23 17:31:49 crc kubenswrapper[4606]: I0123 17:31:49.328022 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="916df325-045b-4100-bea6-698b1fd85522" containerName="registry-server" Jan 23 17:31:49 crc kubenswrapper[4606]: E0123 17:31:49.328043 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ec13bfe-bdce-495b-8be6-9269d02f058b" containerName="registry-server" Jan 23 17:31:49 crc kubenswrapper[4606]: I0123 17:31:49.328052 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ec13bfe-bdce-495b-8be6-9269d02f058b" containerName="registry-server" Jan 23 17:31:49 crc kubenswrapper[4606]: E0123 17:31:49.328066 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="916df325-045b-4100-bea6-698b1fd85522" containerName="extract-content" Jan 23 17:31:49 crc kubenswrapper[4606]: I0123 17:31:49.328075 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="916df325-045b-4100-bea6-698b1fd85522" containerName="extract-content" Jan 23 17:31:49 crc kubenswrapper[4606]: E0123 17:31:49.328083 4606 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8ec13bfe-bdce-495b-8be6-9269d02f058b" containerName="extract-utilities" Jan 23 17:31:49 crc kubenswrapper[4606]: I0123 17:31:49.328089 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ec13bfe-bdce-495b-8be6-9269d02f058b" containerName="extract-utilities" Jan 23 17:31:49 crc kubenswrapper[4606]: I0123 17:31:49.328294 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="916df325-045b-4100-bea6-698b1fd85522" containerName="registry-server" Jan 23 17:31:49 crc kubenswrapper[4606]: I0123 17:31:49.328311 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="16c44897-72c2-4d28-929c-1505fe503c88" containerName="collect-profiles" Jan 23 17:31:49 crc kubenswrapper[4606]: I0123 17:31:49.328323 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ec13bfe-bdce-495b-8be6-9269d02f058b" containerName="registry-server" Jan 23 17:31:49 crc kubenswrapper[4606]: I0123 17:31:49.330146 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hwbcz" Jan 23 17:31:49 crc kubenswrapper[4606]: I0123 17:31:49.339319 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hwbcz"] Jan 23 17:31:49 crc kubenswrapper[4606]: I0123 17:31:49.457434 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5ddf512-c1a3-4370-84c9-ee4a5c70cfed-utilities\") pod \"redhat-marketplace-hwbcz\" (UID: \"e5ddf512-c1a3-4370-84c9-ee4a5c70cfed\") " pod="openshift-marketplace/redhat-marketplace-hwbcz" Jan 23 17:31:49 crc kubenswrapper[4606]: I0123 17:31:49.457484 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5ddf512-c1a3-4370-84c9-ee4a5c70cfed-catalog-content\") pod \"redhat-marketplace-hwbcz\" (UID: \"e5ddf512-c1a3-4370-84c9-ee4a5c70cfed\") " pod="openshift-marketplace/redhat-marketplace-hwbcz" Jan 23 17:31:49 crc kubenswrapper[4606]: I0123 17:31:49.457620 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwftw\" (UniqueName: \"kubernetes.io/projected/e5ddf512-c1a3-4370-84c9-ee4a5c70cfed-kube-api-access-mwftw\") pod \"redhat-marketplace-hwbcz\" (UID: \"e5ddf512-c1a3-4370-84c9-ee4a5c70cfed\") " pod="openshift-marketplace/redhat-marketplace-hwbcz" Jan 23 17:31:49 crc kubenswrapper[4606]: I0123 17:31:49.559224 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5ddf512-c1a3-4370-84c9-ee4a5c70cfed-utilities\") pod \"redhat-marketplace-hwbcz\" (UID: \"e5ddf512-c1a3-4370-84c9-ee4a5c70cfed\") " pod="openshift-marketplace/redhat-marketplace-hwbcz" Jan 23 17:31:49 crc kubenswrapper[4606]: I0123 17:31:49.559612 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5ddf512-c1a3-4370-84c9-ee4a5c70cfed-catalog-content\") pod \"redhat-marketplace-hwbcz\" (UID: \"e5ddf512-c1a3-4370-84c9-ee4a5c70cfed\") " pod="openshift-marketplace/redhat-marketplace-hwbcz" Jan 23 17:31:49 crc kubenswrapper[4606]: I0123 17:31:49.559796 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwftw\" (UniqueName: \"kubernetes.io/projected/e5ddf512-c1a3-4370-84c9-ee4a5c70cfed-kube-api-access-mwftw\") pod 
\"redhat-marketplace-hwbcz\" (UID: \"e5ddf512-c1a3-4370-84c9-ee4a5c70cfed\") " pod="openshift-marketplace/redhat-marketplace-hwbcz" Jan 23 17:31:49 crc kubenswrapper[4606]: I0123 17:31:49.559979 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5ddf512-c1a3-4370-84c9-ee4a5c70cfed-catalog-content\") pod \"redhat-marketplace-hwbcz\" (UID: \"e5ddf512-c1a3-4370-84c9-ee4a5c70cfed\") " pod="openshift-marketplace/redhat-marketplace-hwbcz" Jan 23 17:31:49 crc kubenswrapper[4606]: I0123 17:31:49.559811 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5ddf512-c1a3-4370-84c9-ee4a5c70cfed-utilities\") pod \"redhat-marketplace-hwbcz\" (UID: \"e5ddf512-c1a3-4370-84c9-ee4a5c70cfed\") " pod="openshift-marketplace/redhat-marketplace-hwbcz" Jan 23 17:31:49 crc kubenswrapper[4606]: I0123 17:31:49.580420 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwftw\" (UniqueName: \"kubernetes.io/projected/e5ddf512-c1a3-4370-84c9-ee4a5c70cfed-kube-api-access-mwftw\") pod \"redhat-marketplace-hwbcz\" (UID: \"e5ddf512-c1a3-4370-84c9-ee4a5c70cfed\") " pod="openshift-marketplace/redhat-marketplace-hwbcz" Jan 23 17:31:49 crc kubenswrapper[4606]: I0123 17:31:49.651972 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hwbcz" Jan 23 17:31:50 crc kubenswrapper[4606]: I0123 17:31:50.135300 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hwbcz"] Jan 23 17:31:51 crc kubenswrapper[4606]: I0123 17:31:51.074953 4606 generic.go:334] "Generic (PLEG): container finished" podID="e5ddf512-c1a3-4370-84c9-ee4a5c70cfed" containerID="92f6266b5f69c6cc004f6c38b1dc21316d744ae557fc4eb62318a462571a5cc8" exitCode=0 Jan 23 17:31:51 crc kubenswrapper[4606]: I0123 17:31:51.075084 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwbcz" event={"ID":"e5ddf512-c1a3-4370-84c9-ee4a5c70cfed","Type":"ContainerDied","Data":"92f6266b5f69c6cc004f6c38b1dc21316d744ae557fc4eb62318a462571a5cc8"} Jan 23 17:31:51 crc kubenswrapper[4606]: I0123 17:31:51.075602 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwbcz" event={"ID":"e5ddf512-c1a3-4370-84c9-ee4a5c70cfed","Type":"ContainerStarted","Data":"6cc2181812d673dd693bcdb948f3e73de67d02b8ed8b07a2c8264a822fc17165"} Jan 23 17:31:53 crc kubenswrapper[4606]: I0123 17:31:53.094538 4606 generic.go:334] "Generic (PLEG): container finished" podID="e5ddf512-c1a3-4370-84c9-ee4a5c70cfed" containerID="5b1e10b6cafdc65c98e14b6b1f90af136e21a80eda72b14df17d6aeb7ebfca66" exitCode=0 Jan 23 17:31:53 crc kubenswrapper[4606]: I0123 17:31:53.094648 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwbcz" event={"ID":"e5ddf512-c1a3-4370-84c9-ee4a5c70cfed","Type":"ContainerDied","Data":"5b1e10b6cafdc65c98e14b6b1f90af136e21a80eda72b14df17d6aeb7ebfca66"} Jan 23 17:31:54 crc kubenswrapper[4606]: I0123 17:31:54.105252 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwbcz" event={"ID":"e5ddf512-c1a3-4370-84c9-ee4a5c70cfed","Type":"ContainerStarted","Data":"0e47d9a8cd3e40cf1cf9c8af73fe2996517f6c80a303f86745cf140bd618a1a5"} Jan 23 17:31:54 crc kubenswrapper[4606]: I0123 17:31:54.122680 4606 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hwbcz" podStartSLOduration=2.672669774 podStartE2EDuration="5.122663307s" podCreationTimestamp="2026-01-23 17:31:49 +0000 UTC" firstStartedPulling="2026-01-23 17:31:51.079554306 +0000 UTC m=+2345.079746722" lastFinishedPulling="2026-01-23 17:31:53.529547849 +0000 UTC m=+2347.529740255" observedRunningTime="2026-01-23 17:31:54.1207613 +0000 UTC m=+2348.120953756" watchObservedRunningTime="2026-01-23 17:31:54.122663307 +0000 UTC m=+2348.122855713" Jan 23 17:31:59 crc kubenswrapper[4606]: I0123 17:31:59.652903 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hwbcz" Jan 23 17:31:59 crc kubenswrapper[4606]: I0123 17:31:59.654008 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hwbcz" Jan 23 17:31:59 crc kubenswrapper[4606]: I0123 17:31:59.717922 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hwbcz" Jan 23 17:31:59 crc kubenswrapper[4606]: I0123 17:31:59.935806 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:31:59 crc kubenswrapper[4606]: I0123 17:31:59.935884 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:32:00 crc kubenswrapper[4606]: I0123 17:32:00.206354 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hwbcz" Jan 23 17:32:00 crc kubenswrapper[4606]: I0123 17:32:00.262997 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hwbcz"] Jan 23 17:32:02 crc kubenswrapper[4606]: I0123 17:32:02.169791 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hwbcz" podUID="e5ddf512-c1a3-4370-84c9-ee4a5c70cfed" containerName="registry-server" containerID="cri-o://0e47d9a8cd3e40cf1cf9c8af73fe2996517f6c80a303f86745cf140bd618a1a5" gracePeriod=2 Jan 23 17:32:03 crc kubenswrapper[4606]: I0123 17:32:03.181913 4606 generic.go:334] "Generic (PLEG): container finished" podID="e5ddf512-c1a3-4370-84c9-ee4a5c70cfed" containerID="0e47d9a8cd3e40cf1cf9c8af73fe2996517f6c80a303f86745cf140bd618a1a5" exitCode=0 Jan 23 17:32:03 crc kubenswrapper[4606]: I0123 17:32:03.181957 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwbcz" event={"ID":"e5ddf512-c1a3-4370-84c9-ee4a5c70cfed","Type":"ContainerDied","Data":"0e47d9a8cd3e40cf1cf9c8af73fe2996517f6c80a303f86745cf140bd618a1a5"} Jan 23 17:32:03 crc kubenswrapper[4606]: I0123 17:32:03.182198 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwbcz" event={"ID":"e5ddf512-c1a3-4370-84c9-ee4a5c70cfed","Type":"ContainerDied","Data":"6cc2181812d673dd693bcdb948f3e73de67d02b8ed8b07a2c8264a822fc17165"} Jan 23 17:32:03 crc kubenswrapper[4606]: I0123 
17:32:03.182213 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cc2181812d673dd693bcdb948f3e73de67d02b8ed8b07a2c8264a822fc17165" Jan 23 17:32:03 crc kubenswrapper[4606]: I0123 17:32:03.228561 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hwbcz" Jan 23 17:32:03 crc kubenswrapper[4606]: I0123 17:32:03.362205 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5ddf512-c1a3-4370-84c9-ee4a5c70cfed-catalog-content\") pod \"e5ddf512-c1a3-4370-84c9-ee4a5c70cfed\" (UID: \"e5ddf512-c1a3-4370-84c9-ee4a5c70cfed\") " Jan 23 17:32:03 crc kubenswrapper[4606]: I0123 17:32:03.362661 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwftw\" (UniqueName: \"kubernetes.io/projected/e5ddf512-c1a3-4370-84c9-ee4a5c70cfed-kube-api-access-mwftw\") pod \"e5ddf512-c1a3-4370-84c9-ee4a5c70cfed\" (UID: \"e5ddf512-c1a3-4370-84c9-ee4a5c70cfed\") " Jan 23 17:32:03 crc kubenswrapper[4606]: I0123 17:32:03.362852 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5ddf512-c1a3-4370-84c9-ee4a5c70cfed-utilities\") pod \"e5ddf512-c1a3-4370-84c9-ee4a5c70cfed\" (UID: \"e5ddf512-c1a3-4370-84c9-ee4a5c70cfed\") " Jan 23 17:32:03 crc kubenswrapper[4606]: I0123 17:32:03.363954 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5ddf512-c1a3-4370-84c9-ee4a5c70cfed-utilities" (OuterVolumeSpecName: "utilities") pod "e5ddf512-c1a3-4370-84c9-ee4a5c70cfed" (UID: "e5ddf512-c1a3-4370-84c9-ee4a5c70cfed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:32:03 crc kubenswrapper[4606]: I0123 17:32:03.374973 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5ddf512-c1a3-4370-84c9-ee4a5c70cfed-kube-api-access-mwftw" (OuterVolumeSpecName: "kube-api-access-mwftw") pod "e5ddf512-c1a3-4370-84c9-ee4a5c70cfed" (UID: "e5ddf512-c1a3-4370-84c9-ee4a5c70cfed"). InnerVolumeSpecName "kube-api-access-mwftw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:32:03 crc kubenswrapper[4606]: I0123 17:32:03.393749 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5ddf512-c1a3-4370-84c9-ee4a5c70cfed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e5ddf512-c1a3-4370-84c9-ee4a5c70cfed" (UID: "e5ddf512-c1a3-4370-84c9-ee4a5c70cfed"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:32:03 crc kubenswrapper[4606]: I0123 17:32:03.465533 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5ddf512-c1a3-4370-84c9-ee4a5c70cfed-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 17:32:03 crc kubenswrapper[4606]: I0123 17:32:03.465571 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwftw\" (UniqueName: \"kubernetes.io/projected/e5ddf512-c1a3-4370-84c9-ee4a5c70cfed-kube-api-access-mwftw\") on node \"crc\" DevicePath \"\"" Jan 23 17:32:03 crc kubenswrapper[4606]: I0123 17:32:03.465585 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5ddf512-c1a3-4370-84c9-ee4a5c70cfed-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 17:32:04 crc kubenswrapper[4606]: I0123 17:32:04.192066 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hwbcz" Jan 23 17:32:04 crc kubenswrapper[4606]: I0123 17:32:04.232944 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hwbcz"] Jan 23 17:32:04 crc kubenswrapper[4606]: I0123 17:32:04.243971 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hwbcz"] Jan 23 17:32:04 crc kubenswrapper[4606]: I0123 17:32:04.385146 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5ddf512-c1a3-4370-84c9-ee4a5c70cfed" path="/var/lib/kubelet/pods/e5ddf512-c1a3-4370-84c9-ee4a5c70cfed/volumes" Jan 23 17:32:12 crc kubenswrapper[4606]: I0123 17:32:12.081661 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hhws5"] Jan 23 17:32:12 crc kubenswrapper[4606]: E0123 17:32:12.082708 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5ddf512-c1a3-4370-84c9-ee4a5c70cfed" containerName="extract-utilities" Jan 23 17:32:12 crc kubenswrapper[4606]: I0123 17:32:12.082727 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5ddf512-c1a3-4370-84c9-ee4a5c70cfed" containerName="extract-utilities" Jan 23 17:32:12 crc kubenswrapper[4606]: E0123 17:32:12.082750 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5ddf512-c1a3-4370-84c9-ee4a5c70cfed" containerName="registry-server" Jan 23 17:32:12 crc kubenswrapper[4606]: I0123 17:32:12.082759 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5ddf512-c1a3-4370-84c9-ee4a5c70cfed" containerName="registry-server" Jan 23 17:32:12 crc kubenswrapper[4606]: E0123 17:32:12.082769 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5ddf512-c1a3-4370-84c9-ee4a5c70cfed" containerName="extract-content" Jan 23 17:32:12 crc kubenswrapper[4606]: I0123 17:32:12.082777 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5ddf512-c1a3-4370-84c9-ee4a5c70cfed" containerName="extract-content" Jan 23 17:32:12 crc kubenswrapper[4606]: I0123 17:32:12.082993 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5ddf512-c1a3-4370-84c9-ee4a5c70cfed" containerName="registry-server" Jan 23 17:32:12 crc kubenswrapper[4606]: I0123 17:32:12.084690 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hhws5" Jan 23 17:32:12 crc kubenswrapper[4606]: I0123 17:32:12.101510 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hhws5"] Jan 23 17:32:12 crc kubenswrapper[4606]: I0123 17:32:12.134909 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj458\" (UniqueName: \"kubernetes.io/projected/78a6f2a7-5091-4917-8af2-ab915b9f649a-kube-api-access-kj458\") pod \"certified-operators-hhws5\" (UID: \"78a6f2a7-5091-4917-8af2-ab915b9f649a\") " pod="openshift-marketplace/certified-operators-hhws5" Jan 23 17:32:12 crc kubenswrapper[4606]: I0123 17:32:12.134970 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78a6f2a7-5091-4917-8af2-ab915b9f649a-utilities\") pod \"certified-operators-hhws5\" (UID: \"78a6f2a7-5091-4917-8af2-ab915b9f649a\") " pod="openshift-marketplace/certified-operators-hhws5" Jan 23 17:32:12 crc kubenswrapper[4606]: I0123 17:32:12.135045 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78a6f2a7-5091-4917-8af2-ab915b9f649a-catalog-content\") pod \"certified-operators-hhws5\" (UID: \"78a6f2a7-5091-4917-8af2-ab915b9f649a\") " pod="openshift-marketplace/certified-operators-hhws5" Jan 23 17:32:12 crc kubenswrapper[4606]: I0123 17:32:12.236933 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj458\" (UniqueName: \"kubernetes.io/projected/78a6f2a7-5091-4917-8af2-ab915b9f649a-kube-api-access-kj458\") pod \"certified-operators-hhws5\" (UID: \"78a6f2a7-5091-4917-8af2-ab915b9f649a\") " pod="openshift-marketplace/certified-operators-hhws5" Jan 23 17:32:12 crc kubenswrapper[4606]: I0123 17:32:12.236977 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78a6f2a7-5091-4917-8af2-ab915b9f649a-utilities\") pod \"certified-operators-hhws5\" (UID: \"78a6f2a7-5091-4917-8af2-ab915b9f649a\") " pod="openshift-marketplace/certified-operators-hhws5" Jan 23 17:32:12 crc kubenswrapper[4606]: I0123 17:32:12.237003 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78a6f2a7-5091-4917-8af2-ab915b9f649a-catalog-content\") pod \"certified-operators-hhws5\" (UID: \"78a6f2a7-5091-4917-8af2-ab915b9f649a\") " pod="openshift-marketplace/certified-operators-hhws5" Jan 23 17:32:12 crc kubenswrapper[4606]: I0123 17:32:12.237550 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78a6f2a7-5091-4917-8af2-ab915b9f649a-catalog-content\") pod \"certified-operators-hhws5\" (UID: \"78a6f2a7-5091-4917-8af2-ab915b9f649a\") " pod="openshift-marketplace/certified-operators-hhws5" Jan 23 17:32:12 crc kubenswrapper[4606]: I0123 17:32:12.237642 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78a6f2a7-5091-4917-8af2-ab915b9f649a-utilities\") pod \"certified-operators-hhws5\" (UID: \"78a6f2a7-5091-4917-8af2-ab915b9f649a\") " pod="openshift-marketplace/certified-operators-hhws5" Jan 23 17:32:12 crc kubenswrapper[4606]: I0123 17:32:12.270193 4606 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kj458\" (UniqueName: \"kubernetes.io/projected/78a6f2a7-5091-4917-8af2-ab915b9f649a-kube-api-access-kj458\") pod \"certified-operators-hhws5\" (UID: \"78a6f2a7-5091-4917-8af2-ab915b9f649a\") " pod="openshift-marketplace/certified-operators-hhws5" Jan 23 17:32:12 crc kubenswrapper[4606]: I0123 17:32:12.416309 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hhws5" Jan 23 17:32:13 crc kubenswrapper[4606]: I0123 17:32:13.070771 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hhws5"] Jan 23 17:32:13 crc kubenswrapper[4606]: I0123 17:32:13.295131 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hhws5" event={"ID":"78a6f2a7-5091-4917-8af2-ab915b9f649a","Type":"ContainerStarted","Data":"69800ef0bc5214b1ed81b5ca26c115a89a5c4cc23bce149c50f558041d09d83a"} Jan 23 17:32:14 crc kubenswrapper[4606]: I0123 17:32:14.305368 4606 generic.go:334] "Generic (PLEG): container finished" podID="78a6f2a7-5091-4917-8af2-ab915b9f649a" containerID="d046389edd3da4380d9ed1e802a240c352a81e90663a73ca278194197c7328eb" exitCode=0 Jan 23 17:32:14 crc kubenswrapper[4606]: I0123 17:32:14.305472 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hhws5" event={"ID":"78a6f2a7-5091-4917-8af2-ab915b9f649a","Type":"ContainerDied","Data":"d046389edd3da4380d9ed1e802a240c352a81e90663a73ca278194197c7328eb"} Jan 23 17:32:17 crc kubenswrapper[4606]: I0123 17:32:17.350315 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hhws5" event={"ID":"78a6f2a7-5091-4917-8af2-ab915b9f649a","Type":"ContainerStarted","Data":"8857283489b61245d32ce4e205b62f85eeb73995aead234747881576fe6cafaf"} Jan 23 17:32:18 crc kubenswrapper[4606]: I0123 17:32:18.359835 4606 generic.go:334] "Generic (PLEG): container finished" podID="78a6f2a7-5091-4917-8af2-ab915b9f649a" containerID="8857283489b61245d32ce4e205b62f85eeb73995aead234747881576fe6cafaf" exitCode=0 Jan 23 17:32:18 crc kubenswrapper[4606]: I0123 17:32:18.359936 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hhws5" event={"ID":"78a6f2a7-5091-4917-8af2-ab915b9f649a","Type":"ContainerDied","Data":"8857283489b61245d32ce4e205b62f85eeb73995aead234747881576fe6cafaf"} Jan 23 17:32:19 crc kubenswrapper[4606]: I0123 17:32:19.372578 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hhws5" event={"ID":"78a6f2a7-5091-4917-8af2-ab915b9f649a","Type":"ContainerStarted","Data":"9d17c19dd89cd020951da22361f852e1989497095ebe8744023611068ec10b85"} Jan 23 17:32:19 crc kubenswrapper[4606]: I0123 17:32:19.399066 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hhws5" podStartSLOduration=2.735795517 podStartE2EDuration="7.399046129s" podCreationTimestamp="2026-01-23 17:32:12 +0000 UTC" firstStartedPulling="2026-01-23 17:32:14.307548888 +0000 UTC m=+2368.307741284" lastFinishedPulling="2026-01-23 17:32:18.97079948 +0000 UTC m=+2372.970991896" observedRunningTime="2026-01-23 17:32:19.391925576 +0000 UTC m=+2373.392117982" watchObservedRunningTime="2026-01-23 17:32:19.399046129 +0000 UTC m=+2373.399238535" Jan 23 17:32:22 crc kubenswrapper[4606]: I0123 17:32:22.417009 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-hhws5" Jan 23 17:32:22 crc kubenswrapper[4606]: I0123 17:32:22.417339 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hhws5" Jan 23 17:32:22 crc kubenswrapper[4606]: I0123 17:32:22.478249 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hhws5" Jan 23 17:32:29 crc kubenswrapper[4606]: I0123 17:32:29.933881 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:32:29 crc kubenswrapper[4606]: I0123 17:32:29.935436 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:32:32 crc kubenswrapper[4606]: I0123 17:32:32.497549 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hhws5" Jan 23 17:32:32 crc kubenswrapper[4606]: I0123 17:32:32.566918 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hhws5"] Jan 23 17:32:33 crc kubenswrapper[4606]: I0123 17:32:33.498891 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hhws5" podUID="78a6f2a7-5091-4917-8af2-ab915b9f649a" containerName="registry-server" containerID="cri-o://9d17c19dd89cd020951da22361f852e1989497095ebe8744023611068ec10b85" gracePeriod=2 Jan 23 17:32:34 crc kubenswrapper[4606]: I0123 17:32:34.371216 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hhws5" Jan 23 17:32:34 crc kubenswrapper[4606]: I0123 17:32:34.473104 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78a6f2a7-5091-4917-8af2-ab915b9f649a-catalog-content\") pod \"78a6f2a7-5091-4917-8af2-ab915b9f649a\" (UID: \"78a6f2a7-5091-4917-8af2-ab915b9f649a\") " Jan 23 17:32:34 crc kubenswrapper[4606]: I0123 17:32:34.473291 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kj458\" (UniqueName: \"kubernetes.io/projected/78a6f2a7-5091-4917-8af2-ab915b9f649a-kube-api-access-kj458\") pod \"78a6f2a7-5091-4917-8af2-ab915b9f649a\" (UID: \"78a6f2a7-5091-4917-8af2-ab915b9f649a\") " Jan 23 17:32:34 crc kubenswrapper[4606]: I0123 17:32:34.473327 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78a6f2a7-5091-4917-8af2-ab915b9f649a-utilities\") pod \"78a6f2a7-5091-4917-8af2-ab915b9f649a\" (UID: \"78a6f2a7-5091-4917-8af2-ab915b9f649a\") " Jan 23 17:32:34 crc kubenswrapper[4606]: I0123 17:32:34.477714 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78a6f2a7-5091-4917-8af2-ab915b9f649a-utilities" (OuterVolumeSpecName: "utilities") pod "78a6f2a7-5091-4917-8af2-ab915b9f649a" (UID: "78a6f2a7-5091-4917-8af2-ab915b9f649a"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:32:34 crc kubenswrapper[4606]: I0123 17:32:34.480850 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78a6f2a7-5091-4917-8af2-ab915b9f649a-kube-api-access-kj458" (OuterVolumeSpecName: "kube-api-access-kj458") pod "78a6f2a7-5091-4917-8af2-ab915b9f649a" (UID: "78a6f2a7-5091-4917-8af2-ab915b9f649a"). InnerVolumeSpecName "kube-api-access-kj458". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:32:34 crc kubenswrapper[4606]: I0123 17:32:34.514824 4606 generic.go:334] "Generic (PLEG): container finished" podID="78a6f2a7-5091-4917-8af2-ab915b9f649a" containerID="9d17c19dd89cd020951da22361f852e1989497095ebe8744023611068ec10b85" exitCode=0 Jan 23 17:32:34 crc kubenswrapper[4606]: I0123 17:32:34.514874 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hhws5" event={"ID":"78a6f2a7-5091-4917-8af2-ab915b9f649a","Type":"ContainerDied","Data":"9d17c19dd89cd020951da22361f852e1989497095ebe8744023611068ec10b85"} Jan 23 17:32:34 crc kubenswrapper[4606]: I0123 17:32:34.514905 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hhws5" event={"ID":"78a6f2a7-5091-4917-8af2-ab915b9f649a","Type":"ContainerDied","Data":"69800ef0bc5214b1ed81b5ca26c115a89a5c4cc23bce149c50f558041d09d83a"} Jan 23 17:32:34 crc kubenswrapper[4606]: I0123 17:32:34.514925 4606 scope.go:117] "RemoveContainer" containerID="9d17c19dd89cd020951da22361f852e1989497095ebe8744023611068ec10b85" Jan 23 17:32:34 crc kubenswrapper[4606]: I0123 17:32:34.515028 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hhws5" Jan 23 17:32:34 crc kubenswrapper[4606]: I0123 17:32:34.542169 4606 scope.go:117] "RemoveContainer" containerID="8857283489b61245d32ce4e205b62f85eeb73995aead234747881576fe6cafaf" Jan 23 17:32:34 crc kubenswrapper[4606]: I0123 17:32:34.543835 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78a6f2a7-5091-4917-8af2-ab915b9f649a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "78a6f2a7-5091-4917-8af2-ab915b9f649a" (UID: "78a6f2a7-5091-4917-8af2-ab915b9f649a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:32:34 crc kubenswrapper[4606]: I0123 17:32:34.565861 4606 scope.go:117] "RemoveContainer" containerID="d046389edd3da4380d9ed1e802a240c352a81e90663a73ca278194197c7328eb" Jan 23 17:32:34 crc kubenswrapper[4606]: I0123 17:32:34.576105 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kj458\" (UniqueName: \"kubernetes.io/projected/78a6f2a7-5091-4917-8af2-ab915b9f649a-kube-api-access-kj458\") on node \"crc\" DevicePath \"\"" Jan 23 17:32:34 crc kubenswrapper[4606]: I0123 17:32:34.576148 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78a6f2a7-5091-4917-8af2-ab915b9f649a-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 17:32:34 crc kubenswrapper[4606]: I0123 17:32:34.576161 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78a6f2a7-5091-4917-8af2-ab915b9f649a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 17:32:34 crc kubenswrapper[4606]: I0123 17:32:34.602852 4606 scope.go:117] "RemoveContainer" containerID="9d17c19dd89cd020951da22361f852e1989497095ebe8744023611068ec10b85" Jan 23 17:32:34 crc kubenswrapper[4606]: E0123 17:32:34.603320 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d17c19dd89cd020951da22361f852e1989497095ebe8744023611068ec10b85\": container with ID starting with 9d17c19dd89cd020951da22361f852e1989497095ebe8744023611068ec10b85 not found: ID does not exist" containerID="9d17c19dd89cd020951da22361f852e1989497095ebe8744023611068ec10b85" Jan 23 17:32:34 crc kubenswrapper[4606]: I0123 17:32:34.603357 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d17c19dd89cd020951da22361f852e1989497095ebe8744023611068ec10b85"} err="failed to get container status \"9d17c19dd89cd020951da22361f852e1989497095ebe8744023611068ec10b85\": rpc error: code = NotFound desc = could not find container \"9d17c19dd89cd020951da22361f852e1989497095ebe8744023611068ec10b85\": container with ID starting with 9d17c19dd89cd020951da22361f852e1989497095ebe8744023611068ec10b85 not found: ID does not exist" Jan 23 17:32:34 crc kubenswrapper[4606]: I0123 17:32:34.603377 4606 scope.go:117] "RemoveContainer" containerID="8857283489b61245d32ce4e205b62f85eeb73995aead234747881576fe6cafaf" Jan 23 17:32:34 crc kubenswrapper[4606]: E0123 17:32:34.603755 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8857283489b61245d32ce4e205b62f85eeb73995aead234747881576fe6cafaf\": container with ID starting with 8857283489b61245d32ce4e205b62f85eeb73995aead234747881576fe6cafaf not found: ID does not exist" containerID="8857283489b61245d32ce4e205b62f85eeb73995aead234747881576fe6cafaf" Jan 23 17:32:34 crc kubenswrapper[4606]: I0123 17:32:34.603776 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8857283489b61245d32ce4e205b62f85eeb73995aead234747881576fe6cafaf"} err="failed to get container status \"8857283489b61245d32ce4e205b62f85eeb73995aead234747881576fe6cafaf\": rpc error: code = NotFound desc = could not find container \"8857283489b61245d32ce4e205b62f85eeb73995aead234747881576fe6cafaf\": container with ID starting with 8857283489b61245d32ce4e205b62f85eeb73995aead234747881576fe6cafaf not found: ID does not exist" Jan 23 17:32:34 crc 
Jan 23 17:32:34 crc kubenswrapper[4606]: I0123 17:32:34.603788 4606 scope.go:117] "RemoveContainer" containerID="d046389edd3da4380d9ed1e802a240c352a81e90663a73ca278194197c7328eb"
Jan 23 17:32:34 crc kubenswrapper[4606]: E0123 17:32:34.603965 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d046389edd3da4380d9ed1e802a240c352a81e90663a73ca278194197c7328eb\": container with ID starting with d046389edd3da4380d9ed1e802a240c352a81e90663a73ca278194197c7328eb not found: ID does not exist" containerID="d046389edd3da4380d9ed1e802a240c352a81e90663a73ca278194197c7328eb"
Jan 23 17:32:34 crc kubenswrapper[4606]: I0123 17:32:34.603986 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d046389edd3da4380d9ed1e802a240c352a81e90663a73ca278194197c7328eb"} err="failed to get container status \"d046389edd3da4380d9ed1e802a240c352a81e90663a73ca278194197c7328eb\": rpc error: code = NotFound desc = could not find container \"d046389edd3da4380d9ed1e802a240c352a81e90663a73ca278194197c7328eb\": container with ID starting with d046389edd3da4380d9ed1e802a240c352a81e90663a73ca278194197c7328eb not found: ID does not exist"
Jan 23 17:32:34 crc kubenswrapper[4606]: I0123 17:32:34.847973 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hhws5"]
Jan 23 17:32:34 crc kubenswrapper[4606]: I0123 17:32:34.858075 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hhws5"]
Jan 23 17:32:36 crc kubenswrapper[4606]: I0123 17:32:36.377080 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78a6f2a7-5091-4917-8af2-ab915b9f649a" path="/var/lib/kubelet/pods/78a6f2a7-5091-4917-8af2-ab915b9f649a/volumes"
Jan 23 17:32:59 crc kubenswrapper[4606]: I0123 17:32:59.933630 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 17:32:59 crc kubenswrapper[4606]: I0123 17:32:59.934805 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 17:32:59 crc kubenswrapper[4606]: I0123 17:32:59.934872 4606 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7"
Jan 23 17:32:59 crc kubenswrapper[4606]: I0123 17:32:59.935748 4606 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ae71f5feb61ac0fdc01d5557db5e04028246a8e932ab5e9739ba55dfcdd38258"} pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 17:32:59 crc kubenswrapper[4606]: I0123 17:32:59.935812 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" containerID="cri-o://ae71f5feb61ac0fdc01d5557db5e04028246a8e932ab5e9739ba55dfcdd38258" gracePeriod=600
Jan 23 17:33:00 crc kubenswrapper[4606]: E0123 17:33:00.059605 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9"
Jan 23 17:33:00 crc kubenswrapper[4606]: I0123 17:33:00.770142 4606 generic.go:334] "Generic (PLEG): container finished" podID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerID="ae71f5feb61ac0fdc01d5557db5e04028246a8e932ab5e9739ba55dfcdd38258" exitCode=0
Jan 23 17:33:00 crc kubenswrapper[4606]: I0123 17:33:00.770201 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerDied","Data":"ae71f5feb61ac0fdc01d5557db5e04028246a8e932ab5e9739ba55dfcdd38258"}
Jan 23 17:33:00 crc kubenswrapper[4606]: I0123 17:33:00.770242 4606 scope.go:117] "RemoveContainer" containerID="ca4287756c52165100cd31da3f9d80fd94d7fd43de64342f939f15aaf307b775"
Jan 23 17:33:00 crc kubenswrapper[4606]: I0123 17:33:00.770895 4606 scope.go:117] "RemoveContainer" containerID="ae71f5feb61ac0fdc01d5557db5e04028246a8e932ab5e9739ba55dfcdd38258"
Jan 23 17:33:00 crc kubenswrapper[4606]: E0123 17:33:00.771293 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9"
Jan 23 17:33:14 crc kubenswrapper[4606]: I0123 17:33:14.367806 4606 scope.go:117] "RemoveContainer" containerID="ae71f5feb61ac0fdc01d5557db5e04028246a8e932ab5e9739ba55dfcdd38258"
Jan 23 17:33:14 crc kubenswrapper[4606]: E0123 17:33:14.368857 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9"
Jan 23 17:33:25 crc kubenswrapper[4606]: I0123 17:33:25.365636 4606 scope.go:117] "RemoveContainer" containerID="ae71f5feb61ac0fdc01d5557db5e04028246a8e932ab5e9739ba55dfcdd38258"
Jan 23 17:33:25 crc kubenswrapper[4606]: E0123 17:33:25.366550 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9"
containerID="ae71f5feb61ac0fdc01d5557db5e04028246a8e932ab5e9739ba55dfcdd38258" Jan 23 17:33:36 crc kubenswrapper[4606]: E0123 17:33:36.372909 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:33:48 crc kubenswrapper[4606]: I0123 17:33:48.366087 4606 scope.go:117] "RemoveContainer" containerID="ae71f5feb61ac0fdc01d5557db5e04028246a8e932ab5e9739ba55dfcdd38258" Jan 23 17:33:48 crc kubenswrapper[4606]: E0123 17:33:48.366541 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:34:01 crc kubenswrapper[4606]: I0123 17:34:01.366387 4606 scope.go:117] "RemoveContainer" containerID="ae71f5feb61ac0fdc01d5557db5e04028246a8e932ab5e9739ba55dfcdd38258" Jan 23 17:34:01 crc kubenswrapper[4606]: E0123 17:34:01.367660 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:34:05 crc kubenswrapper[4606]: I0123 17:34:05.432978 4606 generic.go:334] "Generic (PLEG): container finished" podID="59e9c932-3b43-4d74-b776-c34cfe6b2a78" containerID="12f5a36e0531b01d08a8929980f3acbffd5fd6e8f5358506a3416c3344077045" exitCode=0 Jan 23 17:34:05 crc kubenswrapper[4606]: I0123 17:34:05.433047 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm" event={"ID":"59e9c932-3b43-4d74-b776-c34cfe6b2a78","Type":"ContainerDied","Data":"12f5a36e0531b01d08a8929980f3acbffd5fd6e8f5358506a3416c3344077045"} Jan 23 17:34:06 crc kubenswrapper[4606]: I0123 17:34:06.902961 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm" Jan 23 17:34:06 crc kubenswrapper[4606]: I0123 17:34:06.975745 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/59e9c932-3b43-4d74-b776-c34cfe6b2a78-libvirt-secret-0\") pod \"59e9c932-3b43-4d74-b776-c34cfe6b2a78\" (UID: \"59e9c932-3b43-4d74-b776-c34cfe6b2a78\") " Jan 23 17:34:06 crc kubenswrapper[4606]: I0123 17:34:06.975871 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59e9c932-3b43-4d74-b776-c34cfe6b2a78-libvirt-combined-ca-bundle\") pod \"59e9c932-3b43-4d74-b776-c34cfe6b2a78\" (UID: \"59e9c932-3b43-4d74-b776-c34cfe6b2a78\") " Jan 23 17:34:06 crc kubenswrapper[4606]: I0123 17:34:06.975935 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/59e9c932-3b43-4d74-b776-c34cfe6b2a78-ssh-key-openstack-edpm-ipam\") pod \"59e9c932-3b43-4d74-b776-c34cfe6b2a78\" (UID: \"59e9c932-3b43-4d74-b776-c34cfe6b2a78\") " Jan 23 17:34:06 crc kubenswrapper[4606]: I0123 17:34:06.976013 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59e9c932-3b43-4d74-b776-c34cfe6b2a78-inventory\") pod \"59e9c932-3b43-4d74-b776-c34cfe6b2a78\" (UID: \"59e9c932-3b43-4d74-b776-c34cfe6b2a78\") " Jan 23 17:34:06 crc kubenswrapper[4606]: I0123 17:34:06.976036 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4698\" (UniqueName: \"kubernetes.io/projected/59e9c932-3b43-4d74-b776-c34cfe6b2a78-kube-api-access-r4698\") pod \"59e9c932-3b43-4d74-b776-c34cfe6b2a78\" (UID: \"59e9c932-3b43-4d74-b776-c34cfe6b2a78\") " Jan 23 17:34:06 crc kubenswrapper[4606]: I0123 17:34:06.981365 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59e9c932-3b43-4d74-b776-c34cfe6b2a78-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "59e9c932-3b43-4d74-b776-c34cfe6b2a78" (UID: "59e9c932-3b43-4d74-b776-c34cfe6b2a78"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:34:06 crc kubenswrapper[4606]: I0123 17:34:06.983215 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59e9c932-3b43-4d74-b776-c34cfe6b2a78-kube-api-access-r4698" (OuterVolumeSpecName: "kube-api-access-r4698") pod "59e9c932-3b43-4d74-b776-c34cfe6b2a78" (UID: "59e9c932-3b43-4d74-b776-c34cfe6b2a78"). InnerVolumeSpecName "kube-api-access-r4698". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.011328 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59e9c932-3b43-4d74-b776-c34cfe6b2a78-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "59e9c932-3b43-4d74-b776-c34cfe6b2a78" (UID: "59e9c932-3b43-4d74-b776-c34cfe6b2a78"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.017722 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59e9c932-3b43-4d74-b776-c34cfe6b2a78-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "59e9c932-3b43-4d74-b776-c34cfe6b2a78" (UID: "59e9c932-3b43-4d74-b776-c34cfe6b2a78"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.018873 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59e9c932-3b43-4d74-b776-c34cfe6b2a78-inventory" (OuterVolumeSpecName: "inventory") pod "59e9c932-3b43-4d74-b776-c34cfe6b2a78" (UID: "59e9c932-3b43-4d74-b776-c34cfe6b2a78"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.078664 4606 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/59e9c932-3b43-4d74-b776-c34cfe6b2a78-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.078709 4606 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59e9c932-3b43-4d74-b776-c34cfe6b2a78-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.078724 4606 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/59e9c932-3b43-4d74-b776-c34cfe6b2a78-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.078738 4606 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59e9c932-3b43-4d74-b776-c34cfe6b2a78-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.078751 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4698\" (UniqueName: \"kubernetes.io/projected/59e9c932-3b43-4d74-b776-c34cfe6b2a78-kube-api-access-r4698\") on node \"crc\" DevicePath \"\"" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.465959 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm" event={"ID":"59e9c932-3b43-4d74-b776-c34cfe6b2a78","Type":"ContainerDied","Data":"43800d8432c2c9e084e2b4d4b1ce99345ee3fe502678cda4d837835e86da9d72"} Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.466219 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43800d8432c2c9e084e2b4d4b1ce99345ee3fe502678cda4d837835e86da9d72" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.465992 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.604647 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-l4ls2"] Jan 23 17:34:07 crc kubenswrapper[4606]: E0123 17:34:07.605028 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78a6f2a7-5091-4917-8af2-ab915b9f649a" containerName="registry-server" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.605044 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="78a6f2a7-5091-4917-8af2-ab915b9f649a" containerName="registry-server" Jan 23 17:34:07 crc kubenswrapper[4606]: E0123 17:34:07.605063 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e9c932-3b43-4d74-b776-c34cfe6b2a78" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.605070 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e9c932-3b43-4d74-b776-c34cfe6b2a78" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 23 17:34:07 crc kubenswrapper[4606]: E0123 17:34:07.605088 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78a6f2a7-5091-4917-8af2-ab915b9f649a" containerName="extract-utilities" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.605095 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="78a6f2a7-5091-4917-8af2-ab915b9f649a" containerName="extract-utilities" Jan 23 17:34:07 crc kubenswrapper[4606]: E0123 17:34:07.605110 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78a6f2a7-5091-4917-8af2-ab915b9f649a" containerName="extract-content" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.605115 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="78a6f2a7-5091-4917-8af2-ab915b9f649a" containerName="extract-content" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.605297 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="59e9c932-3b43-4d74-b776-c34cfe6b2a78" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.605318 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="78a6f2a7-5091-4917-8af2-ab915b9f649a" containerName="registry-server" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.606786 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l4ls2" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.610644 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.610679 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8tp68" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.610884 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.611230 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.615238 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.615560 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.615725 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.631342 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-l4ls2"] Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.690673 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l4ls2\" (UID: \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l4ls2" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.690925 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l4ls2\" (UID: \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l4ls2" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.691015 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l4ls2\" (UID: \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l4ls2" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.691093 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l4ls2\" (UID: \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l4ls2" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.691164 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl6pq\" (UniqueName: 
\"kubernetes.io/projected/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-kube-api-access-bl6pq\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l4ls2\" (UID: \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l4ls2" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.691227 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l4ls2\" (UID: \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l4ls2" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.691308 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l4ls2\" (UID: \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l4ls2" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.691395 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l4ls2\" (UID: \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l4ls2" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.691470 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l4ls2\" (UID: \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l4ls2" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.799199 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l4ls2\" (UID: \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l4ls2" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.799452 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l4ls2\" (UID: \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l4ls2" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.799625 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l4ls2\" (UID: \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l4ls2" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.799653 4606 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l4ls2\" (UID: \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l4ls2" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.799677 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l4ls2\" (UID: \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l4ls2" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.799709 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l4ls2\" (UID: \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l4ls2" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.799726 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bl6pq\" (UniqueName: \"kubernetes.io/projected/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-kube-api-access-bl6pq\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l4ls2\" (UID: \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l4ls2" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.799805 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l4ls2\" (UID: \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l4ls2" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.799866 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l4ls2\" (UID: \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l4ls2" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.801306 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l4ls2\" (UID: \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l4ls2" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.804891 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l4ls2\" (UID: \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l4ls2" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.804976 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l4ls2\" (UID: \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l4ls2" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.805763 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l4ls2\" (UID: \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l4ls2" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.806421 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l4ls2\" (UID: \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l4ls2" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.807257 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l4ls2\" (UID: \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l4ls2" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.814992 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l4ls2\" (UID: \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l4ls2" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.817560 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl6pq\" (UniqueName: \"kubernetes.io/projected/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-kube-api-access-bl6pq\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l4ls2\" (UID: \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l4ls2" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.818057 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l4ls2\" (UID: \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l4ls2" Jan 23 17:34:07 crc kubenswrapper[4606]: I0123 17:34:07.924934 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l4ls2" Jan 23 17:34:08 crc kubenswrapper[4606]: I0123 17:34:08.284458 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-l4ls2"] Jan 23 17:34:08 crc kubenswrapper[4606]: I0123 17:34:08.474561 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l4ls2" event={"ID":"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5","Type":"ContainerStarted","Data":"dc0cd55b3b56baedac887a6b1dbfc3efa6946c980a20fd0ecdf44fa1303113cf"} Jan 23 17:34:09 crc kubenswrapper[4606]: I0123 17:34:09.489777 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l4ls2" event={"ID":"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5","Type":"ContainerStarted","Data":"0fbc2f68af54813be8da9edff9b8eb9b04be22c7727939d9a55442f6e487e7d0"} Jan 23 17:34:09 crc kubenswrapper[4606]: I0123 17:34:09.526050 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l4ls2" podStartSLOduration=2.023753822 podStartE2EDuration="2.526032889s" podCreationTimestamp="2026-01-23 17:34:07 +0000 UTC" firstStartedPulling="2026-01-23 17:34:08.291804202 +0000 UTC m=+2482.291996618" lastFinishedPulling="2026-01-23 17:34:08.794083279 +0000 UTC m=+2482.794275685" observedRunningTime="2026-01-23 17:34:09.52196687 +0000 UTC m=+2483.522159276" watchObservedRunningTime="2026-01-23 17:34:09.526032889 +0000 UTC m=+2483.526225295" Jan 23 17:34:15 crc kubenswrapper[4606]: I0123 17:34:15.366679 4606 scope.go:117] "RemoveContainer" containerID="ae71f5feb61ac0fdc01d5557db5e04028246a8e932ab5e9739ba55dfcdd38258" Jan 23 17:34:15 crc kubenswrapper[4606]: E0123 17:34:15.367661 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:34:26 crc kubenswrapper[4606]: I0123 17:34:26.371932 4606 scope.go:117] "RemoveContainer" containerID="ae71f5feb61ac0fdc01d5557db5e04028246a8e932ab5e9739ba55dfcdd38258" Jan 23 17:34:26 crc kubenswrapper[4606]: E0123 17:34:26.372535 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:34:37 crc kubenswrapper[4606]: I0123 17:34:37.366255 4606 scope.go:117] "RemoveContainer" containerID="ae71f5feb61ac0fdc01d5557db5e04028246a8e932ab5e9739ba55dfcdd38258" Jan 23 17:34:37 crc kubenswrapper[4606]: E0123 17:34:37.366945 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" 
podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:34:48 crc kubenswrapper[4606]: I0123 17:34:48.367540 4606 scope.go:117] "RemoveContainer" containerID="ae71f5feb61ac0fdc01d5557db5e04028246a8e932ab5e9739ba55dfcdd38258" Jan 23 17:34:48 crc kubenswrapper[4606]: E0123 17:34:48.368632 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:35:00 crc kubenswrapper[4606]: I0123 17:35:00.366021 4606 scope.go:117] "RemoveContainer" containerID="ae71f5feb61ac0fdc01d5557db5e04028246a8e932ab5e9739ba55dfcdd38258" Jan 23 17:35:00 crc kubenswrapper[4606]: E0123 17:35:00.367125 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:35:12 crc kubenswrapper[4606]: I0123 17:35:12.366982 4606 scope.go:117] "RemoveContainer" containerID="ae71f5feb61ac0fdc01d5557db5e04028246a8e932ab5e9739ba55dfcdd38258" Jan 23 17:35:12 crc kubenswrapper[4606]: E0123 17:35:12.367767 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:35:18 crc kubenswrapper[4606]: I0123 17:35:18.832873 4606 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-kpm7l container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:35:18 crc kubenswrapper[4606]: I0123 17:35:18.833433 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-kpm7l" podUID="f542164e-05c2-4746-988b-0b4676772078" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 17:35:22 crc kubenswrapper[4606]: I0123 17:35:22.621521 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="dd93c16a-9cd2-4011-a291-40070e3bfe74" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 23 17:35:26 crc kubenswrapper[4606]: I0123 17:35:26.989794 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="2124b3c9-bd82-4e02-980c-b3de98193833" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.195:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting 
headers)" Jan 23 17:35:27 crc kubenswrapper[4606]: I0123 17:35:27.365796 4606 scope.go:117] "RemoveContainer" containerID="ae71f5feb61ac0fdc01d5557db5e04028246a8e932ab5e9739ba55dfcdd38258" Jan 23 17:35:27 crc kubenswrapper[4606]: E0123 17:35:27.366132 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:35:27 crc kubenswrapper[4606]: I0123 17:35:27.617282 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="dd93c16a-9cd2-4011-a291-40070e3bfe74" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 23 17:35:28 crc kubenswrapper[4606]: I0123 17:35:28.828043 4606 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-kpm7l container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:35:28 crc kubenswrapper[4606]: I0123 17:35:28.828132 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-kpm7l" podUID="f542164e-05c2-4746-988b-0b4676772078" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:35:31 crc kubenswrapper[4606]: I0123 17:35:31.648239 4606 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-qskqc container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:35:31 crc kubenswrapper[4606]: I0123 17:35:31.648625 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc" podUID="4424c8ad-c7cc-418f-a322-ef8764b076ca" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 17:35:32 crc kubenswrapper[4606]: I0123 17:35:32.615556 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="dd93c16a-9cd2-4011-a291-40070e3bfe74" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 23 17:35:32 crc kubenswrapper[4606]: I0123 17:35:32.615701 4606 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Jan 23 17:35:32 crc kubenswrapper[4606]: I0123 17:35:32.616770 4606 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"2371365fe1d7ea6a8aa02a1387d8d67fbc8787bdd05c88cca89c4e3dbe5a0597"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Jan 23 17:35:32 crc kubenswrapper[4606]: I0123 17:35:32.616860 4606 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dd93c16a-9cd2-4011-a291-40070e3bfe74" containerName="ceilometer-central-agent" containerID="cri-o://2371365fe1d7ea6a8aa02a1387d8d67fbc8787bdd05c88cca89c4e3dbe5a0597" gracePeriod=30 Jan 23 17:35:32 crc kubenswrapper[4606]: I0123 17:35:32.621392 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="dd93c16a-9cd2-4011-a291-40070e3bfe74" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Jan 23 17:35:36 crc kubenswrapper[4606]: E0123 17:35:36.680305 4606 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 17:35:36 crc kubenswrapper[4606]: I0123 17:35:36.990283 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="2124b3c9-bd82-4e02-980c-b3de98193833" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.195:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 17:36:09 crc kubenswrapper[4606]: I0123 17:36:09.859404 4606 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-qskqc container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": context deadline exceeded" start-of-body= Jan 23 17:36:10 crc kubenswrapper[4606]: I0123 17:36:10.008814 4606 patch_prober.go:28] interesting pod/etcd-crc container/etcd namespace/openshift-etcd: Liveness probe status=failure output="Get \"https://192.168.126.11:9980/healthz\": context deadline exceeded" start-of-body= Jan 23 17:36:10 crc kubenswrapper[4606]: I0123 17:36:10.009252 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd/etcd-crc" podUID="2139d3e2895fc6797b9c76a1b4c9886d" containerName="etcd" probeResult="failure" output="Get \"https://192.168.126.11:9980/healthz\": context deadline exceeded" Jan 23 17:36:10 crc kubenswrapper[4606]: I0123 17:36:10.109431 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc" podUID="4424c8ad-c7cc-418f-a322-ef8764b076ca" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": context deadline exceeded" Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.030133 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qn8hz" podUID="7215e9f7-67ae-40ad-befc-e411c5acb586" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.122745 4606 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 18.179172198s: [/var/lib/containers/storage/overlay/5c4dab2c3f7d406dde785a7dfb2e81962d561873a70092a51f30d66f0ee22ec5/diff /var/log/pods/openstack_nova-api-0_e390ccaf-fe1a-4fd0-b19f-64d5d43b2005/nova-api-api/0.log]; will not log again for this container unless duration exceeds 2s Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.085626 4606 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 18.13052383s: 
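The fsHandler records time a du-style scan (byte total plus inode count) over overlay diff and pod log directories; the 18s durations here line up with the probe timeouts around them, suggesting slow disk I/O. A minimal equivalent walk in Go, not cadvisor's implementation:

```go
package main

import (
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
)

// usage returns the total size of regular files under root and the
// number of directory entries visited (a rough inode count).
func usage(root string) (bytes, inodes int64, err error) {
	err = filepath.WalkDir(root, func(_ string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		inodes++
		if info, err := d.Info(); err == nil && info.Mode().IsRegular() {
			bytes += info.Size()
		}
		return nil
	})
	return
}

func main() {
	if len(os.Args) < 2 {
		fmt.Println("usage: du <dir>")
		return
	}
	b, n, err := usage(os.Args[1])
	fmt.Println(b, "bytes,", n, "inodes,", err)
}
```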
Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.170892 4606 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-j6ms6 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.176772 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j6ms6" podUID="f0f78495-0b98-497a-9a7d-c93be8decda5" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.178727 4606 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-wc5zr container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.178791 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wc5zr" podUID="f51caafd-44cb-4d91-874e-d8fe0221d01d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.179070 4606 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-wc5zr container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.179089 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-wc5zr" podUID="f51caafd-44cb-4d91-874e-d8fe0221d01d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.179225 4606 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-qskqc container/oauth-apiserver namespace/openshift-oauth-apiserver: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/livez?exclude=etcd\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.179242 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc" podUID="4424c8ad-c7cc-418f-a322-ef8764b076ca" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.8:8443/livez?exclude=etcd\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.179372 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qn8hz" podUID="7215e9f7-67ae-40ad-befc-e411c5acb586" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
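Almost every failure in this burst is the probe's HTTP request timing out before the target writes response headers, not the target returning an error. "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" is exactly what Go's net/http produces when a client-level timeout fires first. A self-contained sketch that reproduces the error text; the address, handler, and 1s timeout are illustrative stand-ins for the kubelet's probe client, which is built from each probe's timeoutSeconds:

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	// A health endpoint that hangs, simulating a starved apiserver:
    	// it never writes headers within the probe window.
    	mux := http.NewServeMux()
    	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
    		time.Sleep(5 * time.Second)
    	})
    	go http.ListenAndServe("127.0.0.1:18081", mux)
    	time.Sleep(100 * time.Millisecond) // let the listener come up

    	// A probe-style GET with a hard client timeout.
    	client := &http.Client{Timeout: 1 * time.Second}
    	_, err := client.Get("http://127.0.0.1:18081/healthz")
    	fmt.Println(err)
    	// Get "http://127.0.0.1:18081/healthz": context deadline exceeded
    	// (Client.Timeout exceeded while awaiting headers)
    }

By contrast, "connection refused" further down means the TCP connect itself was rejected (nothing listening), and "TLS handshake timeout" means the socket opened but the handshake stalled: three different failure depths behind the same "Probe failed" entry.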
Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.179798 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-ttmg4" podUID="d040f5d8-e2a8-4171-9f70-797e87f8af96" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.175647 4606 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-qskqc container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.183224 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc" podUID="4424c8ad-c7cc-418f-a322-ef8764b076ca" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.193970 4606 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-c42wt container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.194026 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-c42wt" podUID="205e3ea9-e17d-4581-bfc0-eef3cb89dc12" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.201518 4606 patch_prober.go:28] interesting pod/route-controller-manager-59cb4b7958-wnt88 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.201567 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-59cb4b7958-wnt88" podUID="24c512e7-14f3-41df-b8de-201736780cfb" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.203875 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="e591e35b-989c-4122-bd2b-7e732c294b72" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: TLS handshake timeout (Client.Timeout exceeded while awaiting headers)"
Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.204135 4606 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-psvzs container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.204155 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psvzs" podUID="b470f1c1-e30e-401e-9bba-16d821bd7b4f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.204354 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-86756cb559-lfhdj" podUID="4d18bd93-7560-4405-9738-38876237699e" containerName="manager" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.228953 4606 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-psvzs container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.229005 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psvzs" podUID="b470f1c1-e30e-401e-9bba-16d821bd7b4f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.245929 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2z4jl" podUID="14094ea9-8be6-405b-a990-9bdbda2a50cb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.304449 4606 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 18.367886031s: [/var/lib/containers/storage/overlay/5ed6a6caa8cdee15816c75f2b5c4521d860bf28e70c942d1cbadc2ff7bd3547a/diff ]; will not log again for this container unless duration exceeds 2s
Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.318801 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-d4bdcdf98-bmdhd" podUID="f71cd544-6079-402d-a59f-0a3d4fb1d1f7" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.165:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.335567 4606 patch_prober.go:28] interesting pod/etcd-operator-b45778765-vjv2l container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: TLS handshake timeout" start-of-body=
Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.335640 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-b45778765-vjv2l" podUID="75d31efc-5fc5-4e4e-a893-93161b48002a" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: TLS handshake timeout"
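With hundreds of near-identical failures landing in the same second, tallying the entries is more useful than reading them sequentially. A minimal triage sketch that counts "Probe failed" entries per probe type and pod; the regex keys off the kubelet's structured key="value" output, and the program itself is an illustrative helper, not part of any tooling referenced in this log:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    )

    // Matches the prober.go entries above, capturing probeType and pod.
    var probeRe = regexp.MustCompile(`"Probe failed" probeType="([^"]+)" pod="([^"]+)"`)

    func main() {
    	counts := map[string]int{}
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // entries can be long
    	for sc.Scan() {
    		if m := probeRe.FindStringSubmatch(sc.Text()); m != nil {
    			counts[m[1]+" "+m[2]]++
    		}
    	}
    	for k, n := range counts {
    		fmt.Printf("%6d  %s\n", n, k)
    	}
    }

Fed this log on stdin (for example, zcat kubelet.log.gz | go run tally.go), the storm collapses into a handful of counters per pod, which makes the node-wide pattern, every probe timing out at once, immediately visible.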
probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-b45778765-vjv2l" podUID="75d31efc-5fc5-4e4e-a893-93161b48002a" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: TLS handshake timeout" Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.336157 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/placement-876fbb8bd-6zlfb" podUID="b498e8d0-f6bd-474c-8722-c0ccf11188bf" containerName="placement-api" probeResult="failure" output="Get \"https://10.217.0.156:8778/\": net/http: TLS handshake timeout" Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.377384 4606 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 18.440832624s: [/var/lib/containers/storage/overlay/2dfe9a45c2b0d6a55e8aa53d71e98a602934d351d11ea46a01c0fbad2fb759a7/diff ]; will not log again for this container unless duration exceeds 2s Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.382526 4606 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 18.44596667s: [/var/lib/containers/storage/overlay/1530b95adbdab1037110003d9d2cae1a8cb89f55e9dea76aa5140760dd57c974/diff ]; will not log again for this container unless duration exceeds 2s Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.384839 4606 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 18.448272307s: [/var/lib/containers/storage/overlay/c5a70a3cd27447db24f34539986144b605bb8200c21c74683c439ae463947827/diff ]; will not log again for this container unless duration exceeds 2s Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.435502 4606 trace.go:236] Trace[1651680363]: "iptables ChainExists" (23-Jan-2026 17:36:09.939) (total time: 18495ms): Jan 23 17:36:28 crc kubenswrapper[4606]: Trace[1651680363]: [18.495809848s] [18.495809848s] END Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.488268 4606 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.488492 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.516904 4606 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 18.577226588s: [/var/lib/containers/storage/overlay/5e7b3b37a222a09bbed650260f3e81d753f004807503d8cfaea6a5e3d64aefb6/diff ]; will not log again for this container unless duration exceeds 2s Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.527200 4606 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.529270 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" 
containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.576224 4606 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-whw2p container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.576277 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-whw2p" podUID="9e9e2c8c-5d03-48b0-ab9c-110eeef212b5" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.584572 4606 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-whw2p container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.584633 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-whw2p" podUID="9e9e2c8c-5d03-48b0-ab9c-110eeef212b5" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.586576 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-d4bdcdf98-bmdhd" podUID="f71cd544-6079-402d-a59f-0a3d4fb1d1f7" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.165:9311/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.590538 4606 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 18.653986043s: [/var/lib/containers/storage/overlay/662892fd801bed833acff66995eadca40e19ecb1803d4b2b7074e895e4ada4c6/diff ]; will not log again for this container unless duration exceeds 2s Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.597677 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-dv22m" podUID="9ae13310-86a4-4a3f-9967-cb88d1bfa536" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/healthz\": dial tcp 10.217.0.78:8081: connect: connection refused" Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.598782 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-dv22m" podUID="9ae13310-86a4-4a3f-9967-cb88d1bfa536" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/healthz\": dial tcp 10.217.0.78:8081: connect: connection refused" Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.602544 4606 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-dv22m" podUID="9ae13310-86a4-4a3f-9967-cb88d1bfa536" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/readyz\": dial tcp 10.217.0.78:8081: connect: connection refused" Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.604280 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-dv22m" podUID="9ae13310-86a4-4a3f-9967-cb88d1bfa536" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/readyz\": dial tcp 10.217.0.78:8081: connect: connection refused" Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.604372 4606 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.604423 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.605377 4606 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 18.668696963s: [/var/lib/containers/storage/overlay/f797ea49e5c7c81f25a228fbe37a59a27e7c002a7f1868e522102bc9881c33c6/diff /var/log/pods/cert-manager_cert-manager-858654f9db-69xjr_746ecbf9-06c9-43d1-91fe-2269b27278bb/cert-manager-controller/0.log]; will not log again for this container unless duration exceeds 2s Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.606840 4606 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 18.670150749s: [/var/lib/containers/storage/overlay/84a473def097b5ea6ac463e59c9ce43ead0141cdd539e85621495b3cd197d017/diff ]; will not log again for this container unless duration exceeds 2s Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.606932 4606 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 18.670379794s: [/var/lib/containers/storage/overlay/0c077dbfbf68fc8830829dd7efca5a74ccc511daa3cfd44050bc5754c4134d84/diff ]; will not log again for this container unless duration exceeds 2s Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.607290 4606 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 18.67060652s: [/var/lib/containers/storage/overlay/1df0d2272146c8bb2441b3a8812c9207f24dd5660a19b5bc2c36bbac31921461/diff /var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-h97mh_cb8c2204-661e-4daf-b748-c329e81e04a8/manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.651001 4606 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 18.711327626s: [/var/lib/containers/storage/overlay/f42b7ca4836d048a4644f8c47bec74cd07417b368b5795501607a75d33886f11/diff ]; will not log again for this container unless duration exceeds 2s Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.669719 4606 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-6786d7bbc5-25lfl" podUID="10c3e688-60f4-483e-962b-9e46bd25cfb7" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.170:8080/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.672623 4606 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 18.732941774s: [/var/lib/containers/storage/overlay/db17101d3eb499fe1a84b647156c4cd903c53ddfe35907c876ad788d4c6c387b/diff ]; will not log again for this container unless duration exceeds 2s Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.674235 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-f6w2s" podUID="6ab09cf8-68a3-4fac-aa7b-824a2626b3db" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.53:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.674516 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-ttmg4" podUID="d040f5d8-e2a8-4171-9f70-797e87f8af96" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:36:28 crc kubenswrapper[4606]: E0123 17:36:28.703371 4606 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.704240 4606 reflector.go:484] object-"openstack"/"openstack-config-secret": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.704310 4606 reflector.go:484] object-"metallb-system"/"metallb-operator-controller-manager-service-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.704366 4606 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.704419 4606 reflector.go:484] object-"openshift-marketplace"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.704458 4606 reflector.go:484] object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.704514 4606 
reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-operator-images": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.704548 4606 reflector.go:484] object-"openshift-nmstate"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.704602 4606 reflector.go:484] object-"metallb-system"/"frr-k8s-daemon-dockercfg-5dlz9": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.704632 4606 reflector.go:484] object-"openshift-cluster-machine-approver"/"machine-approver-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.704696 4606 reflector.go:484] object-"cert-manager"/"cert-manager-cainjector-dockercfg-hwpd5": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.704742 4606 reflector.go:484] object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.704772 4606 reflector.go:484] object-"openstack"/"cinder-scheduler-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.704817 4606 reflector.go:484] object-"openstack"/"glance-scripts": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.704845 4606 reflector.go:484] object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.704872 4606 reflector.go:484] object-"metallb-system"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.704916 4606 reflector.go:484] object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 
23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.704973 4606 reflector.go:484] object-"openshift-cluster-version"/"default-dockercfg-gxtc4": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.705002 4606 reflector.go:484] object-"openstack"/"ovndbcluster-nb-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.705032 4606 reflector.go:484] object-"openstack"/"cert-galera-openstack-cell1-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.705088 4606 reflector.go:484] object-"openstack"/"glance-default-internal-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.705141 4606 reflector.go:484] object-"openstack"/"rabbitmq-server-dockercfg-bbn8s": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.705177 4606 reflector.go:484] object-"openshift-image-registry"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.705230 4606 reflector.go:484] object-"openshift-machine-config-operator"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.705265 4606 reflector.go:484] object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.705345 4606 reflector.go:484] object-"openshift-apiserver"/"image-import-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.705402 4606 reflector.go:484] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.705440 4606 reflector.go:484] object-"metallb-system"/"frr-k8s-webhook-server-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request 
from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.705483 4606 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.705511 4606 reflector.go:484] object-"openshift-machine-config-operator"/"proxy-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.705538 4606 reflector.go:484] object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.705565 4606 reflector.go:484] object-"openshift-authentication"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.705642 4606 reflector.go:484] object-"openshift-etcd-operator"/"etcd-service-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.705670 4606 reflector.go:484] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.705707 4606 reflector.go:484] object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-wzgq4": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.705736 4606 reflector.go:484] object-"openshift-apiserver-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.705763 4606 reflector.go:484] object-"openstack"/"dns-swift-storage-0": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.705788 4606 reflector.go:484] object-"openshift-marketplace"/"marketplace-operator-metrics": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.705856 4606 reflector.go:484] object-"openstack-operators"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an 
error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.705886 4606 reflector.go:484] object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-crtm2": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.705917 4606 reflector.go:484] object-"openshift-authentication-operator"/"service-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.705953 4606 reflector.go:484] object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.705979 4606 reflector.go:484] object-"openstack-operators"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.706024 4606 reflector.go:484] object-"openstack"/"ovsdbserver-nb": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.706054 4606 reflector.go:484] object-"openstack"/"cert-neutron-ovndbs": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.706081 4606 reflector.go:484] object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.706131 4606 reflector.go:484] object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.706158 4606 reflector.go:484] object-"openstack"/"ovndbcluster-sb-scripts": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.706185 4606 reflector.go:484] object-"cert-manager"/"cert-manager-dockercfg-v8dvx": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.706261 4606 reflector.go:484] 
object-"openstack"/"cert-glance-default-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.706290 4606 reflector.go:484] object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-zhmfb": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.706320 4606 reflector.go:484] object-"openstack"/"cert-cinder-public-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.706354 4606 reflector.go:484] object-"openshift-service-ca"/"service-ca-dockercfg-pn86c": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.706381 4606 reflector.go:484] object-"openshift-multus"/"default-dockercfg-2q5b6": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.706407 4606 reflector.go:484] object-"openshift-kube-storage-version-migrator-operator"/"serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.706438 4606 reflector.go:484] object-"openshift-controller-manager"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.706491 4606 reflector.go:484] object-"metallb-system"/"metallb-operator-webhook-server-service-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.706527 4606 reflector.go:484] object-"openstack"/"rabbitmq-cell1-server-conf": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.706562 4606 reflector.go:484] object-"openstack"/"horizon": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.706836 4606 reflector.go:484] object-"openstack"/"cert-memcached-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.706834 4606 reflector.go:484] 
object-"openshift-console"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.706868 4606 reflector.go:484] object-"metallb-system"/"metallb-excludel2": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707083 4606 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707112 4606 reflector.go:484] object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707128 4606 reflector.go:484] object-"openstack"/"barbican-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707144 4606 reflector.go:484] object-"openstack"/"nova-api-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707161 4606 reflector.go:484] object-"openshift-oauth-apiserver"/"audit-1": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: E0123 17:36:28.707096 4606 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/events\": http2: client connection lost" event="&Event{ObjectMeta:{ceilometer-0.188d6caddd2d979d openstack 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openstack,Name:ceilometer-0,UID:dd93c16a-9cd2-4011-a291-40070e3bfe74,APIVersion:v1,ResourceVersion:46423,FieldPath:spec.containers{ceilometer-central-agent},},Reason:Unhealthy,Message:Liveness probe failed: command timed out,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 17:35:22.622367645 +0000 UTC m=+2556.622560111,LastTimestamp:2026-01-23 17:35:22.622367645 +0000 UTC m=+2556.622560111,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707198 4606 reflector.go:484] object-"openshift-machine-api"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707229 4606 reflector.go:484] 
object-"openshift-dns-operator"/"metrics-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707245 4606 reflector.go:484] object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-lqw46": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707252 4606 reflector.go:484] object-"openshift-nmstate"/"openshift-nmstate-webhook": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707199 4606 reflector.go:484] object-"openstack"/"neutron-httpd-config": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707290 4606 reflector.go:484] object-"openstack"/"horizon-scripts": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707413 4606 reflector.go:484] object-"openshift-console"/"oauth-serving-cert": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707437 4606 reflector.go:484] object-"openstack"/"cert-neutron-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707479 4606 reflector.go:484] object-"openstack"/"ovncontroller-metrics-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707492 4606 reflector.go:484] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707499 4606 reflector.go:484] object-"openstack"/"openstack-cell1-config-data": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707508 4606 reflector.go:484] object-"openstack"/"cert-placement-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707524 4606 reflector.go:484] 
object-"openshift-service-ca"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707557 4606 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707573 4606 reflector.go:484] object-"openstack"/"nova-metadata-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707626 4606 reflector.go:484] object-"metallb-system"/"manager-account-dockercfg-9lpqn": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707525 4606 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707556 4606 reflector.go:484] object-"openstack"/"cert-ceilometer-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707652 4606 reflector.go:484] object-"metallb-system"/"frr-k8s-certs-secret": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707685 4606 reflector.go:484] object-"openstack"/"dnsmasq-dns-dockercfg-qs7v4": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707698 4606 reflector.go:484] object-"openstack"/"ovndbcluster-sb-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707723 4606 reflector.go:484] object-"openstack"/"rabbitmq-default-user": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707747 4606 reflector.go:484] object-"openshift-service-ca-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707769 4606 reflector.go:484] 
object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-kfww6": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707785 4606 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707808 4606 reflector.go:484] object-"openstack"/"cert-rabbitmq-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707823 4606 reflector.go:484] object-"openstack"/"cert-nova-metadata-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707826 4606 reflector.go:484] object-"openshift-authentication"/"v4-0-config-system-service-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707687 4606 reflector.go:484] object-"metallb-system"/"speaker-dockercfg-v88zl": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707865 4606 reflector.go:484] object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707883 4606 reflector.go:484] object-"openstack"/"rabbitmq-cell1-erlang-cookie": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707890 4606 reflector.go:484] object-"openshift-ingress"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707747 4606 reflector.go:484] object-"openstack"/"rabbitmq-cell1-config-data": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707787 4606 reflector.go:484] object-"openshift-ingress-operator"/"trusted-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707956 4606 reflector.go:484] 
object-"openshift-machine-api"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.707994 4606 reflector.go:484] object-"openstack-operators"/"metrics-server-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.708008 4606 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.708043 4606 reflector.go:484] object-"openstack"/"nova-migration-ssh-key": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.708060 4606 reflector.go:484] object-"openstack"/"ovncontroller-scripts": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.708139 4606 reflector.go:484] object-"metallb-system"/"controller-certs-secret": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.708191 4606 reflector.go:484] object-"openstack"/"dns-svc": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.708218 4606 reflector.go:484] object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-65vwn": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.708247 4606 reflector.go:484] object-"openstack"/"cert-swift-public-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.708279 4606 reflector.go:484] object-"openstack-operators"/"openstack-operator-index-dockercfg-wfjfp": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.708304 4606 reflector.go:484] object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-xwkxj": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.708328 4606 
reflector.go:484] object-"openstack"/"openstack-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.708365 4606 reflector.go:484] object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.708392 4606 reflector.go:484] object-"openshift-oauth-apiserver"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.708422 4606 reflector.go:484] object-"openshift-machine-config-operator"/"mcc-proxy-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.708451 4606 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-server-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.708470 4606 reflector.go:484] object-"openstack"/"openstack-edpm-ipam": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.708501 4606 reflector.go:484] object-"openshift-apiserver"/"etcd-serving-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.708544 4606 reflector.go:484] object-"openshift-console-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.708575 4606 reflector.go:484] object-"openshift-cluster-machine-approver"/"machine-approver-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.708714 4606 reflector.go:484] object-"openshift-apiserver"/"etcd-client": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.708748 4606 reflector.go:484] object-"openshift-marketplace"/"marketplace-trusted-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 
17:36:28.708774 4606 reflector.go:484] object-"openshift-nmstate"/"nmstate-handler-dockercfg-5n9fm": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.708817 4606 reflector.go:484] object-"openshift-dns-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.708845 4606 reflector.go:484] object-"metallb-system"/"controller-dockercfg-gw2ls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.708870 4606 reflector.go:484] object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-rptcj": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.708896 4606 reflector.go:484] object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-d265k": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.708927 4606 reflector.go:484] object-"openstack"/"nova-cell1-compute-config": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.708999 4606 reflector.go:484] object-"openstack"/"cert-galera-openstack-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.709028 4606 reflector.go:484] object-"openstack"/"nova-cell1-conductor-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.709056 4606 reflector.go:484] object-"openstack"/"nova-cell1-novncproxy-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.709082 4606 reflector.go:484] object-"openstack"/"rabbitmq-erlang-cookie": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.709106 4606 reflector.go:484] object-"openshift-oauth-apiserver"/"etcd-serving-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc 
kubenswrapper[4606]: W0123 17:36:28.709130 4606 reflector.go:484] object-"openshift-cluster-machine-approver"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.709164 4606 reflector.go:484] object-"openshift-cluster-version"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.709190 4606 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.709215 4606 reflector.go:484] object-"openshift-authentication"/"v4-0-config-user-template-provider-selection": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.709249 4606 reflector.go:484] object-"openstack"/"horizon-config-data": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.709273 4606 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.709298 4606 reflector.go:484] object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.709321 4606 reflector.go:484] object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.709349 4606 reflector.go:484] object-"openshift-ingress"/"service-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.709374 4606 reflector.go:484] object-"openstack"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.709398 4606 reflector.go:484] object-"openstack"/"horizon-horizon-dockercfg-7b9bg": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client 
connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.709424 4606 reflector.go:484] object-"openshift-oauth-apiserver"/"etcd-client": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.709448 4606 reflector.go:484] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.709471 4606 reflector.go:484] object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.709494 4606 reflector.go:484] object-"metallb-system"/"speaker-certs-secret": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.709518 4606 reflector.go:484] object-"metallb-system"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.709541 4606 reflector.go:484] object-"openstack"/"galera-openstack-dockercfg-tmwt7": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.709565 4606 reflector.go:484] object-"openstack"/"ovncontroller-ovncontroller-dockercfg-5hx8t": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.709625 4606 reflector.go:484] object-"openstack"/"swift-swift-dockercfg-r67dr": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.709651 4606 reflector.go:484] object-"openstack"/"ceilometer-scripts": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.709674 4606 reflector.go:484] object-"openstack"/"openstack-edpm-ipam-dockercfg-8tp68": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.709699 4606 reflector.go:484] object-"openshift-apiserver"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: 
client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.709722 4606 reflector.go:484] object-"openshift-ingress"/"router-metrics-certs-default": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.709747 4606 reflector.go:484] object-"openstack"/"openstackclient-openstackclient-dockercfg-2c6nx": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.709771 4606 reflector.go:484] object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.709794 4606 reflector.go:484] object-"openstack"/"cert-nova-novncproxy-cell1-public-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.709817 4606 reflector.go:484] object-"openshift-controller-manager"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.709841 4606 reflector.go:484] object-"openstack"/"nova-cell0-conductor-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.709866 4606 reflector.go:484] object-"openstack"/"telemetry-ceilometer-dockercfg-gslrz": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.709888 4606 reflector.go:484] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.709911 4606 reflector.go:484] object-"openshift-authentication-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.709943 4606 reflector.go:484] object-"openshift-ovn-kubernetes"/"env-overrides": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.709957 4606 reflector.go:484] object-"openstack"/"ceilometer-config-data": watch of *v1.Secret ended with: an error on the 
server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.709987 4606 reflector.go:484] object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.710010 4606 reflector.go:484] object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.710033 4606 reflector.go:484] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.710066 4606 reflector.go:484] object-"openstack"/"cert-glance-default-public-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.710091 4606 reflector.go:484] object-"openshift-apiserver"/"encryption-config-1": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.710114 4606 reflector.go:484] object-"openshift-console-operator"/"trusted-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.710137 4606 reflector.go:484] object-"openshift-route-controller-manager"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.710163 4606 reflector.go:484] object-"cert-manager"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.710185 4606 reflector.go:484] object-"openstack"/"cert-ovn-metrics": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.710212 4606 reflector.go:484] object-"openshift-console-operator"/"console-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.710236 4606 reflector.go:484] 
object-"openshift-authentication"/"v4-0-config-user-template-error": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.710258 4606 reflector.go:484] object-"openshift-console-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.710284 4606 reflector.go:484] object-"openstack"/"cert-nova-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.710307 4606 reflector.go:484] object-"openshift-console"/"console-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.710329 4606 reflector.go:484] object-"openstack"/"keystone-scripts": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.710352 4606 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.710382 4606 reflector.go:484] object-"openshift-ingress-operator"/"metrics-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.710427 4606 reflector.go:484] object-"openstack"/"cert-swift-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.710450 4606 reflector.go:484] object-"openshift-authentication"/"v4-0-config-user-template-login": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.711912 4606 reflector.go:484] object-"openstack"/"dataplanenodeset-openstack-edpm-ipam": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.711956 4606 reflector.go:484] object-"openshift-apiserver"/"config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.711977 4606 reflector.go:484] object-"openstack"/"cinder-api-config-data": 
watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.711995 4606 reflector.go:484] object-"openshift-controller-manager"/"openshift-global-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.712020 4606 reflector.go:484] object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.712045 4606 reflector.go:484] object-"openstack"/"swift-conf": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.712073 4606 reflector.go:484] object-"openshift-multus"/"cni-copy-resources": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.712097 4606 reflector.go:484] object-"openshift-cluster-samples-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.712121 4606 reflector.go:484] object-"openshift-image-registry"/"installation-pull-secrets": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.712148 4606 reflector.go:484] object-"openshift-kube-storage-version-migrator-operator"/"config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.712172 4606 reflector.go:484] object-"openstack-operators"/"test-operator-controller-manager-dockercfg-f68z4": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.712197 4606 reflector.go:484] object-"openshift-machine-config-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.712228 4606 reflector.go:484] object-"openstack-operators"/"infra-operator-webhook-server-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.712252 4606 reflector.go:484] 
object-"openshift-authentication"/"audit": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.712285 4606 reflector.go:484] object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.712311 4606 reflector.go:484] object-"openshift-console"/"service-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.712336 4606 reflector.go:484] object-"openshift-service-ca"/"signing-cabundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.712368 4606 reflector.go:484] object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-p7nqh": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.712391 4606 reflector.go:484] object-"openstack"/"nova-extra-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.712414 4606 reflector.go:484] object-"openshift-config-operator"/"config-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.712439 4606 reflector.go:484] object-"openstack"/"galera-openstack-cell1-dockercfg-vwp4v": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.712464 4606 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.712497 4606 reflector.go:484] object-"openstack"/"cert-kube-state-metrics-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.712521 4606 reflector.go:484] object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-cv87c": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc 
kubenswrapper[4606]: W0123 17:36:28.712542 4606 reflector.go:484] object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-xr9l2": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.712566 4606 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.712608 4606 reflector.go:484] object-"openshift-route-controller-manager"/"config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.712637 4606 reflector.go:484] object-"openshift-authentication-operator"/"authentication-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.712661 4606 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.712685 4606 reflector.go:484] object-"metallb-system"/"metallb-memberlist": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.712708 4606 reflector.go:484] object-"openstack"/"swift-ring-files": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.712732 4606 reflector.go:484] object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.712775 4606 reflector.go:484] object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.712798 4606 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.712820 4606 reflector.go:484] object-"openshift-operator-lifecycle-manager"/"pprof-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: 
client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.712849 4606 reflector.go:484] object-"metallb-system"/"frr-startup": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.712872 4606 reflector.go:484] object-"openstack"/"combined-ca-bundle": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.712895 4606 reflector.go:484] object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.712917 4606 reflector.go:484] object-"openshift-nmstate"/"nmstate-operator-dockercfg-2d8qz": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.712940 4606 reflector.go:484] object-"openstack-operators"/"webhook-server-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.712962 4606 reflector.go:484] object-"openstack"/"memcached-memcached-dockercfg-kqgz5": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.712984 4606 reflector.go:484] object-"openshift-console-operator"/"serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.713013 4606 reflector.go:484] object-"openstack"/"neutron-config": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.713035 4606 reflector.go:484] object-"openshift-route-controller-manager"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.713057 4606 reflector.go:484] object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.713079 4606 reflector.go:484] object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-sknqq": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from 
the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.713102 4606 reflector.go:484] object-"openshift-machine-api"/"machine-api-operator-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.713126 4606 reflector.go:484] object-"openstack"/"glance-glance-dockercfg-wjzzh": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.713149 4606 reflector.go:484] object-"openshift-ingress-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.713170 4606 reflector.go:484] object-"openshift-ingress-canary"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.713191 4606 reflector.go:484] object-"openstack"/"rabbitmq-cell1-server-dockercfg-qpkp2": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.713215 4606 reflector.go:484] object-"openshift-image-registry"/"registry-dockercfg-kzzsd": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.713236 4606 reflector.go:484] object-"openshift-etcd-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.713256 4606 reflector.go:484] object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.713280 4606 reflector.go:484] object-"openstack"/"cert-nova-public-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.713302 4606 reflector.go:484] object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-h5glf": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.713324 4606 reflector.go:484] object-"openstack"/"cinder-cinder-dockercfg-5rrnn": watch of *v1.Secret ended with: an error on 
the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.713348 4606 reflector.go:484] object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-xzzh6": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.713370 4606 reflector.go:484] object-"openstack"/"keystone": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.713383 4606 reflector.go:484] object-"openshift-etcd-operator"/"etcd-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.713405 4606 reflector.go:484] object-"openstack"/"memcached-config-data": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.713428 4606 reflector.go:484] object-"openstack"/"cert-ovncontroller-ovndbs": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.713450 4606 reflector.go:484] object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-pnqkc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.713461 4606 reflector.go:484] object-"openstack"/"ovnnorthd-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.713567 4606 reflector.go:484] object-"openstack"/"cert-ovndbcluster-sb-ovndbs": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.713581 4606 reflector.go:484] object-"openshift-controller-manager"/"client-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.713617 4606 reflector.go:484] object-"openshift-controller-manager-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.713701 4606 reflector.go:484] object-"openstack"/"ovnnorthd-scripts": watch of *v1.ConfigMap ended with: an error on the 
server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.713793 4606 reflector.go:484] object-"openshift-ingress-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.713802 4606 reflector.go:484] object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.713851 4606 reflector.go:484] object-"openshift-image-registry"/"image-registry-certificates": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.713875 4606 reflector.go:484] object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-fgd6m": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.713882 4606 reflector.go:484] object-"openshift-config-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.713898 4606 reflector.go:484] object-"openstack"/"cert-cinder-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.713917 4606 reflector.go:484] object-"openshift-oauth-apiserver"/"trusted-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.713927 4606 reflector.go:484] object-"hostpath-provisioner"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.713938 4606 reflector.go:484] object-"openshift-network-diagnostics"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.713819 4606 reflector.go:484] object-"openstack"/"openstack-scripts": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714011 4606 reflector.go:484] object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-f5qpg": 
watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714026 4606 reflector.go:484] object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-qt8n2": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.713836 4606 reflector.go:484] object-"openstack"/"nova-scheduler-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714052 4606 reflector.go:484] object-"openshift-service-ca-operator"/"serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714056 4606 reflector.go:484] object-"openshift-machine-api"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714035 4606 reflector.go:484] object-"openstack"/"glance-default-external-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714075 4606 reflector.go:484] object-"openshift-console"/"trusted-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714098 4606 reflector.go:484] object-"openstack"/"cert-barbican-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714100 4606 reflector.go:484] object-"openshift-nmstate"/"plugin-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714129 4606 reflector.go:484] object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-j4bws": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714149 4606 reflector.go:484] object-"openshift-dns"/"dns-default": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714167 4606 reflector.go:484] object-"openstack"/"placement-scripts": watch 
of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714179 4606 reflector.go:484] object-"openstack"/"openstack-cell1-scripts": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714189 4606 reflector.go:484] object-"openshift-dns"/"dns-dockercfg-jwfmh": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714211 4606 reflector.go:484] object-"openshift-authentication"/"v4-0-config-system-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714231 4606 reflector.go:484] object-"openstack"/"cert-neutron-public-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714252 4606 reflector.go:484] object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-njw5g": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714274 4606 reflector.go:484] object-"openstack"/"kube-state-metrics-tls-config": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714151 4606 reflector.go:484] object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714305 4606 reflector.go:484] object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714342 4606 reflector.go:484] object-"openshift-ingress"/"router-dockercfg-zdk86": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714367 4606 reflector.go:484] object-"openstack"/"neutron-neutron-dockercfg-ffzkl": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714387 4606 reflector.go:484] 
object-"openshift-apiserver"/"trusted-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: E0123 17:36:28.714385 4606 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0\": http2: client connection lost" pod="openstack/ovsdbserver-sb-0" volumeName="ovndbcluster-sb-etc-ovn" Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714407 4606 reflector.go:484] object-"openstack"/"keystone-keystone-dockercfg-cxp6x": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714433 4606 reflector.go:484] object-"openshift-console"/"console-oauth-config": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714453 4606 reflector.go:484] object-"openstack"/"cert-ovndbcluster-nb-ovndbs": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714495 4606 reflector.go:484] object-"openshift-route-controller-manager"/"serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714516 4606 reflector.go:484] object-"openstack"/"cert-placement-public-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714529 4606 reflector.go:484] object-"openshift-machine-config-operator"/"mco-proxy-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714551 4606 reflector.go:484] object-"openshift-authentication"/"v4-0-config-system-router-certs": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714561 4606 reflector.go:484] object-"openstack"/"openstack-aee-default-env": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714574 4606 reflector.go:484] object-"openshift-machine-config-operator"/"node-bootstrapper-token": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: 
http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714622 4606 reflector.go:484] object-"openstack"/"dataplane-ansible-ssh-private-key-secret": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714645 4606 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714653 4606 reflector.go:484] object-"openshift-image-registry"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714667 4606 reflector.go:484] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714646 4606 reflector.go:484] object-"openshift-nmstate"/"nginx-conf": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714705 4606 reflector.go:484] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714730 4606 reflector.go:484] object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714749 4606 reflector.go:484] object-"openshift-multus"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714753 4606 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714768 4606 reflector.go:484] object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714775 4606 reflector.go:484] 
object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714788 4606 reflector.go:484] object-"openshift-machine-api"/"machine-api-operator-images": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714791 4606 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714812 4606 reflector.go:484] object-"openstack"/"cert-keystone-public-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714819 4606 reflector.go:484] object-"openshift-machine-api"/"control-plane-machine-set-operator-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714847 4606 reflector.go:484] object-"openshift-authentication-operator"/"trusted-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714865 4606 reflector.go:484] object-"openshift-marketplace"/"community-operators-dockercfg-dmngl": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714885 4606 reflector.go:484] object-"openshift-multus"/"multus-admission-controller-secret": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714893 4606 reflector.go:484] object-"openshift-service-ca-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714915 4606 reflector.go:484] object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714938 4606 reflector.go:484] object-"openshift-cluster-machine-approver"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the 
request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714949 4606 reflector.go:484] object-"openstack"/"nova-nova-dockercfg-v5b5x": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714966 4606 reflector.go:484] object-"openshift-image-registry"/"image-registry-operator-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714991 4606 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715011 4606 reflector.go:484] object-"openshift-ingress"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714708 4606 reflector.go:484] object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715067 4606 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715085 4606 reflector.go:484] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715109 4606 reflector.go:484] object-"openshift-multus"/"multus-ac-dockercfg-9lkdf": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714824 4606 reflector.go:484] object-"openshift-image-registry"/"trusted-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714869 4606 reflector.go:484] object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715143 4606 reflector.go:484] object-"openstack"/"placement-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: 
http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715172 4606 reflector.go:484] object-"openstack"/"ovndbcluster-nb-scripts": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715179 4606 reflector.go:484] pkg/kubelet/config/apiserver.go:66: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715196 4606 reflector.go:484] object-"openshift-config-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715201 4606 reflector.go:484] object-"openstack"/"openstack-config-data": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715217 4606 reflector.go:484] object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714552 4606 reflector.go:484] object-"openstack"/"cert-keystone-internal-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715240 4606 reflector.go:484] object-"openshift-service-ca-operator"/"service-ca-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715244 4606 reflector.go:484] object-"openstack"/"rabbitmq-config-data": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715258 4606 reflector.go:484] object-"openstack"/"rabbitmq-cell1-plugins-conf": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715271 4606 reflector.go:484] object-"openstack"/"keystone-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714668 4606 reflector.go:484] object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: 
client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715261 4606 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715326 4606 reflector.go:484] object-"openshift-console"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714728 4606 reflector.go:484] object-"openshift-multus"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714848 4606 reflector.go:484] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714919 4606 reflector.go:484] object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715408 4606 reflector.go:484] object-"openstack"/"rabbitmq-plugins-conf": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715425 4606 reflector.go:484] object-"cert-manager"/"cert-manager-webhook-dockercfg-xvsfn": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715013 4606 reflector.go:484] object-"openshift-console"/"console-dockercfg-f62pw": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715459 4606 reflector.go:484] object-"openshift-apiserver"/"serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715471 4606 reflector.go:484] object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-2m76w": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715483 4606 reflector.go:484] object-"openshift-marketplace"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable 
to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715494 4606 reflector.go:484] object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-vjw5l": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715505 4606 reflector.go:484] object-"openshift-ingress-canary"/"canary-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715510 4606 reflector.go:484] object-"openshift-authentication"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715530 4606 reflector.go:484] object-"openstack"/"dns": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715546 4606 reflector.go:484] object-"openstack"/"barbican-api-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715557 4606 reflector.go:484] object-"openshift-route-controller-manager"/"client-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714991 4606 reflector.go:484] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: E0123 17:36:28.716559 4606 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": http2: client connection lost" Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715575 4606 reflector.go:484] object-"openshift-apiserver-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.714967 4606 reflector.go:484] object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715219 4606 reflector.go:484] object-"openstack"/"swift-proxy-config-data": watch of *v1.Secret ended with: an error on the server ("unable to 
decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715126 4606 reflector.go:484] object-"openstack"/"placement-placement-dockercfg-7sdg2": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715614 4606 reflector.go:484] object-"openshift-authentication"/"v4-0-config-system-session": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715630 4606 reflector.go:484] object-"openshift-authentication-operator"/"serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715666 4606 reflector.go:484] object-"openshift-multus"/"default-cni-sysctl-allowlist": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715684 4606 reflector.go:484] object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-bq96s": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715691 4606 reflector.go:484] object-"openshift-etcd-operator"/"etcd-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715697 4606 reflector.go:484] object-"openstack"/"cinder-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715712 4606 reflector.go:484] object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715727 4606 reflector.go:484] object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715799 4606 reflector.go:484] object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715814 4606 reflector.go:484] object-"openshift-etcd-operator"/"etcd-operator-config": watch of 
*v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715825 4606 reflector.go:484] object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715842 4606 reflector.go:484] object-"openshift-apiserver"/"audit-1": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715857 4606 reflector.go:484] object-"openshift-nmstate"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715874 4606 reflector.go:484] object-"openstack"/"cert-rabbitmq-cell1-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715890 4606 reflector.go:484] object-"openshift-dns"/"dns-default-metrics-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715906 4606 reflector.go:484] object-"openstack"/"cinder-scripts": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715922 4606 reflector.go:484] object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715932 4606 reflector.go:484] object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-q4j7f": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715950 4606 reflector.go:484] object-"openshift-network-console"/"networking-console-plugin": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715965 4606 reflector.go:484] object-"openshift-ingress"/"router-stats-default": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715980 4606 reflector.go:484] object-"openshift-nmstate"/"default-dockercfg-pwm5q": watch of 
*v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715996 4606 reflector.go:484] object-"openstack"/"swift-storage-config-data": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716011 4606 reflector.go:484] object-"metallb-system"/"metallb-webhook-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716018 4606 reflector.go:484] object-"openshift-controller-manager"/"serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716035 4606 reflector.go:484] object-"openstack"/"rabbitmq-server-conf": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716049 4606 reflector.go:484] object-"openstack"/"rabbitmq-cell1-default-user": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716069 4606 reflector.go:484] object-"openshift-dns-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716083 4606 reflector.go:484] object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716095 4606 reflector.go:484] object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716118 4606 reflector.go:484] object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716134 4606 reflector.go:484] object-"openshift-controller-manager-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716150 4606 reflector.go:484] 
object-"openstack"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715282 4606 reflector.go:484] object-"openshift-image-registry"/"image-registry-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716164 4606 reflector.go:484] object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-5lnmm": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716173 4606 reflector.go:484] object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715367 4606 reflector.go:484] object-"openshift-cluster-version"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.715382 4606 reflector.go:484] object-"openstack"/"barbican-barbican-dockercfg-5kppl": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716191 4606 reflector.go:484] object-"openshift-oauth-apiserver"/"encryption-config-1": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716205 4606 reflector.go:484] object-"openshift-authentication-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716219 4606 reflector.go:484] object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716236 4606 reflector.go:484] object-"openshift-service-ca"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716253 4606 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding 
Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716269 4606 reflector.go:484] object-"openshift-etcd-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716284 4606 reflector.go:484] object-"openshift-network-console"/"networking-console-plugin-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716297 4606 reflector.go:484] object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716312 4606 reflector.go:484] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716327 4606 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716344 4606 reflector.go:484] object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716360 4606 reflector.go:484] object-"openshift-console"/"console-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716376 4606 reflector.go:484] object-"openshift-service-ca"/"signing-key": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716390 4606 reflector.go:484] object-"openshift-oauth-apiserver"/"serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716406 4606 reflector.go:484] object-"openshift-multus"/"multus-daemon-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716422 4606 reflector.go:484] object-"openshift-apiserver"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch 
stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716437 4606 reflector.go:484] object-"openshift-ingress"/"router-certs-default": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716451 4606 reflector.go:484] object-"hostpath-provisioner"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716466 4606 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716481 4606 reflector.go:484] object-"openstack"/"ovsdbserver-sb": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716495 4606 reflector.go:484] object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716509 4606 reflector.go:484] object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716521 4606 reflector.go:484] object-"openstack"/"cert-ovnnorthd-ovndbs": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716526 4606 reflector.go:484] object-"openshift-authentication"/"v4-0-config-system-cliconfig": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716579 4606 reflector.go:484] object-"cert-manager"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716612 4606 reflector.go:484] object-"openstack"/"cert-horizon-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716627 4606 reflector.go:484] object-"openstack"/"barbican-worker-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: 
client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716646 4606 reflector.go:484] object-"openstack"/"cert-barbican-public-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716660 4606 reflector.go:484] object-"openstack"/"barbican-keystone-listener-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716681 4606 reflector.go:484] object-"openshift-ingress-canary"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716701 4606 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716735 4606 reflector.go:484] object-"openshift-ingress-canary"/"default-dockercfg-2llfx": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.716751 4606 reflector.go:484] object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: W0123 17:36:28.723665 4606 reflector.go:484] object-"openshift-cluster-samples-operator"/"samples-operator-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.728285 4606 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": read tcp 192.168.126.11:38040->192.168.126.11:10257: read: connection reset by peer" start-of-body= Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.728315 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": read tcp 192.168.126.11:38040->192.168.126.11:10257: read: connection reset by peer" Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.746424 4606 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection reset 
by peer" start-of-body= Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.746794 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection reset by peer" Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.758026 4606 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-kpm7l container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.758067 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-kpm7l" podUID="f542164e-05c2-4746-988b-0b4676772078" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.784821 4606 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.784873 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.770480 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wbg8x" podUID="9ca388c9-7c19-4efb-93da-2d889c972fa7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.55:8081/readyz\": read tcp 10.217.0.2:34912->10.217.0.55:8081: read: connection reset by peer" Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.795665 4606 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.795922 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.796277 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wbg8x" podUID="9ca388c9-7c19-4efb-93da-2d889c972fa7" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.55:8081/readyz\": dial tcp 10.217.0.55:8081: connect: connection refused" Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.796418 4606 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.796492 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.801292 4606 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 18.861620449s: [/var/lib/containers/storage/overlay/27a24b33c723dbd41b6589fd748cda394915cb5be29ad25ece7a9bb4faef1adb/diff ]; will not log again for this container unless duration exceeds 2s Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.849551 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.849824 4606 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T17:36:28Z","lastTransitionTime":"2026-01-23T17:36:28Z","reason":"KubeletNotReady","message":"container runtime is down"} Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.851179 4606 patch_prober.go:28] interesting pod/etcd-crc container/etcd namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.126.11:9980/readyz\": context deadline exceeded" start-of-body= Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.851214 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-crc" podUID="2139d3e2895fc6797b9c76a1b4c9886d" containerName="etcd" probeResult="failure" output="Get \"https://192.168.126.11:9980/readyz\": context deadline exceeded" Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.861364 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-6lkpf" podUID="7671c11a-cd18-4409-addc-caeb99b22cae" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.73:8081/readyz\": dial tcp 10.217.0.73:8081: connect: connection refused" Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.897247 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx" podUID="62e61905-d16f-41cc-b972-f788ced5c022" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/healthz\": read tcp 10.217.0.2:54994->10.217.0.80:8081: read: connection reset by peer" Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.897293 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gzdtf" podUID="9b69bbaa-8277-48b5-b0aa-f8d7b7fec647" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.71:8081/readyz\": read tcp 10.217.0.2:39776->10.217.0.71:8081: read: connection reset by peer" Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 
17:36:28.898633 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx" podUID="62e61905-d16f-41cc-b972-f788ced5c022" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/healthz\": dial tcp 10.217.0.80:8081: connect: connection refused"
Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.917822 4606 trace.go:236] Trace[1239193368]: "iptables ChainExists" (23-Jan-2026 17:36:09.940) (total time: 18977ms):
Jan 23 17:36:28 crc kubenswrapper[4606]: Trace[1239193368]: [18.977240135s] [18.977240135s] END
Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.957213 4606 scope.go:117] "RemoveContainer" containerID="4929362670ff859f1305171581b6c8f327a2ad3201fba6d86db4a445d14860ce"
Jan 23 17:36:28 crc kubenswrapper[4606]: I0123 17:36:28.968352 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wbg8x" podUID="9ca388c9-7c19-4efb-93da-2d889c972fa7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.55:8081/readyz\": dial tcp 10.217.0.55:8081: connect: connection refused"
Jan 23 17:36:29 crc kubenswrapper[4606]: E0123 17:36:29.013147 4606 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="50.648s"
Jan 23 17:36:29 crc kubenswrapper[4606]: E0123 17:36:29.033000 4606 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"crc\": the object has been modified; please apply your changes to the latest version and try again"
Jan 23 17:36:29 crc kubenswrapper[4606]: I0123 17:36:29.097267 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc"
Jan 23 17:36:29 crc kubenswrapper[4606]: I0123 17:36:29.097305 4606 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-69f744f599-kpm7l"
Jan 23 17:36:29 crc kubenswrapper[4606]: I0123 17:36:29.101809 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qskqc"
Jan 23 17:36:29 crc kubenswrapper[4606]: I0123 17:36:29.101863 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 23 17:36:29 crc kubenswrapper[4606]: I0123 17:36:29.101883 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wbg8x"
Jan 23 17:36:29 crc kubenswrapper[4606]: I0123 17:36:29.108971 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wbg8x" podUID="9ca388c9-7c19-4efb-93da-2d889c972fa7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.55:8081/readyz\": dial tcp 10.217.0.55:8081: connect: connection refused"
Jan 23 17:36:29 crc kubenswrapper[4606]: I0123 17:36:29.110428 4606 scope.go:117] "RemoveContainer" containerID="6bd41e3848a02dcd168cc24c5a2496cbaedc8e12934ff9e52e20f3e1720db312"
Jan 23 17:36:29 crc kubenswrapper[4606]: I0123 17:36:29.151569 4606 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"2415c450d942872f3918f80fea9a1a63de4308a39c93bd4c9a76ea8e687cf21f"} pod="openshift-authentication-operator/authentication-operator-69f744f599-kpm7l" containerMessage="Container authentication-operator failed liveness probe, will be restarted"
Jan 23 17:36:29 crc kubenswrapper[4606]: I0123 17:36:29.151635 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-69f744f599-kpm7l" podUID="f542164e-05c2-4746-988b-0b4676772078" containerName="authentication-operator" containerID="cri-o://2415c450d942872f3918f80fea9a1a63de4308a39c93bd4c9a76ea8e687cf21f" gracePeriod=30
Jan 23 17:36:29 crc kubenswrapper[4606]: I0123 17:36:29.159423 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lc2n5" podUID="5cfbb45f-ca05-4e78-8392-b905640374fd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/readyz\": dial tcp 10.217.0.74:8081: connect: connection refused"
Jan 23 17:36:29 crc kubenswrapper[4606]: I0123 17:36:29.226490 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qn8hz" podUID="7215e9f7-67ae-40ad-befc-e411c5acb586" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": dial tcp 10.217.0.75:8081: connect: connection refused"
Jan 23 17:36:29 crc kubenswrapper[4606]: I0123 17:36:29.241855 4606 scope.go:117] "RemoveContainer" containerID="ae71f5feb61ac0fdc01d5557db5e04028246a8e932ab5e9739ba55dfcdd38258"
Jan 23 17:36:29 crc kubenswrapper[4606]: E0123 17:36:29.242432 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9"
Jan 23 17:36:29 crc kubenswrapper[4606]: I0123 17:36:29.252850 4606 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 23 17:36:29 crc kubenswrapper[4606]: I0123 17:36:29.252915 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 23 17:36:29 crc kubenswrapper[4606]: I0123 17:36:29.259943 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-lz99v" podUID="29bd6d8a-dcc4-46b9-b735-d38e9d7a75a8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/readyz\": dial tcp 10.217.0.76:8081: connect: connection refused"
Jan 23 17:36:29 crc kubenswrapper[4606]: I0123 17:36:29.261993 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-dv22m" podUID="9ae13310-86a4-4a3f-9967-cb88d1bfa536" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/readyz\": dial tcp 10.217.0.78:8081: connect: connection refused"
Jan 23 17:36:29 crc kubenswrapper[4606]: I0123 17:36:29.263725 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-dv22m"
Jan 23 17:36:29 crc kubenswrapper[4606]: E0123 17:36:29.280046 4606 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ae13310_86a4_4a3f_9967_cb88d1bfa536.slice/crio-4757c65b3ffd5c6be2c30c0e6b1e7dc3a563783965837e3f1fced5bb323d6ae1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod566e35f9_5c83_4267_bcf6_898458e0a722.slice/crio-936fd703289b400be2e820f661b3d1938292f4620df7e1c8f3d13cbaec3d0d31.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-a402b82855acb2e82d07fb32eb7ed90ca465305d6fbd7e170f327f68b803be5f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae465942_87ad_4a76_8ead_5367413aa9a1.slice/crio-39a6130538689482a2cd5a074a06d172c646a93ad9ab688144748331b23b02ec.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ca388c9_7c19_4efb_93da_2d889c972fa7.slice/crio-fb6bb4f275a5cf666f03c01b72d9164b4bc8e9ec4fe9086f6b23b06f2caaa70f.scope\": RecentStats: unable to find data in memory cache]"
Jan 23 17:36:29 crc kubenswrapper[4606]: I0123 17:36:29.424697 4606 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 23 17:36:29 crc kubenswrapper[4606]: I0123 17:36:29.425016 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-kngd9" podUID="3112e8c9-b0cb-4d5e-bba9-1caf429b45dc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/readyz\": dial tcp 10.217.0.79:8081: connect: connection refused"
Jan 23 17:36:29 crc kubenswrapper[4606]: I0123 17:36:29.467464 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ftzlw" podUID="0ba239ef-03fb-4744-95cb-5aea8d9b0663" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/readyz\": dial tcp 10.217.0.81:8081: connect: connection refused"
Jan 23 17:36:29 crc kubenswrapper[4606]: I0123 17:36:29.478432 4606 scope.go:117] "RemoveContainer" containerID="1e7f24a63bce23dc885ead5faf43e9f19f6bd211ef5d0acb3f8484dc49afcbbf"
Jan 23 17:36:29 crc kubenswrapper[4606]: I0123 17:36:29.501130 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-5bmr6" podUID="566e35f9-5c83-4267-bcf6-898458e0a722" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": dial tcp 10.217.0.82:8081: connect: connection refused"
Jan 23 17:36:29 crc kubenswrapper[4606]: I0123 17:36:29.535824 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sf5dg" podUID="23ed80ea-dfab-4552-86d1-972563d8d4b6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": dial tcp 10.217.0.77:8081: connect: connection refused"
Jan 23 17:36:29 crc kubenswrapper[4606]: I0123 17:36:29.582480 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-ttmg4" podUID="d040f5d8-e2a8-4171-9f70-797e87f8af96" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/readyz\": dial tcp 10.217.0.83:8081: connect: connection refused"
Jan 23 17:36:29 crc kubenswrapper[4606]: I0123 17:36:29.672820 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-h97mh" podUID="cb8c2204-661e-4daf-b748-c329e81e04a8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": dial tcp 10.217.0.84:8081: connect: connection refused"
Jan 23 17:36:29 crc kubenswrapper[4606]: I0123 17:36:29.707367 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-dnlq6" podUID="ae465942-87ad-4a76-8ead-5367413aa9a1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/readyz\": dial tcp 10.217.0.86:8081: connect: connection refused"
Jan 23 17:36:29 crc kubenswrapper[4606]: I0123 17:36:29.901887 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.006055 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2z4jl" podUID="14094ea9-8be6-405b-a990-9bdbda2a50cb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": dial tcp 10.217.0.85:8081: connect: connection refused"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.196204 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.331918 4606 trace.go:236] Trace[1484264853]: "Calculate volume metrics of registry-storage for pod openshift-image-registry/image-registry-66df7c8f76-mb5ws" (23-Jan-2026 17:36:28.131) (total time: 2200ms):
Jan 23 17:36:30 crc kubenswrapper[4606]: Trace[1484264853]: [2.200628587s] [2.200628587s] END
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.460320 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.460555 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.462110 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.462373 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.462944 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.463119 4606 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.463255 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.463363 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.463601 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.463847 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.464311 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.464558 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.464681 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.464701 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.464862 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.465309 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.465600 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.465789 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.465934 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.470828 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.471080 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.471192 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.472039 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.472125 4606 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.472226 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.472259 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.472368 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.472382 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.472878 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.472899 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.473025 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.473093 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.473126 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-5lnmm"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.473175 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-2d8qz"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.473262 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.473346 4606 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.474211 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.474329 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.474540 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.474855 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.474945 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.475060 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.475158 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.475282 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.475387 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.475485 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.475551 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.475635 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.475671 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.475728 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.475777 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.475802 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.475848 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.475881 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.475958 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.475962 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-pwm5q"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.476020 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.476054 4606 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.476119 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.476182 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.476210 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.476240 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.476267 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.476319 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.476335 4606 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.476384 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.476401 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.476436 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.476479 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.476491 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.476187 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.476025 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.476566 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.476212 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.476654 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.476390 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.476721 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-7b9bg"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.476789 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.476814 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.476031 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.476919 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.476919 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.478678 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.478775 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.482617 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.519540 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.519797 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.519914 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-vjw5l"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.539277 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.541972 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.541998 4606 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.542022 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.542152 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-f68z4"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.542197 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.542220 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.542333 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.542437 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.542455 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-lqw46"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.545138 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.545156 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.545192 4606 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.545298 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.545339 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.545463 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.545503 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.545572 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.545609 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.545759 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.551031 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.551076 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.551133 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.551081 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-p7nqh"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.551331 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.551353 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-bq96s"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.551409 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.551489 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.551511 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8tp68"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.551693 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-sknqq"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.551808 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.551913 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-cv87c"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.552002 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-pnqkc"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.552087 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.552022 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.552051 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.554159 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-8f4d99f8c-6x2sb" podUID="1f333c20-09e6-410a-88d1-b2e6bfc73768" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.52:8081/healthz\": dial tcp 10.217.0.52:8081: connect: connection refused"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.554164 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-8f4d99f8c-6x2sb" podUID="1f333c20-09e6-410a-88d1-b2e6bfc73768" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.52:8081/readyz\": dial tcp 10.217.0.52:8081: connect: connection refused"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.609799 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-xwkxj"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.610039 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.610183 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.610327 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.610475 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.610637 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.610801 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-wzgq4"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.610946 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-q4j7f"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.611093 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.611242 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.611370 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.639168 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-bbn8s"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.669804 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jmspw" podUID="194942ca-b627-42a1-9491-9e00b44d51d7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.72:8081/healthz\": dial tcp 10.217.0.72:8081: connect: connection refused"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.670102 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jmspw" podUID="194942ca-b627-42a1-9491-9e00b44d51d7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.72:8081/readyz\": dial tcp 10.217.0.72:8081: connect: connection refused"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.756533 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.759177 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.759786 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.759872 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.759972 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.770321 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.770604 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.770953 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-zhmfb"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.782545 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.783719 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-kqgz5"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.800335 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-wfjfp"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.814313 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030" containerName="galera" probeResult="failure" output="command timed out"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.814628 4606 request.go:700] Waited for 1.019710263s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-operator-dockercfg-2bh8d&resourceVersion=59024
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.817744 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.830246 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.834611 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-qt8n2"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.834935 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.835062 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-tmwt7"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.835171 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.839113 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.848884 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.902665 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.902898 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.903041 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.938977 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.941727 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx" podUID="62e61905-d16f-41cc-b972-f788ced5c022" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/healthz\": dial tcp 10.217.0.80:8081: connect: connection refused"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.941780 4606 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.941777 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx" podUID="62e61905-d16f-41cc-b972-f788ced5c022" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": dial tcp 10.217.0.80:8081: connect: connection refused"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.944432 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.945688 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.946258 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.946744 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.946768 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.947437 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.947817 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.970397 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.972156 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 23 17:36:30 crc kubenswrapper[4606]: I0123 17:36:30.994363 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-qs7v4"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.022060 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.047394 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.061046 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wjzzh"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.109864 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.124904 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.134872 4606 generic.go:334] "Generic (PLEG): container finished" podID="dd93c16a-9cd2-4011-a291-40070e3bfe74" containerID="2371365fe1d7ea6a8aa02a1387d8d67fbc8787bdd05c88cca89c4e3dbe5a0597" exitCode=137
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.135621 4606 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manager" containerStatusID={"Type":"cri-o","ID":"2382f522199b4d6e4b9e5dd15acb943accc7e5df8076c8c32c30cca10ddb913c"} pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx" containerMessage="Container manager failed liveness probe, will be restarted"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.135664 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx" podUID="62e61905-d16f-41cc-b972-f788ced5c022" containerName="manager" containerID="cri-o://2382f522199b4d6e4b9e5dd15acb943accc7e5df8076c8c32c30cca10ddb913c" gracePeriod=10
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.135694 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd93c16a-9cd2-4011-a291-40070e3bfe74","Type":"ContainerDied","Data":"2371365fe1d7ea6a8aa02a1387d8d67fbc8787bdd05c88cca89c4e3dbe5a0597"}
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.140215 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.140538 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-crtm2"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.162159 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.190327 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-2c6nx"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.336548 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.336944 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.337115 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.337275 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.340743 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.340907 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.364066 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-n5jq4" podUID="d78dc70d-59e8-4235-b345-f523b414fb31" containerName="hostpath-provisioner" probeResult="failure" output="Get \"http://10.217.0.39:9898/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.380368 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.380645 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.420821 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.421036 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.438877 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.439130 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.481434 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-cxp6x"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.481616 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-v5b5x"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.489448 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.504978 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.507161 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.550868 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.551077 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.577424 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.611947 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.616087 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="965dca6f-f73d-4ee0-a511-91793448cc67" containerName="ovn-northd" probeResult="failure" output="command timed out"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.617163 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030" containerName="galera" probeResult="failure" output="command timed out"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.617243 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="684acded-8efb-47ea-8bb4-0212874a6826" containerName="galera" probeResult="failure" output="command timed out"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.617413 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="684acded-8efb-47ea-8bb4-0212874a6826" containerName="galera" probeResult="failure" output="command timed out"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.637068 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-86756cb559-lfhdj" podUID="4d18bd93-7560-4405-9738-38876237699e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": dial tcp 10.217.0.87:8081: connect: connection refused"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.638051 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-86756cb559-lfhdj" podUID="4d18bd93-7560-4405-9738-38876237699e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/healthz\": dial tcp 10.217.0.87:8081: connect: connection refused"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.672124 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.672175 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-5rrnn"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.672128 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.695067 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.786962 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.796985 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.802335 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.802707 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.819914 4606 request.go:700] Waited for 1.933937868s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-certs-default&resourceVersion=59406
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.820396 4606 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.820400 4606 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-xvsfn"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.897640 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.897842 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.897911 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.902216 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.906226 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.924475 4606 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.936122 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.955571 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.989073 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 23 17:36:31 crc kubenswrapper[4606]: I0123 17:36:31.989315 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.007508 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.034844 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.063915 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.081900 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.081921 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-h5glf"
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.100516 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.116482 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.138820 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.164958 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.172618 4606 generic.go:334] "Generic (PLEG): container finished" podID="6ab09cf8-68a3-4fac-aa7b-824a2626b3db" containerID="4028760cdb68286d3bddbb85c782432df58ce062008e35f1e686b3b5f7adc898" exitCode=1
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.172635 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-f6w2s" event={"ID":"6ab09cf8-68a3-4fac-aa7b-824a2626b3db","Type":"ContainerDied","Data":"4028760cdb68286d3bddbb85c782432df58ce062008e35f1e686b3b5f7adc898"}
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.173412 4606 scope.go:117] "RemoveContainer" containerID="4028760cdb68286d3bddbb85c782432df58ce062008e35f1e686b3b5f7adc898"
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.178708 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.186716 4606 generic.go:334] "Generic (PLEG): container finished" podID="29bd6d8a-dcc4-46b9-b735-d38e9d7a75a8" containerID="550738b9d7e7c9b9802e2298b675134b434d186cffe338fd98704e835a9bd21c" exitCode=1
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.186826 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-lz99v" event={"ID":"29bd6d8a-dcc4-46b9-b735-d38e9d7a75a8","Type":"ContainerDied","Data":"550738b9d7e7c9b9802e2298b675134b434d186cffe338fd98704e835a9bd21c"}
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.187637 4606 scope.go:117] "RemoveContainer" containerID="550738b9d7e7c9b9802e2298b675134b434d186cffe338fd98704e835a9bd21c"
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.189998 4606 generic.go:334] "Generic (PLEG): container finished" podID="c58688d9-32d0-4a57-a216-1110af3dd7dc" containerID="bc3c107e33675eadf9c74bf6862e2ef921ddab8aa6a6790d1749b73eee75384f" exitCode=1
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.190155 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-qxt8l" event={"ID":"c58688d9-32d0-4a57-a216-1110af3dd7dc","Type":"ContainerDied","Data":"bc3c107e33675eadf9c74bf6862e2ef921ddab8aa6a6790d1749b73eee75384f"}
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.190820 4606 scope.go:117] "RemoveContainer" containerID="bc3c107e33675eadf9c74bf6862e2ef921ddab8aa6a6790d1749b73eee75384f"
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.208625 4606 generic.go:334] "Generic (PLEG): container finished" podID="db44bfcf-35b0-4cd1-baef-f9e1879e863d" containerID="96029f98af57c98fd12f125cbbd373048ad0a5fe173644f77dd94ed526203696" exitCode=1
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.208710 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-7c96f" event={"ID":"db44bfcf-35b0-4cd1-baef-f9e1879e863d","Type":"ContainerDied","Data":"96029f98af57c98fd12f125cbbd373048ad0a5fe173644f77dd94ed526203696"}
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.209411 4606 scope.go:117] "RemoveContainer" containerID="96029f98af57c98fd12f125cbbd373048ad0a5fe173644f77dd94ed526203696"
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.214189 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.220173 4606 generic.go:334] "Generic (PLEG): container finished" podID="14094ea9-8be6-405b-a990-9bdbda2a50cb" containerID="ca7cb9d83e6b712f445bea1bd445fd5730be210d6d99a6d1820cd7289746c110" exitCode=1
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.220247 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2z4jl" event={"ID":"14094ea9-8be6-405b-a990-9bdbda2a50cb","Type":"ContainerDied","Data":"ca7cb9d83e6b712f445bea1bd445fd5730be210d6d99a6d1820cd7289746c110"}
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.220870 4606 scope.go:117] "RemoveContainer" containerID="ca7cb9d83e6b712f445bea1bd445fd5730be210d6d99a6d1820cd7289746c110"
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.223500 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.236723 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log"
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.237142 4606 generic.go:334] "Generic (PLEG): container finished" podID="ef543e1b-8068-4ea3-b32a-61027b32e95d" containerID="76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931" exitCode=1
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.237298 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerDied","Data":"76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931"}
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.238027 4606 scope.go:117] "RemoveContainer" containerID="76b6759dd071c31c1b878ddbe21201b5fd3f0a403bf07f53d75170b664f43931"
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.250911 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.256903 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.267078 4606 generic.go:334] "Generic (PLEG): container finished" podID="194942ca-b627-42a1-9491-9e00b44d51d7" containerID="98e6e7318bdf75a9d67939ec7da3d523f345b87462b82f158c1159ef41cf7013" exitCode=1
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.267557 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jmspw" event={"ID":"194942ca-b627-42a1-9491-9e00b44d51d7","Type":"ContainerDied","Data":"98e6e7318bdf75a9d67939ec7da3d523f345b87462b82f158c1159ef41cf7013"}
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.268711 4606 scope.go:117] "RemoveContainer" containerID="98e6e7318bdf75a9d67939ec7da3d523f345b87462b82f158c1159ef41cf7013"
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.278409 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.287579 4606 generic.go:334] "Generic (PLEG): container finished" podID="7e33ab4c-7c48-478a-ae66-119b3ba39b19" containerID="19d38761cbee73fe606be2b1b89967a01f2c86c0a2d8c1d752e461d9f86dbcf8" exitCode=1
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.289772 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6hgqs" event={"ID":"7e33ab4c-7c48-478a-ae66-119b3ba39b19","Type":"ContainerDied","Data":"19d38761cbee73fe606be2b1b89967a01f2c86c0a2d8c1d752e461d9f86dbcf8"}
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.290492 4606 scope.go:117] "RemoveContainer" containerID="19d38761cbee73fe606be2b1b89967a01f2c86c0a2d8c1d752e461d9f86dbcf8"
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.297074 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.308014 4606 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.308067 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.308110 4606 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.318983 4606 generic.go:334] "Generic (PLEG): container finished" podID="23ed80ea-dfab-4552-86d1-972563d8d4b6" containerID="201ea2b9f34b84d8dc39d1249dbc35f1aba6f960dfcad7e65448a095f235de8c" exitCode=1
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.319110 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sf5dg" event={"ID":"23ed80ea-dfab-4552-86d1-972563d8d4b6","Type":"ContainerDied","Data":"201ea2b9f34b84d8dc39d1249dbc35f1aba6f960dfcad7e65448a095f235de8c"}
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.319862 4606 scope.go:117] "RemoveContainer" containerID="201ea2b9f34b84d8dc39d1249dbc35f1aba6f960dfcad7e65448a095f235de8c"
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.322123 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.340865 4606 generic.go:334] "Generic (PLEG): container finished" podID="566e35f9-5c83-4267-bcf6-898458e0a722" containerID="936fd703289b400be2e820f661b3d1938292f4620df7e1c8f3d13cbaec3d0d31" exitCode=1
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.340941 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-5bmr6" event={"ID":"566e35f9-5c83-4267-bcf6-898458e0a722","Type":"ContainerDied","Data":"936fd703289b400be2e820f661b3d1938292f4620df7e1c8f3d13cbaec3d0d31"}
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.341650 4606 scope.go:117] "RemoveContainer" containerID="936fd703289b400be2e820f661b3d1938292f4620df7e1c8f3d13cbaec3d0d31"
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.378055 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-rptcj"
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.378144 4606 generic.go:334] "Generic (PLEG): container finished" podID="f8394efb-8969-47c9-98ec-b9845c0c8583" containerID="ac2db089da535252f89fd9751b17120303a08374a56deaa33f58203397b68345" exitCode=1
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.378390 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.381640 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.389571 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tbffj" event={"ID":"f8394efb-8969-47c9-98ec-b9845c0c8583","Type":"ContainerDied","Data":"ac2db089da535252f89fd9751b17120303a08374a56deaa33f58203397b68345"}
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.390197 4606 scope.go:117] "RemoveContainer" containerID="ac2db089da535252f89fd9751b17120303a08374a56deaa33f58203397b68345"
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.406054 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.407399 4606 generic.go:334] "Generic (PLEG): container finished" podID="9ae13310-86a4-4a3f-9967-cb88d1bfa536" containerID="4757c65b3ffd5c6be2c30c0e6b1e7dc3a563783965837e3f1fced5bb323d6ae1" exitCode=1
Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.407519 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-dv22m" event={"ID":"9ae13310-86a4-4a3f-9967-cb88d1bfa536","Type":"ContainerDied","Data":"4757c65b3ffd5c6be2c30c0e6b1e7dc3a563783965837e3f1fced5bb323d6ae1"}
Jan 23
17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.408267 4606 scope.go:117] "RemoveContainer" containerID="4757c65b3ffd5c6be2c30c0e6b1e7dc3a563783965837e3f1fced5bb323d6ae1" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.435453 4606 generic.go:334] "Generic (PLEG): container finished" podID="7215e9f7-67ae-40ad-befc-e411c5acb586" containerID="b7f3835ba76ab5d60ac6d1f1c813eebc94f58fd393d383deceae75c76b36c1f7" exitCode=1 Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.435552 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qn8hz" event={"ID":"7215e9f7-67ae-40ad-befc-e411c5acb586","Type":"ContainerDied","Data":"b7f3835ba76ab5d60ac6d1f1c813eebc94f58fd393d383deceae75c76b36c1f7"} Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.436164 4606 scope.go:117] "RemoveContainer" containerID="b7f3835ba76ab5d60ac6d1f1c813eebc94f58fd393d383deceae75c76b36c1f7" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.441923 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.446756 4606 generic.go:334] "Generic (PLEG): container finished" podID="3112e8c9-b0cb-4d5e-bba9-1caf429b45dc" containerID="d951b30c33403956bf3327f2c4667fdd0f820a95d1626cad4c04589b5542e7c1" exitCode=1 Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.446816 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-kngd9" event={"ID":"3112e8c9-b0cb-4d5e-bba9-1caf429b45dc","Type":"ContainerDied","Data":"d951b30c33403956bf3327f2c4667fdd0f820a95d1626cad4c04589b5542e7c1"} Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.447406 4606 scope.go:117] "RemoveContainer" containerID="d951b30c33403956bf3327f2c4667fdd0f820a95d1626cad4c04589b5542e7c1" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.449206 4606 generic.go:334] "Generic (PLEG): container finished" podID="ae465942-87ad-4a76-8ead-5367413aa9a1" containerID="39a6130538689482a2cd5a074a06d172c646a93ad9ab688144748331b23b02ec" exitCode=1 Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.449246 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-dnlq6" event={"ID":"ae465942-87ad-4a76-8ead-5367413aa9a1","Type":"ContainerDied","Data":"39a6130538689482a2cd5a074a06d172c646a93ad9ab688144748331b23b02ec"} Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.449528 4606 scope.go:117] "RemoveContainer" containerID="39a6130538689482a2cd5a074a06d172c646a93ad9ab688144748331b23b02ec" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.450940 4606 generic.go:334] "Generic (PLEG): container finished" podID="5cfbb45f-ca05-4e78-8392-b905640374fd" containerID="7b2a4ea60e33d319d388f74e75810afbe6a8a7b99ab453e49f05212485e75dba" exitCode=1 Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.450977 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lc2n5" event={"ID":"5cfbb45f-ca05-4e78-8392-b905640374fd","Type":"ContainerDied","Data":"7b2a4ea60e33d319d388f74e75810afbe6a8a7b99ab453e49f05212485e75dba"} Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.451250 4606 scope.go:117] "RemoveContainer" containerID="7b2a4ea60e33d319d388f74e75810afbe6a8a7b99ab453e49f05212485e75dba" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.471036 4606 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.471255 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.483919 4606 generic.go:334] "Generic (PLEG): container finished" podID="d040f5d8-e2a8-4171-9f70-797e87f8af96" containerID="97dde886bc9eae41c3e14ff9a1b3975bfc487e0a2c12ee5d3f8983e1c6d8edf6" exitCode=1 Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.483990 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-ttmg4" event={"ID":"d040f5d8-e2a8-4171-9f70-797e87f8af96","Type":"ContainerDied","Data":"97dde886bc9eae41c3e14ff9a1b3975bfc487e0a2c12ee5d3f8983e1c6d8edf6"} Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.484422 4606 scope.go:117] "RemoveContainer" containerID="97dde886bc9eae41c3e14ff9a1b3975bfc487e0a2c12ee5d3f8983e1c6d8edf6" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.515852 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-njw5g" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.518327 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.518616 4606 generic.go:334] "Generic (PLEG): container finished" podID="62e61905-d16f-41cc-b972-f788ced5c022" containerID="2382f522199b4d6e4b9e5dd15acb943accc7e5df8076c8c32c30cca10ddb913c" exitCode=1 Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.518715 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx" event={"ID":"62e61905-d16f-41cc-b972-f788ced5c022","Type":"ContainerDied","Data":"2382f522199b4d6e4b9e5dd15acb943accc7e5df8076c8c32c30cca10ddb913c"} Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.518792 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx" event={"ID":"62e61905-d16f-41cc-b972-f788ced5c022","Type":"ContainerStarted","Data":"17c9a8c39d1bfe9a3fb509242f878558b5c0b7fc02f92296d7dfb7d88906d5ea"} Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.519652 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.536829 4606 generic.go:334] "Generic (PLEG): container finished" podID="4d18bd93-7560-4405-9738-38876237699e" containerID="9e69fef21e5bbfa926ad36eb7c635a46b59eb0f3dbc968d43b5451a61c263e50" exitCode=1 Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.536962 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-86756cb559-lfhdj" event={"ID":"4d18bd93-7560-4405-9738-38876237699e","Type":"ContainerDied","Data":"9e69fef21e5bbfa926ad36eb7c635a46b59eb0f3dbc968d43b5451a61c263e50"} Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.537028 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.537626 4606 scope.go:117] "RemoveContainer" 
containerID="9e69fef21e5bbfa926ad36eb7c635a46b59eb0f3dbc968d43b5451a61c263e50" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.547830 4606 generic.go:334] "Generic (PLEG): container finished" podID="cb8c2204-661e-4daf-b748-c329e81e04a8" containerID="6cd0ceffa4048046ab1e26ebd96b93a49a312d6196aa13bc66a1ecd0834dafc3" exitCode=1 Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.548095 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-h97mh" event={"ID":"cb8c2204-661e-4daf-b748-c329e81e04a8","Type":"ContainerDied","Data":"6cd0ceffa4048046ab1e26ebd96b93a49a312d6196aa13bc66a1ecd0834dafc3"} Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.548893 4606 scope.go:117] "RemoveContainer" containerID="6cd0ceffa4048046ab1e26ebd96b93a49a312d6196aa13bc66a1ecd0834dafc3" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.562903 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-n5jq4" podUID="d78dc70d-59e8-4235-b345-f523b414fb31" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.562934 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.576856 4606 generic.go:334] "Generic (PLEG): container finished" podID="1f333c20-09e6-410a-88d1-b2e6bfc73768" containerID="5c4b463f064388c72e876d748c97ee07566fee10b767c2a4200d372eec4af13a" exitCode=1 Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.576969 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-8f4d99f8c-6x2sb" event={"ID":"1f333c20-09e6-410a-88d1-b2e6bfc73768","Type":"ContainerDied","Data":"5c4b463f064388c72e876d748c97ee07566fee10b767c2a4200d372eec4af13a"} Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.577667 4606 scope.go:117] "RemoveContainer" containerID="5c4b463f064388c72e876d748c97ee07566fee10b767c2a4200d372eec4af13a" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.593989 4606 generic.go:334] "Generic (PLEG): container finished" podID="9ca388c9-7c19-4efb-93da-2d889c972fa7" containerID="fb6bb4f275a5cf666f03c01b72d9164b4bc8e9ec4fe9086f6b23b06f2caaa70f" exitCode=1 Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.594102 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wbg8x" event={"ID":"9ca388c9-7c19-4efb-93da-2d889c972fa7","Type":"ContainerDied","Data":"fb6bb4f275a5cf666f03c01b72d9164b4bc8e9ec4fe9086f6b23b06f2caaa70f"} Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.595208 4606 scope.go:117] "RemoveContainer" containerID="fb6bb4f275a5cf666f03c01b72d9164b4bc8e9ec4fe9086f6b23b06f2caaa70f" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.604796 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.610994 4606 generic.go:334] "Generic (PLEG): container finished" podID="96e5558f-180a-4f7a-aee6-213fb576e016" containerID="be18ecd4ee2a147eaf3c98e1de22e02afd4d2c51af55b7eb5b320622c5bd659b" exitCode=1 Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.611055 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/metallb-operator-controller-manager-779944bfcf-5nq4l" event={"ID":"96e5558f-180a-4f7a-aee6-213fb576e016","Type":"ContainerDied","Data":"be18ecd4ee2a147eaf3c98e1de22e02afd4d2c51af55b7eb5b320622c5bd659b"} Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.611669 4606 scope.go:117] "RemoveContainer" containerID="be18ecd4ee2a147eaf3c98e1de22e02afd4d2c51af55b7eb5b320622c5bd659b" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.611838 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.623700 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-5hx8t" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.627873 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030" containerName="galera" probeResult="failure" output="command timed out" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.631035 4606 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.635132 4606 generic.go:334] "Generic (PLEG): container finished" podID="9b69bbaa-8277-48b5-b0aa-f8d7b7fec647" containerID="e5b65d866fef26a36bc70bc57dc8c2496dd193544d734f3d05578ebe1d08dab8" exitCode=1 Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.635396 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gzdtf" event={"ID":"9b69bbaa-8277-48b5-b0aa-f8d7b7fec647","Type":"ContainerDied","Data":"e5b65d866fef26a36bc70bc57dc8c2496dd193544d734f3d05578ebe1d08dab8"} Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.636361 4606 scope.go:117] "RemoveContainer" containerID="e5b65d866fef26a36bc70bc57dc8c2496dd193544d734f3d05578ebe1d08dab8" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.658014 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.671295 4606 generic.go:334] "Generic (PLEG): container finished" podID="7671c11a-cd18-4409-addc-caeb99b22cae" containerID="d31b3b01761301f3010c7aab892d128e8f582e7dbdc3451c6681ac02e0502dc4" exitCode=1 Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.671387 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-6lkpf" event={"ID":"7671c11a-cd18-4409-addc-caeb99b22cae","Type":"ContainerDied","Data":"d31b3b01761301f3010c7aab892d128e8f582e7dbdc3451c6681ac02e0502dc4"} Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.672013 4606 scope.go:117] "RemoveContainer" containerID="d31b3b01761301f3010c7aab892d128e8f582e7dbdc3451c6681ac02e0502dc4" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.672614 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.687245 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.698933 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.702250 
4606 generic.go:334] "Generic (PLEG): container finished" podID="f542164e-05c2-4746-988b-0b4676772078" containerID="2415c450d942872f3918f80fea9a1a63de4308a39c93bd4c9a76ea8e687cf21f" exitCode=0 Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.702344 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-kpm7l" event={"ID":"f542164e-05c2-4746-988b-0b4676772078","Type":"ContainerDied","Data":"2415c450d942872f3918f80fea9a1a63de4308a39c93bd4c9a76ea8e687cf21f"} Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.737995 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.740686 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.764024 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.768054 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.779456 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.779548 4606 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="a402b82855acb2e82d07fb32eb7ed90ca465305d6fbd7e170f327f68b803be5f" exitCode=1 Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.779643 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"a402b82855acb2e82d07fb32eb7ed90ca465305d6fbd7e170f327f68b803be5f"} Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.779690 4606 scope.go:117] "RemoveContainer" containerID="a1af2a9a04438c635dc72c7270b10a948677b8a8c61f9915e0914d454c17fe96" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.780634 4606 scope.go:117] "RemoveContainer" containerID="a402b82855acb2e82d07fb32eb7ed90ca465305d6fbd7e170f327f68b803be5f" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.823245 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.830707 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.831975 4606 request.go:700] Waited for 2.850649424s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-gxtc4&resourceVersion=59065 Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.835569 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.839832 4606 reflector.go:368] Caches populated for *v1.Secret 
from object-"openstack"/"barbican-barbican-dockercfg-5kppl" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.855604 4606 generic.go:334] "Generic (PLEG): container finished" podID="0ba239ef-03fb-4744-95cb-5aea8d9b0663" containerID="ce552519b70cfa075c326d7d2f49e96964f32489546902f3b108e31504c466e4" exitCode=1 Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.855694 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ftzlw" event={"ID":"0ba239ef-03fb-4744-95cb-5aea8d9b0663","Type":"ContainerDied","Data":"ce552519b70cfa075c326d7d2f49e96964f32489546902f3b108e31504c466e4"} Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.856446 4606 scope.go:117] "RemoveContainer" containerID="ce552519b70cfa075c326d7d2f49e96964f32489546902f3b108e31504c466e4" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.860883 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.888554 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.898839 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.922519 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.944047 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.966525 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.983133 4606 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 23 17:36:32 crc kubenswrapper[4606]: I0123 17:36:32.997946 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.020903 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.033686 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-kfww6" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.060262 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.073256 4606 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-gw2ls" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.093633 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.113172 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.133966 4606 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-xzzh6" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.152943 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.177575 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.193118 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-xr9l2" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.214319 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-7sdg2" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.234438 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.253159 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.278237 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.300035 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.313250 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.334524 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-2m76w" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.354350 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.374114 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.396469 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.413576 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.445705 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.460978 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.480865 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.494013 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.517419 4606 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.517437 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.533888 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.554305 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.574701 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.594703 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.613704 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.620755 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="684acded-8efb-47ea-8bb4-0212874a6826" containerName="galera" probeResult="failure" output="command timed out" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.621327 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030" containerName="galera" probeResult="failure" output="command timed out" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.628114 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="684acded-8efb-47ea-8bb4-0212874a6826" containerName="galera" probeResult="failure" output="command timed out" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.632991 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.653227 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.676635 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.694130 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-vwp4v" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.713206 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.741914 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-5n9fm" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.753538 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-d265k" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.779876 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.796251 4606 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca"/"signing-key" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.827546 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.832691 4606 request.go:700] Waited for 3.737799777s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=59314 Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.838651 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.858923 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.875949 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.899741 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.915767 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.933339 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.954309 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-j4bws" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.979633 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 23 17:36:33 crc kubenswrapper[4606]: I0123 17:36:33.993733 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.014512 4606 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-v8dvx" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.033736 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.055804 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.073281 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.093694 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.117658 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-r67dr" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.134783 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-fgd6m" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.160496 4606 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.173320 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.194138 4606 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-5dlz9" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.222328 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.233884 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.254742 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.276217 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.293969 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.314135 4606 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.335304 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.354836 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.375055 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.392940 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.418632 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.434085 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.454340 4606 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-hwpd5" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.473313 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.497262 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.512893 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-65vwn" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.534020 4606 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"cert-nova-internal-svc" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.554313 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.573795 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.593522 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.614895 4606 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.635097 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.655196 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.674820 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.695940 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-qpkp2" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.716195 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.737110 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.754394 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.776192 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.793497 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.813618 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.834562 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.852013 4606 request.go:700] Waited for 4.672275288s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackage-server-manager-serving-cert&resourceVersion=59024 Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.858100 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.873669 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 17:36:34 crc 
kubenswrapper[4606]: I0123 17:36:34.907673 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.926366 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.936900 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.953455 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 23 17:36:34 crc kubenswrapper[4606]: I0123 17:36:34.973236 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.002821 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.023233 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-kpm7l" event={"ID":"f542164e-05c2-4746-988b-0b4676772078","Type":"ContainerStarted","Data":"69cad1de8352cb852db7b15945a395dd5d46323c7b46a1650debd8310e70cd4e"} Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.024694 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.036083 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.039364 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-7c96f" event={"ID":"db44bfcf-35b0-4cd1-baef-f9e1879e863d","Type":"ContainerStarted","Data":"109c6d0a40a568b5a04d7fcc17fc8b9c8b2cd44039cab1ace25a052063c04da6"} Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.041049 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-7c96f" Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.043986 4606 generic.go:334] "Generic (PLEG): container finished" podID="746ecbf9-06c9-43d1-91fe-2269b27278bb" containerID="6f8579b993e8963f8b3f3a45e04ca9066caae74fe34595a6d2f938869f0bc586" exitCode=1 Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.044234 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-69xjr" event={"ID":"746ecbf9-06c9-43d1-91fe-2269b27278bb","Type":"ContainerDied","Data":"6f8579b993e8963f8b3f3a45e04ca9066caae74fe34595a6d2f938869f0bc586"} Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.045067 4606 scope.go:117] "RemoveContainer" containerID="6f8579b993e8963f8b3f3a45e04ca9066caae74fe34595a6d2f938869f0bc586" Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.052748 4606 generic.go:334] "Generic (PLEG): container finished" podID="746ca208-2336-44fb-b3e3-be8fa94e33eb" containerID="9cf330764c5d6ebf49c6f99641e1b6a5890e215fa61fdb8b7818fec16d979c4d" exitCode=1 Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.052807 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-nxh4d" event={"ID":"746ca208-2336-44fb-b3e3-be8fa94e33eb","Type":"ContainerDied","Data":"9cf330764c5d6ebf49c6f99641e1b6a5890e215fa61fdb8b7818fec16d979c4d"} Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.053610 4606 scope.go:117] "RemoveContainer" containerID="9cf330764c5d6ebf49c6f99641e1b6a5890e215fa61fdb8b7818fec16d979c4d" Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.057086 4606 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-v88zl" Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.083045 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.095026 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.115864 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.134714 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.159034 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.176526 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.195652 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.222302 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.256000 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.256283 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.275858 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.293869 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.313779 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.333706 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.366041 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-ffzkl" Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.379739 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.397057 4606 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.416826 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.435217 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.456737 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.474210 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.494141 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.515393 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.533767 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.558869 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.574896 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.599398 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.613522 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.633133 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.671083 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-f5qpg"
Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.674983 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.694382 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.714336 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.739364 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.769151 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.779647 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.794009 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.814174 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.834704 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.855080 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-779944bfcf-5nq4l"
Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.857096 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.871337 4606 request.go:700] Waited for 5.590499501s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-neutron-ovndbs&resourceVersion=59277
Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.875500 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.895574 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.913135 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.934117 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-gslrz"
Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.957351 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.974063 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 23 17:36:35 crc kubenswrapper[4606]: I0123 17:36:35.993425 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Jan 23 17:36:36 crc kubenswrapper[4606]: I0123 17:36:36.016703 4606 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-9lpqn"
Jan 23 17:36:36 crc kubenswrapper[4606]: I0123 17:36:36.039639 4606 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Jan 23 17:36:36 crc kubenswrapper[4606]: I0123 17:36:36.054650 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 23 17:36:36 crc kubenswrapper[4606]: I0123 17:36:36.064081 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-6lkpf" event={"ID":"7671c11a-cd18-4409-addc-caeb99b22cae","Type":"ContainerStarted","Data":"f8a78246911b8f5374542654ba8fa3db97fc7f8765e4be4a7e81506dee62de77"}
Jan 23 17:36:36 crc kubenswrapper[4606]: I0123 17:36:36.064382 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-6lkpf"
Jan 23 17:36:36 crc kubenswrapper[4606]: I0123 17:36:36.067013 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-f6w2s" event={"ID":"6ab09cf8-68a3-4fac-aa7b-824a2626b3db","Type":"ContainerStarted","Data":"83688c2860ae91c85763b250e5f6506055c7d26e6f07f61673b03400cf64630d"}
Jan 23 17:36:36 crc kubenswrapper[4606]: I0123 17:36:36.067224 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-f6w2s"
Jan 23 17:36:36 crc kubenswrapper[4606]: I0123 17:36:36.069561 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-ttmg4" event={"ID":"d040f5d8-e2a8-4171-9f70-797e87f8af96","Type":"ContainerStarted","Data":"5eb98d4487c905242b91a48f29d89d68c446d2ff523c885e8cde087dae40b395"}
Jan 23 17:36:36 crc kubenswrapper[4606]: I0123 17:36:36.069816 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-ttmg4"
Jan 23 17:36:36 crc kubenswrapper[4606]: I0123 17:36:36.073207 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 23 17:36:36 crc kubenswrapper[4606]: I0123 17:36:36.073342 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd93c16a-9cd2-4011-a291-40070e3bfe74","Type":"ContainerStarted","Data":"b05ddd194eff1f60faac10730a28ba2893bb3f936f1fef7be8465e8ae543af3e"}
Jan 23 17:36:36 crc kubenswrapper[4606]: I0123 17:36:36.076645 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Jan 23 17:36:36 crc kubenswrapper[4606]: I0123 17:36:36.093484 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Jan 23 17:36:36 crc kubenswrapper[4606]: I0123 17:36:36.113772 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 23 17:36:36 crc kubenswrapper[4606]: I0123 17:36:36.134755 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 23 17:36:36 crc kubenswrapper[4606]: I0123 17:36:36.156164 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 23 17:36:36 crc kubenswrapper[4606]: I0123 17:36:36.177544 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 23 17:36:37 crc kubenswrapper[4606]: I0123 17:36:37.104847 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lc2n5" event={"ID":"5cfbb45f-ca05-4e78-8392-b905640374fd","Type":"ContainerStarted","Data":"06bb26db9b0a1d8558dcff6c9ff1717526a70d353ea1c7d333d7b01fe28f6e96"}
Jan 23 17:36:37 crc kubenswrapper[4606]: I0123 17:36:37.105934 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lc2n5"
Jan 23 17:36:37 crc kubenswrapper[4606]: I0123 17:36:37.119333 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6hgqs" event={"ID":"7e33ab4c-7c48-478a-ae66-119b3ba39b19","Type":"ContainerStarted","Data":"ad05b1851f422957aeda00945f4f50552a64df666bf2adb9956da463ef4ddc07"}
Jan 23 17:36:37 crc kubenswrapper[4606]: I0123 17:36:37.120272 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6hgqs"
Jan 23 17:36:37 crc kubenswrapper[4606]: I0123 17:36:37.126496 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wbg8x" event={"ID":"9ca388c9-7c19-4efb-93da-2d889c972fa7","Type":"ContainerStarted","Data":"e3fea800065a31c7ea9718a813a36cd6ea48a1dfbf78098bfd93022218e4b1f5"}
Jan 23 17:36:37 crc kubenswrapper[4606]: I0123 17:36:37.127615 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wbg8x"
Jan 23 17:36:37 crc kubenswrapper[4606]: I0123 17:36:37.130036 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log"
Jan 23 17:36:37 crc kubenswrapper[4606]: I0123 17:36:37.141214 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"03a5227f2274ad5da7a6a4e6e8ee12898a5aec7d2e34a90c7683cfd203f938be"}
Jan 23 17:36:37 crc kubenswrapper[4606]: I0123 17:36:37.145218 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-kngd9" event={"ID":"3112e8c9-b0cb-4d5e-bba9-1caf429b45dc","Type":"ContainerStarted","Data":"9429c8a7fabd516fd70c2fcee7a48aae8b6c04d9157b8879dace3b2fff0a2cef"}
Jan 23 17:36:37 crc kubenswrapper[4606]: I0123 17:36:37.145636 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-kngd9"
Jan 23 17:36:37 crc kubenswrapper[4606]: I0123 17:36:37.151230 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-nxh4d" event={"ID":"746ca208-2336-44fb-b3e3-be8fa94e33eb","Type":"ContainerStarted","Data":"54f51fd680066e23110cbf34bcfcae22c2beff6fba1993b2c67d5b0c462f9f55"}
Jan 23 17:36:37 crc kubenswrapper[4606]: I0123 17:36:37.170709 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-lz99v" event={"ID":"29bd6d8a-dcc4-46b9-b735-d38e9d7a75a8","Type":"ContainerStarted","Data":"7b33c5dc6944302a4e49175eab49e68d4750c34b28bf468282ca8ec5e949c336"}
Jan 23 17:36:37 crc kubenswrapper[4606]: I0123 17:36:37.171496 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-lz99v"
Jan 23 17:36:37 crc kubenswrapper[4606]: I0123 17:36:37.186379 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tbffj" event={"ID":"f8394efb-8969-47c9-98ec-b9845c0c8583","Type":"ContainerStarted","Data":"c78d167fdea91e6b3f8cd45859cbda19955d204bd82d1a7c733e4d9ade7d8183"}
Jan 23 17:36:37 crc kubenswrapper[4606]: I0123 17:36:37.202972 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2z4jl" event={"ID":"14094ea9-8be6-405b-a990-9bdbda2a50cb","Type":"ContainerStarted","Data":"b9aa31a815f8d62327524dda25d57eaeb9681fb90204d03554964fc4ad820158"}
Jan 23 17:36:37 crc kubenswrapper[4606]: I0123 17:36:37.203893 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2z4jl"
Jan 23 17:36:37 crc kubenswrapper[4606]: I0123 17:36:37.222239 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qn8hz" event={"ID":"7215e9f7-67ae-40ad-befc-e411c5acb586","Type":"ContainerStarted","Data":"6e7976874b656b6cfef9effc221c04c2c236c894c5f55adfac140659ab2dd7cd"}
Jan 23 17:36:37 crc kubenswrapper[4606]: I0123 17:36:37.222961 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qn8hz"
Jan 23 17:36:37 crc kubenswrapper[4606]: I0123 17:36:37.224284 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jmspw" event={"ID":"194942ca-b627-42a1-9491-9e00b44d51d7","Type":"ContainerStarted","Data":"5f9252ea64a47b1e6f2300e679e7682266e383f01c8cf0c24692f1319016142e"}
Jan 23 17:36:37 crc kubenswrapper[4606]: I0123 17:36:37.224783 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jmspw"
Jan 23 17:36:37 crc kubenswrapper[4606]: I0123 17:36:37.226686 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-8f4d99f8c-6x2sb" event={"ID":"1f333c20-09e6-410a-88d1-b2e6bfc73768","Type":"ContainerStarted","Data":"bc130d9b901d391137567dfdeed1abdf6289d7549d908fa14372a6d75c86bfb0"}
Jan 23 17:36:37 crc kubenswrapper[4606]: I0123 17:36:37.227179 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-8f4d99f8c-6x2sb"
Jan 23 17:36:37 crc kubenswrapper[4606]: I0123 17:36:37.228564 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-h97mh" event={"ID":"cb8c2204-661e-4daf-b748-c329e81e04a8","Type":"ContainerStarted","Data":"c84487eb650c42da5b9a1fff1de8e03f892709bbbc704252e7377218b92b042c"}
Jan 23 17:36:37 crc kubenswrapper[4606]: I0123 17:36:37.229028 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-h97mh"
Jan 23 17:36:37 crc kubenswrapper[4606]: I0123 17:36:37.236146 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-779944bfcf-5nq4l" event={"ID":"96e5558f-180a-4f7a-aee6-213fb576e016","Type":"ContainerStarted","Data":"732a3eafaa8c8f477ddc6c6c62d9a37c14ed1349fec815cb609c71aa8b3804e3"}
Jan 23 17:36:37 crc kubenswrapper[4606]: I0123 17:36:37.236984 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-779944bfcf-5nq4l"
Jan 23 17:36:37 crc kubenswrapper[4606]: I0123 17:36:37.246793 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ftzlw" event={"ID":"0ba239ef-03fb-4744-95cb-5aea8d9b0663","Type":"ContainerStarted","Data":"88b825d5038d92fb55a4d3d034e4faa39aec6beaa7538add3059baaee917bf64"}
Jan 23 17:36:37 crc kubenswrapper[4606]: I0123 17:36:37.247308 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ftzlw"
Jan 23 17:36:37 crc kubenswrapper[4606]: I0123 17:36:37.257605 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-dv22m" event={"ID":"9ae13310-86a4-4a3f-9967-cb88d1bfa536","Type":"ContainerStarted","Data":"c33ea422e9f7523dc2dd54747d83c359840cda0ab14d8156e87db7d8496adb9b"}
Jan 23 17:36:37 crc kubenswrapper[4606]: I0123 17:36:37.258416 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-dv22m"
Jan 23 17:36:37 crc kubenswrapper[4606]: I0123 17:36:37.265156 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-69xjr" event={"ID":"746ecbf9-06c9-43d1-91fe-2269b27278bb","Type":"ContainerStarted","Data":"31d29e3f9de01f9ccb11eec891d356dd69c31fdd6b9f7099a026fbb4de48f459"}
Jan 23 17:36:37 crc kubenswrapper[4606]: I0123 17:36:37.270386 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-86756cb559-lfhdj" event={"ID":"4d18bd93-7560-4405-9738-38876237699e","Type":"ContainerStarted","Data":"a43ccfc11480cc6409e2ed94738054c2d3679ba0971051b3e0e728b7bff20380"}
Jan 23 17:36:37 crc kubenswrapper[4606]: I0123 17:36:37.270695 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-86756cb559-lfhdj"
Jan 23 17:36:37 crc kubenswrapper[4606]: I0123 17:36:37.273639 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Jan 23 17:36:37 crc kubenswrapper[4606]: I0123 17:36:37.274693 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"796c479c909a9a173af687c8e0f1c54b90f45be5d2f548dc34244aec1ed44deb"}
Jan 23 17:36:37 crc kubenswrapper[4606]: I0123 17:36:37.277942 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sf5dg" event={"ID":"23ed80ea-dfab-4552-86d1-972563d8d4b6","Type":"ContainerStarted","Data":"20416a79528f4ae5bd8153e5984b5347f7026176abfacf909408967825da1288"}
Jan 23 17:36:37 crc kubenswrapper[4606]: I0123 17:36:37.278155 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sf5dg"
Jan 23 17:36:37 crc kubenswrapper[4606]: I0123 17:36:37.283382 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-qxt8l" event={"ID":"c58688d9-32d0-4a57-a216-1110af3dd7dc","Type":"ContainerStarted","Data":"17e898a1b41db770cc4a2eab4767c5dd33180c8acdd810b4157c5e5b0534ada6"}
Jan 23 17:36:37 crc kubenswrapper[4606]: I0123 17:36:37.284032 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-qxt8l"
Jan 23 17:36:38 crc kubenswrapper[4606]: I0123 17:36:38.297236 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-5bmr6" event={"ID":"566e35f9-5c83-4267-bcf6-898458e0a722","Type":"ContainerStarted","Data":"5933ab1817823a2a721bce86768fca76948864d604bd1be8f7d14123f204a67d"}
Jan 23 17:36:38 crc kubenswrapper[4606]: I0123 17:36:38.298170 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-5bmr6"
Jan 23 17:36:38 crc kubenswrapper[4606]: I0123 17:36:38.300620 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-dnlq6" event={"ID":"ae465942-87ad-4a76-8ead-5367413aa9a1","Type":"ContainerStarted","Data":"9f19ac7cdf5d60b4c058096616c590d7d0cdc3db776c5384e71addd3a68404af"}
Jan 23 17:36:38 crc kubenswrapper[4606]: I0123 17:36:38.301461 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-dnlq6"
Jan 23 17:36:38 crc kubenswrapper[4606]: I0123 17:36:38.304537 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gzdtf" event={"ID":"9b69bbaa-8277-48b5-b0aa-f8d7b7fec647","Type":"ContainerStarted","Data":"c8d6b0bbf6f6dc890073e40ec8423f8311c4c938d79bbc9dd8949c5bf705d916"}
Jan 23 17:36:38 crc kubenswrapper[4606]: I0123 17:36:38.306388 4606 status_manager.go:317] "Container readiness changed for unknown container" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gzdtf" containerID="cri-o://e5b65d866fef26a36bc70bc57dc8c2496dd193544d734f3d05578ebe1d08dab8"
Jan 23 17:36:38 crc kubenswrapper[4606]: I0123 17:36:38.306418 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gzdtf"
Jan 23 17:36:38 crc kubenswrapper[4606]: I0123 17:36:38.774311 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 23 17:36:38 crc kubenswrapper[4606]: I0123 17:36:38.837135 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gzdtf"
Jan 23 17:36:39 crc kubenswrapper[4606]: I0123 17:36:39.308294 4606 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Jan 23 17:36:39 crc kubenswrapper[4606]: I0123 17:36:39.325036 4606 generic.go:334] "Generic (PLEG): container finished" podID="9110ffb1-da7c-4a1b-95b4-28f757be8ace" containerID="9433b6beddea9f6dcb24c32a229cdaba021f722863a78a3283372f8711f0910e" exitCode=0
Jan 23 17:36:39 crc kubenswrapper[4606]: I0123 17:36:39.325357 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cdqb6" event={"ID":"9110ffb1-da7c-4a1b-95b4-28f757be8ace","Type":"ContainerDied","Data":"9433b6beddea9f6dcb24c32a229cdaba021f722863a78a3283372f8711f0910e"}
Jan 23 17:36:39 crc kubenswrapper[4606]: I0123 17:36:39.326104 4606 scope.go:117] "RemoveContainer" containerID="9433b6beddea9f6dcb24c32a229cdaba021f722863a78a3283372f8711f0910e"
Jan 23 17:36:40 crc kubenswrapper[4606]: I0123 17:36:40.392544 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cdqb6" event={"ID":"9110ffb1-da7c-4a1b-95b4-28f757be8ace","Type":"ContainerStarted","Data":"a2a2716214b8ba75c3da16385b61a9b260895848df91f11aa7deb675adda697a"}
Jan 23 17:36:40 crc kubenswrapper[4606]: I0123 17:36:40.393550 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-cdqb6"
Jan 23 17:36:40 crc kubenswrapper[4606]: I0123 17:36:40.394030 4606 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cdqb6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.56:8080/healthz\": dial tcp 10.217.0.56:8080: connect: connection refused" start-of-body=
Jan 23 17:36:40 crc kubenswrapper[4606]: I0123 17:36:40.394082 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cdqb6" podUID="9110ffb1-da7c-4a1b-95b4-28f757be8ace" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.56:8080/healthz\": dial tcp 10.217.0.56:8080: connect: connection refused"
Jan 23 17:36:40 crc kubenswrapper[4606]: I0123 17:36:40.616078 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="684acded-8efb-47ea-8bb4-0212874a6826" containerName="galera" probeResult="failure" output="command timed out"
Jan 23 17:36:40 crc kubenswrapper[4606]: I0123 17:36:40.616163 4606 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0"
Jan 23 17:36:40 crc kubenswrapper[4606]: I0123 17:36:40.616910 4606 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"12d65ec89de143ddedba0dbde04e69a534f16a67857788a3634870ead402de77"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed liveness probe, will be restarted"
Jan 23 17:36:40 crc kubenswrapper[4606]: I0123 17:36:40.915788 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="684acded-8efb-47ea-8bb4-0212874a6826" containerName="galera" containerID="cri-o://12d65ec89de143ddedba0dbde04e69a534f16a67857788a3634870ead402de77" gracePeriod=30
Jan 23 17:36:40 crc kubenswrapper[4606]: I0123 17:36:40.942714 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx"
Jan 23 17:36:41 crc kubenswrapper[4606]: I0123 17:36:41.342060 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="a43fb635-59de-4576-bdf4-9eacebc80a85" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 23 17:36:41 crc kubenswrapper[4606]: I0123 17:36:41.391748 4606 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cdqb6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.56:8080/healthz\": dial tcp 10.217.0.56:8080: connect: connection refused" start-of-body=
Jan 23 17:36:41 crc kubenswrapper[4606]: I0123 17:36:41.391802 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cdqb6" podUID="9110ffb1-da7c-4a1b-95b4-28f757be8ace" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.56:8080/healthz\": dial tcp 10.217.0.56:8080: connect: connection refused"
Jan 23 17:36:41 crc kubenswrapper[4606]: I0123 17:36:41.643675 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-86756cb559-lfhdj"
Jan 23 17:36:42 crc kubenswrapper[4606]: I0123 17:36:42.402268 4606 generic.go:334] "Generic (PLEG): container finished" podID="684acded-8efb-47ea-8bb4-0212874a6826" containerID="12d65ec89de143ddedba0dbde04e69a534f16a67857788a3634870ead402de77" exitCode=0
Jan 23 17:36:42 crc kubenswrapper[4606]: I0123 17:36:42.402324 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"684acded-8efb-47ea-8bb4-0212874a6826","Type":"ContainerDied","Data":"12d65ec89de143ddedba0dbde04e69a534f16a67857788a3634870ead402de77"}
Jan 23 17:36:42 crc kubenswrapper[4606]: I0123 17:36:42.402353 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"684acded-8efb-47ea-8bb4-0212874a6826","Type":"ContainerStarted","Data":"96dbbf81260393c1805d3c4cb948f426d68b5b3ba53c2986401a4fec97145911"}
Jan 23 17:36:43 crc kubenswrapper[4606]: I0123 17:36:43.366203 4606 scope.go:117] "RemoveContainer" containerID="ae71f5feb61ac0fdc01d5557db5e04028246a8e932ab5e9739ba55dfcdd38258"
Jan 23 17:36:43 crc kubenswrapper[4606]: E0123 17:36:43.366708 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9"
Jan 23 17:36:43 crc kubenswrapper[4606]: I0123 17:36:43.517026 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 23 17:36:43 crc kubenswrapper[4606]: I0123 17:36:43.520489 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 23 17:36:44 crc kubenswrapper[4606]: I0123 17:36:44.528891 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="a43fb635-59de-4576-bdf4-9eacebc80a85" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 23 17:36:45 crc kubenswrapper[4606]: I0123 17:36:45.171911 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-cdqb6"
Jan 23 17:36:47 crc kubenswrapper[4606]: I0123 17:36:47.296819 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="a43fb635-59de-4576-bdf4-9eacebc80a85" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 23 17:36:47 crc kubenswrapper[4606]: I0123 17:36:47.297137 4606 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 23 17:36:47 crc kubenswrapper[4606]: I0123 17:36:47.298062 4606 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"720145557e6448bd34ede78c5a1d1e667d284ae8ff9b514bfd6b26110e488b7f"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted"
Jan 23 17:36:47 crc kubenswrapper[4606]: I0123 17:36:47.298122 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a43fb635-59de-4576-bdf4-9eacebc80a85" containerName="cinder-scheduler" containerID="cri-o://720145557e6448bd34ede78c5a1d1e667d284ae8ff9b514bfd6b26110e488b7f" gracePeriod=30
Jan 23 17:36:48 crc kubenswrapper[4606]: I0123 17:36:48.638420 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-7c96f"
Jan 23 17:36:48 crc kubenswrapper[4606]: I0123 17:36:48.749529 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-qxt8l"
Jan 23 17:36:48 crc kubenswrapper[4606]: I0123 17:36:48.778195 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 23 17:36:48 crc kubenswrapper[4606]: I0123 17:36:48.839199 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6hgqs"
Jan 23 17:36:48 crc kubenswrapper[4606]: I0123 17:36:48.839503 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gzdtf"
Jan 23 17:36:48 crc kubenswrapper[4606]: I0123 17:36:48.866175 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-6lkpf"
Jan 23 17:36:48 crc kubenswrapper[4606]: I0123 17:36:48.919942 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Jan 23 17:36:48 crc kubenswrapper[4606]: I0123 17:36:48.919989 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Jan 23 17:36:48 crc kubenswrapper[4606]: I0123 17:36:48.920387 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-f6w2s"
Jan 23 17:36:48 crc kubenswrapper[4606]: I0123 17:36:48.962250 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wbg8x"
Jan 23 17:36:49 crc kubenswrapper[4606]: I0123 17:36:49.161758 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-lc2n5"
Jan 23 17:36:49 crc kubenswrapper[4606]: I0123 17:36:49.180261 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-qn8hz"
Jan 23 17:36:49 crc kubenswrapper[4606]: I0123 17:36:49.214637 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-lz99v"
Jan 23 17:36:49 crc kubenswrapper[4606]: I0123 17:36:49.261487 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-dv22m"
Jan 23 17:36:49 crc kubenswrapper[4606]: I0123 17:36:49.427476 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-kngd9"
Jan 23 17:36:49 crc kubenswrapper[4606]: I0123 17:36:49.464372 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ftzlw"
Jan 23 17:36:49 crc kubenswrapper[4606]: I0123 17:36:49.491912 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-5bmr6"
Jan 23 17:36:49 crc kubenswrapper[4606]: I0123 17:36:49.539065 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sf5dg"
Jan 23 17:36:49 crc kubenswrapper[4606]: I0123 17:36:49.578856 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-ttmg4"
Jan 23 17:36:49 crc kubenswrapper[4606]: I0123 17:36:49.670672 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-h97mh"
Jan 23 17:36:49 crc kubenswrapper[4606]: I0123 17:36:49.706064 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-dnlq6"
Jan 23 17:36:50 crc kubenswrapper[4606]: I0123 17:36:50.002039 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2z4jl"
Jan 23 17:36:50 crc kubenswrapper[4606]: I0123 17:36:50.556561 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-8f4d99f8c-6x2sb"
Jan 23 17:36:50 crc kubenswrapper[4606]: I0123 17:36:50.586123 4606 generic.go:334] "Generic (PLEG): container finished" podID="29ce98a4-4d13-42c9-84f4-b77dd0b80bf5" containerID="0fbc2f68af54813be8da9edff9b8eb9b04be22c7727939d9a55442f6e487e7d0" exitCode=2
Jan 23 17:36:50 crc kubenswrapper[4606]: I0123 17:36:50.586170 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l4ls2" event={"ID":"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5","Type":"ContainerDied","Data":"0fbc2f68af54813be8da9edff9b8eb9b04be22c7727939d9a55442f6e487e7d0"}
Jan 23 17:36:50 crc kubenswrapper[4606]: I0123 17:36:50.678495 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-jmspw"
Jan 23 17:36:52 crc kubenswrapper[4606]: I0123 17:36:52.077894 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l4ls2"
Jan 23 17:36:52 crc kubenswrapper[4606]: I0123 17:36:52.153988 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-nova-cell1-compute-config-1\") pod \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\" (UID: \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\") "
Jan 23 17:36:52 crc kubenswrapper[4606]: I0123 17:36:52.154058 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-nova-migration-ssh-key-1\") pod \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\" (UID: \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\") "
Jan 23 17:36:52 crc kubenswrapper[4606]: I0123 17:36:52.154081 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-nova-combined-ca-bundle\") pod \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\" (UID: \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\") "
Jan 23 17:36:52 crc kubenswrapper[4606]: I0123 17:36:52.154096 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-ssh-key-openstack-edpm-ipam\") pod \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\" (UID: \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\") "
Jan 23 17:36:52 crc kubenswrapper[4606]: I0123 17:36:52.154172 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-nova-extra-config-0\") pod \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\" (UID: \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\") "
Jan 23 17:36:52 crc kubenswrapper[4606]: I0123 17:36:52.154202 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-nova-cell1-compute-config-0\") pod \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\" (UID: \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\") "
Jan 23 17:36:52 crc kubenswrapper[4606]: I0123 17:36:52.154331 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bl6pq\" (UniqueName: \"kubernetes.io/projected/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-kube-api-access-bl6pq\") pod \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\" (UID: \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\") "
Jan 23 17:36:52 crc kubenswrapper[4606]: I0123 17:36:52.154360 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-inventory\") pod \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\" (UID: \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\") "
Jan 23 17:36:52 crc kubenswrapper[4606]: I0123 17:36:52.154384 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-nova-migration-ssh-key-0\") pod \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\" (UID: \"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5\") "
Jan 23 17:36:52 crc kubenswrapper[4606]: I0123 17:36:52.165982 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-kube-api-access-bl6pq" (OuterVolumeSpecName: "kube-api-access-bl6pq") pod "29ce98a4-4d13-42c9-84f4-b77dd0b80bf5" (UID: "29ce98a4-4d13-42c9-84f4-b77dd0b80bf5"). InnerVolumeSpecName "kube-api-access-bl6pq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 17:36:52 crc kubenswrapper[4606]: I0123 17:36:52.166616 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "29ce98a4-4d13-42c9-84f4-b77dd0b80bf5" (UID: "29ce98a4-4d13-42c9-84f4-b77dd0b80bf5"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 17:36:52 crc kubenswrapper[4606]: I0123 17:36:52.192779 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-inventory" (OuterVolumeSpecName: "inventory") pod "29ce98a4-4d13-42c9-84f4-b77dd0b80bf5" (UID: "29ce98a4-4d13-42c9-84f4-b77dd0b80bf5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 17:36:52 crc kubenswrapper[4606]: I0123 17:36:52.196844 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "29ce98a4-4d13-42c9-84f4-b77dd0b80bf5" (UID: "29ce98a4-4d13-42c9-84f4-b77dd0b80bf5"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 17:36:52 crc kubenswrapper[4606]: I0123 17:36:52.198548 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "29ce98a4-4d13-42c9-84f4-b77dd0b80bf5" (UID: "29ce98a4-4d13-42c9-84f4-b77dd0b80bf5"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 17:36:52 crc kubenswrapper[4606]: I0123 17:36:52.215948 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "29ce98a4-4d13-42c9-84f4-b77dd0b80bf5" (UID: "29ce98a4-4d13-42c9-84f4-b77dd0b80bf5"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 17:36:52 crc kubenswrapper[4606]: I0123 17:36:52.223260 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "29ce98a4-4d13-42c9-84f4-b77dd0b80bf5" (UID: "29ce98a4-4d13-42c9-84f4-b77dd0b80bf5"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 17:36:52 crc kubenswrapper[4606]: I0123 17:36:52.223746 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "29ce98a4-4d13-42c9-84f4-b77dd0b80bf5" (UID: "29ce98a4-4d13-42c9-84f4-b77dd0b80bf5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 17:36:52 crc kubenswrapper[4606]: I0123 17:36:52.226850 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "29ce98a4-4d13-42c9-84f4-b77dd0b80bf5" (UID: "29ce98a4-4d13-42c9-84f4-b77dd0b80bf5"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 17:36:52 crc kubenswrapper[4606]: I0123 17:36:52.257413 4606 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-nova-extra-config-0\") on node \"crc\" DevicePath \"\""
Jan 23 17:36:52 crc kubenswrapper[4606]: I0123 17:36:52.257458 4606 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\""
Jan 23 17:36:52 crc kubenswrapper[4606]: I0123 17:36:52.257471 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bl6pq\" (UniqueName: \"kubernetes.io/projected/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-kube-api-access-bl6pq\") on node \"crc\" DevicePath \"\""
Jan 23 17:36:52 crc kubenswrapper[4606]: I0123 17:36:52.257480 4606 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-inventory\") on node \"crc\" DevicePath \"\""
Jan 23 17:36:52 crc kubenswrapper[4606]: I0123 17:36:52.257489 4606 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\""
Jan 23 17:36:52 crc kubenswrapper[4606]: I0123 17:36:52.257498 4606 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\""
Jan 23 17:36:52 crc kubenswrapper[4606]: I0123 17:36:52.257507 4606 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\""
Jan 23 17:36:52 crc kubenswrapper[4606]: I0123 17:36:52.257518 4606 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 17:36:52 crc kubenswrapper[4606]: I0123 17:36:52.257528 4606 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/29ce98a4-4d13-42c9-84f4-b77dd0b80bf5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 23 17:36:52 crc kubenswrapper[4606]: I0123 17:36:52.603533 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l4ls2" event={"ID":"29ce98a4-4d13-42c9-84f4-b77dd0b80bf5","Type":"ContainerDied","Data":"dc0cd55b3b56baedac887a6b1dbfc3efa6946c980a20fd0ecdf44fa1303113cf"}
Jan 23 17:36:52 crc kubenswrapper[4606]: I0123 17:36:52.603568 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc0cd55b3b56baedac887a6b1dbfc3efa6946c980a20fd0ecdf44fa1303113cf"
Jan 23 17:36:52 crc kubenswrapper[4606]: I0123 17:36:52.603879 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l4ls2"
Jan 23 17:36:54 crc kubenswrapper[4606]: I0123 17:36:54.367023 4606 scope.go:117] "RemoveContainer" containerID="ae71f5feb61ac0fdc01d5557db5e04028246a8e932ab5e9739ba55dfcdd38258"
Jan 23 17:36:54 crc kubenswrapper[4606]: E0123 17:36:54.367838 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9"
Jan 23 17:36:54 crc kubenswrapper[4606]: I0123 17:36:54.632031 4606 generic.go:334] "Generic (PLEG): container finished" podID="a43fb635-59de-4576-bdf4-9eacebc80a85" containerID="720145557e6448bd34ede78c5a1d1e667d284ae8ff9b514bfd6b26110e488b7f" exitCode=0
Jan 23 17:36:54 crc kubenswrapper[4606]: I0123 17:36:54.632114 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a43fb635-59de-4576-bdf4-9eacebc80a85","Type":"ContainerDied","Data":"720145557e6448bd34ede78c5a1d1e667d284ae8ff9b514bfd6b26110e488b7f"}
Jan 23 17:36:55 crc kubenswrapper[4606]: I0123 17:36:55.642887 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a43fb635-59de-4576-bdf4-9eacebc80a85","Type":"ContainerStarted","Data":"fb2374e76b93de390c10db03438999db1d11a3f6e2f518977a6743a7df5408c7"}
Jan 23 17:36:56 crc kubenswrapper[4606]: I0123 17:36:56.282173 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 23 17:37:00 crc kubenswrapper[4606]: I0123 17:37:00.074869 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-67r74"]
Jan 23 17:37:00 crc kubenswrapper[4606]: E0123 17:37:00.075946 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29ce98a4-4d13-42c9-84f4-b77dd0b80bf5" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Jan 23 17:37:00 crc kubenswrapper[4606]: I0123 17:37:00.075967 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="29ce98a4-4d13-42c9-84f4-b77dd0b80bf5" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Jan 23 17:37:00 crc kubenswrapper[4606]: I0123 17:37:00.076252 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="29ce98a4-4d13-42c9-84f4-b77dd0b80bf5" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Jan 23 17:37:00 crc kubenswrapper[4606]: I0123 17:37:00.077104 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67r74"
Jan 23 17:37:00 crc kubenswrapper[4606]: I0123 17:37:00.083324 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8tp68"
Jan 23 17:37:00 crc kubenswrapper[4606]: I0123 17:37:00.083801 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 23 17:37:00 crc kubenswrapper[4606]: I0123 17:37:00.084004 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Jan 23 17:37:00 crc kubenswrapper[4606]: I0123 17:37:00.084296 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 23 17:37:00 crc kubenswrapper[4606]: I0123 17:37:00.084520 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Jan 23 17:37:00 crc kubenswrapper[4606]: I0123 17:37:00.084748 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config"
Jan 23 17:37:00 crc kubenswrapper[4606]: I0123 17:37:00.085444 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 23 17:37:00 crc kubenswrapper[4606]: I0123 17:37:00.094501 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-67r74"]
Jan 23 17:37:00 crc kubenswrapper[4606]: I0123 17:37:00.126459 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67r74\" (UID: \"56db46d4-844a-44eb-b09e-4512ce955b3f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67r74"
Jan 23 17:37:00 crc kubenswrapper[4606]: I0123 17:37:00.126798 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/56db46d4-844a-44eb-b09e-4512ce955b3f-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67r74\" (UID: \"56db46d4-844a-44eb-b09e-4512ce955b3f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67r74"
Jan 23 17:37:00 crc kubenswrapper[4606]: I0123 17:37:00.126929 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67r74\" (UID: \"56db46d4-844a-44eb-b09e-4512ce955b3f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67r74"
Jan 23 17:37:00 crc kubenswrapper[4606]: I0123 17:37:00.127016 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67r74\" (UID: \"56db46d4-844a-44eb-b09e-4512ce955b3f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67r74"
Jan 23 17:37:00 crc kubenswrapper[4606]: I0123 17:37:00.127105 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67r74\" (UID: \"56db46d4-844a-44eb-b09e-4512ce955b3f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67r74"
Jan 23 17:37:00 crc kubenswrapper[4606]: I0123 17:37:00.127211 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfvqd\" (UniqueName: \"kubernetes.io/projected/56db46d4-844a-44eb-b09e-4512ce955b3f-kube-api-access-kfvqd\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67r74\" (UID: \"56db46d4-844a-44eb-b09e-4512ce955b3f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67r74"
Jan 23 17:37:00 crc kubenswrapper[4606]: I0123 17:37:00.127298 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67r74\" (UID: \"56db46d4-844a-44eb-b09e-4512ce955b3f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67r74"
Jan 23 17:37:00 crc kubenswrapper[4606]: I0123 17:37:00.127432 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67r74\" (UID: \"56db46d4-844a-44eb-b09e-4512ce955b3f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67r74"
Jan 23 17:37:00 crc kubenswrapper[4606]: I0123 17:37:00.127625 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67r74\" (UID: \"56db46d4-844a-44eb-b09e-4512ce955b3f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67r74"
Jan 23 17:37:00 crc kubenswrapper[4606]: I0123 17:37:00.229003 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfvqd\" (UniqueName: \"kubernetes.io/projected/56db46d4-844a-44eb-b09e-4512ce955b3f-kube-api-access-kfvqd\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67r74\" (UID: \"56db46d4-844a-44eb-b09e-4512ce955b3f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67r74"
Jan 23 17:37:00 crc kubenswrapper[4606]: I0123 17:37:00.229053 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67r74\" (UID: \"56db46d4-844a-44eb-b09e-4512ce955b3f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67r74"
Jan 23 17:37:00 crc kubenswrapper[4606]: I0123 17:37:00.229251 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67r74\" (UID: \"56db46d4-844a-44eb-b09e-4512ce955b3f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67r74"
Jan 23 17:37:00 crc kubenswrapper[4606]: I0123 17:37:00.229308 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67r74\" (UID: \"56db46d4-844a-44eb-b09e-4512ce955b3f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67r74"
Jan 23 17:37:00 crc kubenswrapper[4606]: I0123 17:37:00.229361 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67r74\" (UID: \"56db46d4-844a-44eb-b09e-4512ce955b3f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67r74"
Jan 23 17:37:00 crc kubenswrapper[4606]: I0123 17:37:00.229384 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/56db46d4-844a-44eb-b09e-4512ce955b3f-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67r74\" (UID: \"56db46d4-844a-44eb-b09e-4512ce955b3f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67r74"
Jan 23 17:37:00 crc kubenswrapper[4606]: I0123 17:37:00.229406 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67r74\" (UID: \"56db46d4-844a-44eb-b09e-4512ce955b3f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67r74"
Jan 23 17:37:00 crc kubenswrapper[4606]: I0123 17:37:00.229427 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67r74\" (UID: \"56db46d4-844a-44eb-b09e-4512ce955b3f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67r74"
Jan 23 17:37:00 crc kubenswrapper[4606]: I0123 17:37:00.229448 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67r74\" (UID: \"56db46d4-844a-44eb-b09e-4512ce955b3f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67r74"
Jan 23 17:37:00 crc kubenswrapper[4606]: I0123 17:37:00.235427 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/56db46d4-844a-44eb-b09e-4512ce955b3f-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67r74\" (UID: \"56db46d4-844a-44eb-b09e-4512ce955b3f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67r74"
Jan 23 17:37:00 crc kubenswrapper[4606]: I0123 17:37:00.237543 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67r74\" (UID: \"56db46d4-844a-44eb-b09e-4512ce955b3f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67r74"
Jan 23 17:37:00 crc kubenswrapper[4606]: I0123 17:37:00.237676 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67r74\" (UID: \"56db46d4-844a-44eb-b09e-4512ce955b3f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67r74"
Jan 23 17:37:00 crc kubenswrapper[4606]: I0123 17:37:00.237815 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67r74\" (UID: \"56db46d4-844a-44eb-b09e-4512ce955b3f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67r74"
Jan 23 17:37:00 crc kubenswrapper[4606]: I0123 17:37:00.238274 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67r74\" (UID: \"56db46d4-844a-44eb-b09e-4512ce955b3f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67r74"
Jan 23 17:37:00 crc kubenswrapper[4606]: I0123 17:37:00.238580 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67r74\" (UID: \"56db46d4-844a-44eb-b09e-4512ce955b3f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67r74"
Jan 23 17:37:00 crc kubenswrapper[4606]: I0123 17:37:00.241761 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67r74\" (UID: \"56db46d4-844a-44eb-b09e-4512ce955b3f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67r74"
Jan 23 17:37:00 crc kubenswrapper[4606]: I0123 17:37:00.245187 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67r74\" (UID: \"56db46d4-844a-44eb-b09e-4512ce955b3f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67r74"
Jan 23 17:37:00 crc kubenswrapper[4606]: I0123 17:37:00.247771 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfvqd\" (UniqueName: \"kubernetes.io/projected/56db46d4-844a-44eb-b09e-4512ce955b3f-kube-api-access-kfvqd\") pod \"nova-edpm-deployment-openstack-edpm-ipam-67r74\" (UID: \"56db46d4-844a-44eb-b09e-4512ce955b3f\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67r74"
Jan 23 17:37:00 crc kubenswrapper[4606]: I0123 17:37:00.403297 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67r74"
Jan 23 17:37:01 crc kubenswrapper[4606]: I0123 17:37:01.307058 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="a43fb635-59de-4576-bdf4-9eacebc80a85" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 23 17:37:01 crc kubenswrapper[4606]: I0123 17:37:01.332666 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-67r74"]
Jan 23 17:37:01 crc kubenswrapper[4606]: I0123 17:37:01.697352 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67r74" event={"ID":"56db46d4-844a-44eb-b09e-4512ce955b3f","Type":"ContainerStarted","Data":"9cbf4991dcb129b4183b4f32f85d11dabbd8c142bf2c337a967fa67e7384d221"}
Jan 23 17:37:02 crc kubenswrapper[4606]: I0123 17:37:02.739437 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67r74" event={"ID":"56db46d4-844a-44eb-b09e-4512ce955b3f","Type":"ContainerStarted","Data":"081e1c59e3fbdbe4756ade1921ae676618ef6e4c7812a809ebef1f5efecc9d12"}
Jan 23 17:37:06 crc kubenswrapper[4606]: I0123 17:37:06.297170 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="a43fb635-59de-4576-bdf4-9eacebc80a85" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 23 17:37:09 crc kubenswrapper[4606]: I0123 17:37:09.366118 4606 scope.go:117] "RemoveContainer" containerID="ae71f5feb61ac0fdc01d5557db5e04028246a8e932ab5e9739ba55dfcdd38258"
Jan 23 17:37:09 crc kubenswrapper[4606]: E0123 17:37:09.367482 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9"
Jan 23 17:37:10 crc kubenswrapper[4606]: I0123 17:37:10.914902 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67r74" podStartSLOduration=10.35145491 podStartE2EDuration="10.91488284s" podCreationTimestamp="2026-01-23 17:37:00 +0000 UTC" firstStartedPulling="2026-01-23 17:37:01.332006819 +0000 UTC m=+2655.332199235" lastFinishedPulling="2026-01-23 17:37:01.895434759 +0000 UTC m=+2655.895627165" observedRunningTime="2026-01-23 17:37:02.763027664 +0000 UTC m=+2656.763220170" watchObservedRunningTime="2026-01-23 17:37:10.91488284 +0000 UTC m=+2664.915075246"
Jan 23 17:37:10 crc kubenswrapper[4606]: I0123 17:37:10.926643 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 17:37:10 crc kubenswrapper[4606]: I0123 17:37:10.926985 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dd93c16a-9cd2-4011-a291-40070e3bfe74" containerName="ceilometer-notification-agent" containerID="cri-o://dacd7d71ee1d0ac54c36149460274346afec7dfd51a4e6e3420c6358b8953cbc" gracePeriod=30
Jan 23 17:37:10 crc kubenswrapper[4606]: I0123 17:37:10.927040 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dd93c16a-9cd2-4011-a291-40070e3bfe74" containerName="ceilometer-central-agent" containerID="cri-o://b05ddd194eff1f60faac10730a28ba2893bb3f936f1fef7be8465e8ae543af3e" gracePeriod=30
Jan 23 17:37:10 crc kubenswrapper[4606]: I0123 17:37:10.927102 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dd93c16a-9cd2-4011-a291-40070e3bfe74" containerName="sg-core" containerID="cri-o://504ea320d660965b8dca144bb49d1bc60214223b4ab27426518376af16315339" gracePeriod=30
Jan 23 17:37:10 crc kubenswrapper[4606]: I0123 17:37:10.927320 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dd93c16a-9cd2-4011-a291-40070e3bfe74" containerName="proxy-httpd" containerID="cri-o://9fee9e4ffbc63d557af0741005884e28c618e7c68997aa05c9ecc7f148d3e285" gracePeriod=30
Jan 23 17:37:11 crc kubenswrapper[4606]: I0123 17:37:11.341487 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="a43fb635-59de-4576-bdf4-9eacebc80a85" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 23 17:37:11 crc kubenswrapper[4606]: I0123 17:37:11.826147 4606 generic.go:334] "Generic (PLEG): container finished" podID="dd93c16a-9cd2-4011-a291-40070e3bfe74" containerID="b05ddd194eff1f60faac10730a28ba2893bb3f936f1fef7be8465e8ae543af3e" exitCode=0
Jan 23 17:37:11 crc kubenswrapper[4606]: I0123 17:37:11.826519 4606 generic.go:334] "Generic (PLEG): container finished" podID="dd93c16a-9cd2-4011-a291-40070e3bfe74" containerID="9fee9e4ffbc63d557af0741005884e28c618e7c68997aa05c9ecc7f148d3e285" exitCode=0
Jan 23 17:37:11 crc kubenswrapper[4606]: I0123 17:37:11.826532 4606 generic.go:334] "Generic (PLEG): container finished" podID="dd93c16a-9cd2-4011-a291-40070e3bfe74" containerID="504ea320d660965b8dca144bb49d1bc60214223b4ab27426518376af16315339" exitCode=2
Jan 23 17:37:11 crc kubenswrapper[4606]: I0123 17:37:11.826229 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd93c16a-9cd2-4011-a291-40070e3bfe74","Type":"ContainerDied","Data":"b05ddd194eff1f60faac10730a28ba2893bb3f936f1fef7be8465e8ae543af3e"}
Jan 23 17:37:11 crc kubenswrapper[4606]: I0123 17:37:11.826611 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd93c16a-9cd2-4011-a291-40070e3bfe74","Type":"ContainerDied","Data":"9fee9e4ffbc63d557af0741005884e28c618e7c68997aa05c9ecc7f148d3e285"}
Jan 23 17:37:11 crc kubenswrapper[4606]: I0123 17:37:11.826631 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd93c16a-9cd2-4011-a291-40070e3bfe74","Type":"ContainerDied","Data":"504ea320d660965b8dca144bb49d1bc60214223b4ab27426518376af16315339"}
Jan 23 17:37:11 crc kubenswrapper[4606]: I0123 17:37:11.826653 4606 scope.go:117] "RemoveContainer" containerID="2371365fe1d7ea6a8aa02a1387d8d67fbc8787bdd05c88cca89c4e3dbe5a0597"
Jan 23 17:37:12 crc kubenswrapper[4606]: I0123 17:37:12.516370 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 17:37:12 crc kubenswrapper[4606]: I0123 17:37:12.692261 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd93c16a-9cd2-4011-a291-40070e3bfe74-scripts\") pod \"dd93c16a-9cd2-4011-a291-40070e3bfe74\" (UID: \"dd93c16a-9cd2-4011-a291-40070e3bfe74\") " Jan 23 17:37:12 crc kubenswrapper[4606]: I0123 17:37:12.692601 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd93c16a-9cd2-4011-a291-40070e3bfe74-log-httpd\") pod \"dd93c16a-9cd2-4011-a291-40070e3bfe74\" (UID: \"dd93c16a-9cd2-4011-a291-40070e3bfe74\") " Jan 23 17:37:12 crc kubenswrapper[4606]: I0123 17:37:12.692657 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vbf9\" (UniqueName: \"kubernetes.io/projected/dd93c16a-9cd2-4011-a291-40070e3bfe74-kube-api-access-8vbf9\") pod \"dd93c16a-9cd2-4011-a291-40070e3bfe74\" (UID: \"dd93c16a-9cd2-4011-a291-40070e3bfe74\") " Jan 23 17:37:12 crc kubenswrapper[4606]: I0123 17:37:12.692752 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd93c16a-9cd2-4011-a291-40070e3bfe74-ceilometer-tls-certs\") pod \"dd93c16a-9cd2-4011-a291-40070e3bfe74\" (UID: \"dd93c16a-9cd2-4011-a291-40070e3bfe74\") " Jan 23 17:37:12 crc kubenswrapper[4606]: I0123 17:37:12.692782 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd93c16a-9cd2-4011-a291-40070e3bfe74-sg-core-conf-yaml\") pod \"dd93c16a-9cd2-4011-a291-40070e3bfe74\" (UID: \"dd93c16a-9cd2-4011-a291-40070e3bfe74\") " Jan 23 17:37:12 crc kubenswrapper[4606]: I0123 17:37:12.692816 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd93c16a-9cd2-4011-a291-40070e3bfe74-combined-ca-bundle\") pod \"dd93c16a-9cd2-4011-a291-40070e3bfe74\" (UID: \"dd93c16a-9cd2-4011-a291-40070e3bfe74\") " Jan 23 17:37:12 crc kubenswrapper[4606]: I0123 17:37:12.692877 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd93c16a-9cd2-4011-a291-40070e3bfe74-config-data\") pod \"dd93c16a-9cd2-4011-a291-40070e3bfe74\" (UID: \"dd93c16a-9cd2-4011-a291-40070e3bfe74\") " Jan 23 17:37:12 crc kubenswrapper[4606]: I0123 17:37:12.692923 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd93c16a-9cd2-4011-a291-40070e3bfe74-run-httpd\") pod \"dd93c16a-9cd2-4011-a291-40070e3bfe74\" (UID: \"dd93c16a-9cd2-4011-a291-40070e3bfe74\") " Jan 23 17:37:12 crc kubenswrapper[4606]: I0123 17:37:12.694695 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd93c16a-9cd2-4011-a291-40070e3bfe74-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "dd93c16a-9cd2-4011-a291-40070e3bfe74" (UID: "dd93c16a-9cd2-4011-a291-40070e3bfe74"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:37:12 crc kubenswrapper[4606]: I0123 17:37:12.695528 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd93c16a-9cd2-4011-a291-40070e3bfe74-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "dd93c16a-9cd2-4011-a291-40070e3bfe74" (UID: "dd93c16a-9cd2-4011-a291-40070e3bfe74"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:37:12 crc kubenswrapper[4606]: I0123 17:37:12.701510 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd93c16a-9cd2-4011-a291-40070e3bfe74-kube-api-access-8vbf9" (OuterVolumeSpecName: "kube-api-access-8vbf9") pod "dd93c16a-9cd2-4011-a291-40070e3bfe74" (UID: "dd93c16a-9cd2-4011-a291-40070e3bfe74"). InnerVolumeSpecName "kube-api-access-8vbf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:37:12 crc kubenswrapper[4606]: I0123 17:37:12.722766 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd93c16a-9cd2-4011-a291-40070e3bfe74-scripts" (OuterVolumeSpecName: "scripts") pod "dd93c16a-9cd2-4011-a291-40070e3bfe74" (UID: "dd93c16a-9cd2-4011-a291-40070e3bfe74"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:37:12 crc kubenswrapper[4606]: I0123 17:37:12.754826 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd93c16a-9cd2-4011-a291-40070e3bfe74-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "dd93c16a-9cd2-4011-a291-40070e3bfe74" (UID: "dd93c16a-9cd2-4011-a291-40070e3bfe74"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:37:12 crc kubenswrapper[4606]: I0123 17:37:12.795278 4606 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd93c16a-9cd2-4011-a291-40070e3bfe74-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:37:12 crc kubenswrapper[4606]: I0123 17:37:12.795311 4606 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd93c16a-9cd2-4011-a291-40070e3bfe74-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 17:37:12 crc kubenswrapper[4606]: I0123 17:37:12.795321 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vbf9\" (UniqueName: \"kubernetes.io/projected/dd93c16a-9cd2-4011-a291-40070e3bfe74-kube-api-access-8vbf9\") on node \"crc\" DevicePath \"\"" Jan 23 17:37:12 crc kubenswrapper[4606]: I0123 17:37:12.795331 4606 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd93c16a-9cd2-4011-a291-40070e3bfe74-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 17:37:12 crc kubenswrapper[4606]: I0123 17:37:12.795339 4606 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd93c16a-9cd2-4011-a291-40070e3bfe74-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 17:37:12 crc kubenswrapper[4606]: I0123 17:37:12.809315 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd93c16a-9cd2-4011-a291-40070e3bfe74-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "dd93c16a-9cd2-4011-a291-40070e3bfe74" (UID: "dd93c16a-9cd2-4011-a291-40070e3bfe74"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:37:12 crc kubenswrapper[4606]: I0123 17:37:12.833798 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd93c16a-9cd2-4011-a291-40070e3bfe74-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd93c16a-9cd2-4011-a291-40070e3bfe74" (UID: "dd93c16a-9cd2-4011-a291-40070e3bfe74"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:37:12 crc kubenswrapper[4606]: I0123 17:37:12.876696 4606 generic.go:334] "Generic (PLEG): container finished" podID="dd93c16a-9cd2-4011-a291-40070e3bfe74" containerID="dacd7d71ee1d0ac54c36149460274346afec7dfd51a4e6e3420c6358b8953cbc" exitCode=0 Jan 23 17:37:12 crc kubenswrapper[4606]: I0123 17:37:12.876750 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd93c16a-9cd2-4011-a291-40070e3bfe74","Type":"ContainerDied","Data":"dacd7d71ee1d0ac54c36149460274346afec7dfd51a4e6e3420c6358b8953cbc"} Jan 23 17:37:12 crc kubenswrapper[4606]: I0123 17:37:12.877218 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 17:37:12 crc kubenswrapper[4606]: I0123 17:37:12.877345 4606 scope.go:117] "RemoveContainer" containerID="b05ddd194eff1f60faac10730a28ba2893bb3f936f1fef7be8465e8ae543af3e" Jan 23 17:37:12 crc kubenswrapper[4606]: I0123 17:37:12.876783 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd93c16a-9cd2-4011-a291-40070e3bfe74","Type":"ContainerDied","Data":"dba45a6c442a0e73dec72fb39340fa6d85da19e66e17a007117d68588d5f26dd"} Jan 23 17:37:12 crc kubenswrapper[4606]: I0123 17:37:12.897231 4606 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd93c16a-9cd2-4011-a291-40070e3bfe74-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 17:37:12 crc kubenswrapper[4606]: I0123 17:37:12.897263 4606 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd93c16a-9cd2-4011-a291-40070e3bfe74-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:37:12 crc kubenswrapper[4606]: I0123 17:37:12.912832 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd93c16a-9cd2-4011-a291-40070e3bfe74-config-data" (OuterVolumeSpecName: "config-data") pod "dd93c16a-9cd2-4011-a291-40070e3bfe74" (UID: "dd93c16a-9cd2-4011-a291-40070e3bfe74"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:37:12 crc kubenswrapper[4606]: I0123 17:37:12.960229 4606 scope.go:117] "RemoveContainer" containerID="9fee9e4ffbc63d557af0741005884e28c618e7c68997aa05c9ecc7f148d3e285" Jan 23 17:37:12 crc kubenswrapper[4606]: I0123 17:37:12.999560 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd93c16a-9cd2-4011-a291-40070e3bfe74-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.031820 4606 scope.go:117] "RemoveContainer" containerID="504ea320d660965b8dca144bb49d1bc60214223b4ab27426518376af16315339" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.073307 4606 scope.go:117] "RemoveContainer" containerID="dacd7d71ee1d0ac54c36149460274346afec7dfd51a4e6e3420c6358b8953cbc" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.101181 4606 scope.go:117] "RemoveContainer" containerID="b05ddd194eff1f60faac10730a28ba2893bb3f936f1fef7be8465e8ae543af3e" Jan 23 17:37:13 crc kubenswrapper[4606]: E0123 17:37:13.101682 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b05ddd194eff1f60faac10730a28ba2893bb3f936f1fef7be8465e8ae543af3e\": container with ID starting with b05ddd194eff1f60faac10730a28ba2893bb3f936f1fef7be8465e8ae543af3e not found: ID does not exist" containerID="b05ddd194eff1f60faac10730a28ba2893bb3f936f1fef7be8465e8ae543af3e" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.101714 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b05ddd194eff1f60faac10730a28ba2893bb3f936f1fef7be8465e8ae543af3e"} err="failed to get container status \"b05ddd194eff1f60faac10730a28ba2893bb3f936f1fef7be8465e8ae543af3e\": rpc error: code = NotFound desc = could not find container \"b05ddd194eff1f60faac10730a28ba2893bb3f936f1fef7be8465e8ae543af3e\": container with ID starting with b05ddd194eff1f60faac10730a28ba2893bb3f936f1fef7be8465e8ae543af3e not found: ID does not exist" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.101735 4606 scope.go:117] "RemoveContainer" containerID="9fee9e4ffbc63d557af0741005884e28c618e7c68997aa05c9ecc7f148d3e285" Jan 23 17:37:13 crc kubenswrapper[4606]: E0123 17:37:13.102635 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fee9e4ffbc63d557af0741005884e28c618e7c68997aa05c9ecc7f148d3e285\": container with ID starting with 9fee9e4ffbc63d557af0741005884e28c618e7c68997aa05c9ecc7f148d3e285 not found: ID does not exist" containerID="9fee9e4ffbc63d557af0741005884e28c618e7c68997aa05c9ecc7f148d3e285" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.102661 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fee9e4ffbc63d557af0741005884e28c618e7c68997aa05c9ecc7f148d3e285"} err="failed to get container status \"9fee9e4ffbc63d557af0741005884e28c618e7c68997aa05c9ecc7f148d3e285\": rpc error: code = NotFound desc = could not find container \"9fee9e4ffbc63d557af0741005884e28c618e7c68997aa05c9ecc7f148d3e285\": container with ID starting with 9fee9e4ffbc63d557af0741005884e28c618e7c68997aa05c9ecc7f148d3e285 not found: ID does not exist" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.102675 4606 scope.go:117] "RemoveContainer" containerID="504ea320d660965b8dca144bb49d1bc60214223b4ab27426518376af16315339" Jan 23 17:37:13 crc kubenswrapper[4606]: E0123 
17:37:13.102946 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"504ea320d660965b8dca144bb49d1bc60214223b4ab27426518376af16315339\": container with ID starting with 504ea320d660965b8dca144bb49d1bc60214223b4ab27426518376af16315339 not found: ID does not exist" containerID="504ea320d660965b8dca144bb49d1bc60214223b4ab27426518376af16315339" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.102963 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"504ea320d660965b8dca144bb49d1bc60214223b4ab27426518376af16315339"} err="failed to get container status \"504ea320d660965b8dca144bb49d1bc60214223b4ab27426518376af16315339\": rpc error: code = NotFound desc = could not find container \"504ea320d660965b8dca144bb49d1bc60214223b4ab27426518376af16315339\": container with ID starting with 504ea320d660965b8dca144bb49d1bc60214223b4ab27426518376af16315339 not found: ID does not exist" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.102974 4606 scope.go:117] "RemoveContainer" containerID="dacd7d71ee1d0ac54c36149460274346afec7dfd51a4e6e3420c6358b8953cbc" Jan 23 17:37:13 crc kubenswrapper[4606]: E0123 17:37:13.103190 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dacd7d71ee1d0ac54c36149460274346afec7dfd51a4e6e3420c6358b8953cbc\": container with ID starting with dacd7d71ee1d0ac54c36149460274346afec7dfd51a4e6e3420c6358b8953cbc not found: ID does not exist" containerID="dacd7d71ee1d0ac54c36149460274346afec7dfd51a4e6e3420c6358b8953cbc" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.103205 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dacd7d71ee1d0ac54c36149460274346afec7dfd51a4e6e3420c6358b8953cbc"} err="failed to get container status \"dacd7d71ee1d0ac54c36149460274346afec7dfd51a4e6e3420c6358b8953cbc\": rpc error: code = NotFound desc = could not find container \"dacd7d71ee1d0ac54c36149460274346afec7dfd51a4e6e3420c6358b8953cbc\": container with ID starting with dacd7d71ee1d0ac54c36149460274346afec7dfd51a4e6e3420c6358b8953cbc not found: ID does not exist" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.210714 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.224535 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.242222 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:37:13 crc kubenswrapper[4606]: E0123 17:37:13.242670 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd93c16a-9cd2-4011-a291-40070e3bfe74" containerName="ceilometer-central-agent" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.242687 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd93c16a-9cd2-4011-a291-40070e3bfe74" containerName="ceilometer-central-agent" Jan 23 17:37:13 crc kubenswrapper[4606]: E0123 17:37:13.242700 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd93c16a-9cd2-4011-a291-40070e3bfe74" containerName="sg-core" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.242707 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd93c16a-9cd2-4011-a291-40070e3bfe74" containerName="sg-core" Jan 23 17:37:13 crc kubenswrapper[4606]: E0123 17:37:13.242717 4606 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd93c16a-9cd2-4011-a291-40070e3bfe74" containerName="proxy-httpd" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.242724 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd93c16a-9cd2-4011-a291-40070e3bfe74" containerName="proxy-httpd" Jan 23 17:37:13 crc kubenswrapper[4606]: E0123 17:37:13.242745 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd93c16a-9cd2-4011-a291-40070e3bfe74" containerName="ceilometer-central-agent" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.242751 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd93c16a-9cd2-4011-a291-40070e3bfe74" containerName="ceilometer-central-agent" Jan 23 17:37:13 crc kubenswrapper[4606]: E0123 17:37:13.242763 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd93c16a-9cd2-4011-a291-40070e3bfe74" containerName="ceilometer-notification-agent" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.242769 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd93c16a-9cd2-4011-a291-40070e3bfe74" containerName="ceilometer-notification-agent" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.242959 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd93c16a-9cd2-4011-a291-40070e3bfe74" containerName="ceilometer-central-agent" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.242971 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd93c16a-9cd2-4011-a291-40070e3bfe74" containerName="ceilometer-notification-agent" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.242984 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd93c16a-9cd2-4011-a291-40070e3bfe74" containerName="proxy-httpd" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.242996 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd93c16a-9cd2-4011-a291-40070e3bfe74" containerName="sg-core" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.243297 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd93c16a-9cd2-4011-a291-40070e3bfe74" containerName="ceilometer-central-agent" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.244521 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.253023 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.253216 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.253849 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.262757 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.405822 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-run-httpd\") pod \"ceilometer-0\" (UID: \"c2530c45-7c9e-4d61-b5bc-6c02d7f40693\") " pod="openstack/ceilometer-0" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.405942 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-config-data\") pod \"ceilometer-0\" (UID: \"c2530c45-7c9e-4d61-b5bc-6c02d7f40693\") " pod="openstack/ceilometer-0" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.405982 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-scripts\") pod \"ceilometer-0\" (UID: \"c2530c45-7c9e-4d61-b5bc-6c02d7f40693\") " pod="openstack/ceilometer-0" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.406018 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c2530c45-7c9e-4d61-b5bc-6c02d7f40693\") " pod="openstack/ceilometer-0" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.406041 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzj4w\" (UniqueName: \"kubernetes.io/projected/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-kube-api-access-gzj4w\") pod \"ceilometer-0\" (UID: \"c2530c45-7c9e-4d61-b5bc-6c02d7f40693\") " pod="openstack/ceilometer-0" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.406094 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c2530c45-7c9e-4d61-b5bc-6c02d7f40693\") " pod="openstack/ceilometer-0" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.406206 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c2530c45-7c9e-4d61-b5bc-6c02d7f40693\") " pod="openstack/ceilometer-0" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.406235 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-log-httpd\") pod \"ceilometer-0\" (UID: \"c2530c45-7c9e-4d61-b5bc-6c02d7f40693\") " pod="openstack/ceilometer-0" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.507578 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-config-data\") pod \"ceilometer-0\" (UID: \"c2530c45-7c9e-4d61-b5bc-6c02d7f40693\") " pod="openstack/ceilometer-0" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.507909 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-scripts\") pod \"ceilometer-0\" (UID: \"c2530c45-7c9e-4d61-b5bc-6c02d7f40693\") " pod="openstack/ceilometer-0" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.508022 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c2530c45-7c9e-4d61-b5bc-6c02d7f40693\") " pod="openstack/ceilometer-0" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.508111 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzj4w\" (UniqueName: \"kubernetes.io/projected/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-kube-api-access-gzj4w\") pod \"ceilometer-0\" (UID: \"c2530c45-7c9e-4d61-b5bc-6c02d7f40693\") " pod="openstack/ceilometer-0" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.508207 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c2530c45-7c9e-4d61-b5bc-6c02d7f40693\") " pod="openstack/ceilometer-0" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.508347 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c2530c45-7c9e-4d61-b5bc-6c02d7f40693\") " pod="openstack/ceilometer-0" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.508438 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-log-httpd\") pod \"ceilometer-0\" (UID: \"c2530c45-7c9e-4d61-b5bc-6c02d7f40693\") " pod="openstack/ceilometer-0" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.508568 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-run-httpd\") pod \"ceilometer-0\" (UID: \"c2530c45-7c9e-4d61-b5bc-6c02d7f40693\") " pod="openstack/ceilometer-0" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.509069 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-log-httpd\") pod \"ceilometer-0\" (UID: \"c2530c45-7c9e-4d61-b5bc-6c02d7f40693\") " pod="openstack/ceilometer-0" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.509540 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-run-httpd\") pod \"ceilometer-0\" (UID: \"c2530c45-7c9e-4d61-b5bc-6c02d7f40693\") " pod="openstack/ceilometer-0" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.517475 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c2530c45-7c9e-4d61-b5bc-6c02d7f40693\") " pod="openstack/ceilometer-0" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.517828 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c2530c45-7c9e-4d61-b5bc-6c02d7f40693\") " pod="openstack/ceilometer-0" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.517832 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-scripts\") pod \"ceilometer-0\" (UID: \"c2530c45-7c9e-4d61-b5bc-6c02d7f40693\") " pod="openstack/ceilometer-0" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.518495 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c2530c45-7c9e-4d61-b5bc-6c02d7f40693\") " pod="openstack/ceilometer-0" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.530279 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-config-data\") pod \"ceilometer-0\" (UID: \"c2530c45-7c9e-4d61-b5bc-6c02d7f40693\") " pod="openstack/ceilometer-0" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.532257 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzj4w\" (UniqueName: \"kubernetes.io/projected/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-kube-api-access-gzj4w\") pod \"ceilometer-0\" (UID: \"c2530c45-7c9e-4d61-b5bc-6c02d7f40693\") " pod="openstack/ceilometer-0" Jan 23 17:37:13 crc kubenswrapper[4606]: I0123 17:37:13.578779 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 17:37:14 crc kubenswrapper[4606]: I0123 17:37:14.375574 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd93c16a-9cd2-4011-a291-40070e3bfe74" path="/var/lib/kubelet/pods/dd93c16a-9cd2-4011-a291-40070e3bfe74/volumes" Jan 23 17:37:15 crc kubenswrapper[4606]: I0123 17:37:15.883147 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-779944bfcf-5nq4l" Jan 23 17:37:15 crc kubenswrapper[4606]: I0123 17:37:15.927434 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:37:15 crc kubenswrapper[4606]: W0123 17:37:15.940099 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2530c45_7c9e_4d61_b5bc_6c02d7f40693.slice/crio-fa09385d3f138455c088fc3d1d1ac6ecd92f5a06112d45dc6460edb8cbc20c5b WatchSource:0}: Error finding container fa09385d3f138455c088fc3d1d1ac6ecd92f5a06112d45dc6460edb8cbc20c5b: Status 404 returned error can't find the container with id fa09385d3f138455c088fc3d1d1ac6ecd92f5a06112d45dc6460edb8cbc20c5b Jan 23 17:37:16 crc kubenswrapper[4606]: I0123 17:37:16.350903 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="a43fb635-59de-4576-bdf4-9eacebc80a85" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 17:37:16 crc kubenswrapper[4606]: I0123 17:37:16.925858 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2530c45-7c9e-4d61-b5bc-6c02d7f40693","Type":"ContainerStarted","Data":"fa09385d3f138455c088fc3d1d1ac6ecd92f5a06112d45dc6460edb8cbc20c5b"} Jan 23 17:37:17 crc kubenswrapper[4606]: E0123 17:37:17.426550 4606 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.22:36662->38.102.83.22:38857: write tcp 38.102.83.22:36662->38.102.83.22:38857: write: connection reset by peer Jan 23 17:37:17 crc kubenswrapper[4606]: I0123 17:37:17.940648 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2530c45-7c9e-4d61-b5bc-6c02d7f40693","Type":"ContainerStarted","Data":"0f34e81ea8b6fc3502da9f9f548493ed4c0a6442158e4edfe9351386777d4bc5"} Jan 23 17:37:19 crc kubenswrapper[4606]: I0123 17:37:19.958995 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2530c45-7c9e-4d61-b5bc-6c02d7f40693","Type":"ContainerStarted","Data":"dcc0baf944edbf2c9f416ee599c48861aafc7a6f3a359532feffcba881dd2fa8"} Jan 23 17:37:20 crc kubenswrapper[4606]: I0123 17:37:20.302382 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 23 17:37:20 crc kubenswrapper[4606]: I0123 17:37:20.366641 4606 scope.go:117] "RemoveContainer" containerID="ae71f5feb61ac0fdc01d5557db5e04028246a8e932ab5e9739ba55dfcdd38258" Jan 23 17:37:20 crc kubenswrapper[4606]: E0123 17:37:20.367014 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:37:20 crc 
kubenswrapper[4606]: I0123 17:37:20.480822 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="684acded-8efb-47ea-8bb4-0212874a6826" containerName="galera" probeResult="failure" output=< Jan 23 17:37:20 crc kubenswrapper[4606]: wsrep_local_state_comment (Joined) differs from Synced Jan 23 17:37:20 crc kubenswrapper[4606]: > Jan 23 17:37:20 crc kubenswrapper[4606]: I0123 17:37:20.970117 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2530c45-7c9e-4d61-b5bc-6c02d7f40693","Type":"ContainerStarted","Data":"4cf67cb9e7fced0e43cdb2fd4ad77944d9cf2f7d4c447553c680e7560410d7a5"} Jan 23 17:37:21 crc kubenswrapper[4606]: I0123 17:37:21.315054 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="a43fb635-59de-4576-bdf4-9eacebc80a85" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 17:37:22 crc kubenswrapper[4606]: I0123 17:37:22.996661 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2530c45-7c9e-4d61-b5bc-6c02d7f40693","Type":"ContainerStarted","Data":"f3fcd26de5e9bcd329a3c89dd739cf596bc270a291120bed58b282159a653c99"} Jan 23 17:37:22 crc kubenswrapper[4606]: I0123 17:37:22.998364 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 17:37:23 crc kubenswrapper[4606]: I0123 17:37:23.023456 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.501210513 podStartE2EDuration="10.02343697s" podCreationTimestamp="2026-01-23 17:37:13 +0000 UTC" firstStartedPulling="2026-01-23 17:37:15.94219619 +0000 UTC m=+2669.942388596" lastFinishedPulling="2026-01-23 17:37:22.464422647 +0000 UTC m=+2676.464615053" observedRunningTime="2026-01-23 17:37:23.020002026 +0000 UTC m=+2677.020194432" watchObservedRunningTime="2026-01-23 17:37:23.02343697 +0000 UTC m=+2677.023629396" Jan 23 17:37:26 crc kubenswrapper[4606]: I0123 17:37:26.305277 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="a43fb635-59de-4576-bdf4-9eacebc80a85" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 17:37:29 crc kubenswrapper[4606]: I0123 17:37:29.008256 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 23 17:37:33 crc kubenswrapper[4606]: I0123 17:37:31.631492 4606 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-whw2p container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 17:37:33 crc kubenswrapper[4606]: I0123 17:37:31.632264 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-whw2p" podUID="9e9e2c8c-5d03-48b0-ab9c-110eeef212b5" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 17:37:33 crc kubenswrapper[4606]: I0123 17:37:33.491272 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/cinder-scheduler-0" Jan 23 17:37:34 crc kubenswrapper[4606]: I0123 17:37:34.365778 4606 scope.go:117] "RemoveContainer" containerID="ae71f5feb61ac0fdc01d5557db5e04028246a8e932ab5e9739ba55dfcdd38258" Jan 23 17:37:34 crc kubenswrapper[4606]: E0123 17:37:34.366102 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:37:34 crc kubenswrapper[4606]: I0123 17:37:34.846389 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:37:34 crc kubenswrapper[4606]: I0123 17:37:34.846751 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c2530c45-7c9e-4d61-b5bc-6c02d7f40693" containerName="ceilometer-central-agent" containerID="cri-o://0f34e81ea8b6fc3502da9f9f548493ed4c0a6442158e4edfe9351386777d4bc5" gracePeriod=30 Jan 23 17:37:34 crc kubenswrapper[4606]: I0123 17:37:34.846858 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c2530c45-7c9e-4d61-b5bc-6c02d7f40693" containerName="sg-core" containerID="cri-o://4cf67cb9e7fced0e43cdb2fd4ad77944d9cf2f7d4c447553c680e7560410d7a5" gracePeriod=30 Jan 23 17:37:34 crc kubenswrapper[4606]: I0123 17:37:34.846893 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c2530c45-7c9e-4d61-b5bc-6c02d7f40693" containerName="ceilometer-notification-agent" containerID="cri-o://dcc0baf944edbf2c9f416ee599c48861aafc7a6f3a359532feffcba881dd2fa8" gracePeriod=30 Jan 23 17:37:34 crc kubenswrapper[4606]: I0123 17:37:34.847003 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c2530c45-7c9e-4d61-b5bc-6c02d7f40693" containerName="proxy-httpd" containerID="cri-o://f3fcd26de5e9bcd329a3c89dd739cf596bc270a291120bed58b282159a653c99" gracePeriod=30 Jan 23 17:37:34 crc kubenswrapper[4606]: I0123 17:37:34.866965 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 23 17:37:36 crc kubenswrapper[4606]: I0123 17:37:36.132604 4606 generic.go:334] "Generic (PLEG): container finished" podID="c2530c45-7c9e-4d61-b5bc-6c02d7f40693" containerID="f3fcd26de5e9bcd329a3c89dd739cf596bc270a291120bed58b282159a653c99" exitCode=0 Jan 23 17:37:36 crc kubenswrapper[4606]: I0123 17:37:36.132869 4606 generic.go:334] "Generic (PLEG): container finished" podID="c2530c45-7c9e-4d61-b5bc-6c02d7f40693" containerID="4cf67cb9e7fced0e43cdb2fd4ad77944d9cf2f7d4c447553c680e7560410d7a5" exitCode=2 Jan 23 17:37:36 crc kubenswrapper[4606]: I0123 17:37:36.132882 4606 generic.go:334] "Generic (PLEG): container finished" podID="c2530c45-7c9e-4d61-b5bc-6c02d7f40693" containerID="0f34e81ea8b6fc3502da9f9f548493ed4c0a6442158e4edfe9351386777d4bc5" exitCode=0 Jan 23 17:37:36 crc kubenswrapper[4606]: I0123 17:37:36.132650 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2530c45-7c9e-4d61-b5bc-6c02d7f40693","Type":"ContainerDied","Data":"f3fcd26de5e9bcd329a3c89dd739cf596bc270a291120bed58b282159a653c99"} Jan 23 17:37:36 crc kubenswrapper[4606]: I0123 17:37:36.132918 4606 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2530c45-7c9e-4d61-b5bc-6c02d7f40693","Type":"ContainerDied","Data":"4cf67cb9e7fced0e43cdb2fd4ad77944d9cf2f7d4c447553c680e7560410d7a5"} Jan 23 17:37:36 crc kubenswrapper[4606]: I0123 17:37:36.132932 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2530c45-7c9e-4d61-b5bc-6c02d7f40693","Type":"ContainerDied","Data":"0f34e81ea8b6fc3502da9f9f548493ed4c0a6442158e4edfe9351386777d4bc5"} Jan 23 17:37:37 crc kubenswrapper[4606]: I0123 17:37:37.145692 4606 generic.go:334] "Generic (PLEG): container finished" podID="c2530c45-7c9e-4d61-b5bc-6c02d7f40693" containerID="dcc0baf944edbf2c9f416ee599c48861aafc7a6f3a359532feffcba881dd2fa8" exitCode=0 Jan 23 17:37:37 crc kubenswrapper[4606]: I0123 17:37:37.145862 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2530c45-7c9e-4d61-b5bc-6c02d7f40693","Type":"ContainerDied","Data":"dcc0baf944edbf2c9f416ee599c48861aafc7a6f3a359532feffcba881dd2fa8"} Jan 23 17:37:38 crc kubenswrapper[4606]: I0123 17:37:38.313338 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 17:37:38 crc kubenswrapper[4606]: I0123 17:37:38.439126 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-log-httpd\") pod \"c2530c45-7c9e-4d61-b5bc-6c02d7f40693\" (UID: \"c2530c45-7c9e-4d61-b5bc-6c02d7f40693\") " Jan 23 17:37:38 crc kubenswrapper[4606]: I0123 17:37:38.439178 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-ceilometer-tls-certs\") pod \"c2530c45-7c9e-4d61-b5bc-6c02d7f40693\" (UID: \"c2530c45-7c9e-4d61-b5bc-6c02d7f40693\") " Jan 23 17:37:38 crc kubenswrapper[4606]: I0123 17:37:38.439252 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzj4w\" (UniqueName: \"kubernetes.io/projected/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-kube-api-access-gzj4w\") pod \"c2530c45-7c9e-4d61-b5bc-6c02d7f40693\" (UID: \"c2530c45-7c9e-4d61-b5bc-6c02d7f40693\") " Jan 23 17:37:38 crc kubenswrapper[4606]: I0123 17:37:38.439342 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-sg-core-conf-yaml\") pod \"c2530c45-7c9e-4d61-b5bc-6c02d7f40693\" (UID: \"c2530c45-7c9e-4d61-b5bc-6c02d7f40693\") " Jan 23 17:37:38 crc kubenswrapper[4606]: I0123 17:37:38.439413 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-combined-ca-bundle\") pod \"c2530c45-7c9e-4d61-b5bc-6c02d7f40693\" (UID: \"c2530c45-7c9e-4d61-b5bc-6c02d7f40693\") " Jan 23 17:37:38 crc kubenswrapper[4606]: I0123 17:37:38.439441 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-config-data\") pod \"c2530c45-7c9e-4d61-b5bc-6c02d7f40693\" (UID: \"c2530c45-7c9e-4d61-b5bc-6c02d7f40693\") " Jan 23 17:37:38 crc kubenswrapper[4606]: I0123 17:37:38.439496 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-run-httpd\") pod \"c2530c45-7c9e-4d61-b5bc-6c02d7f40693\" (UID: \"c2530c45-7c9e-4d61-b5bc-6c02d7f40693\") " Jan 23 17:37:38 crc kubenswrapper[4606]: I0123 17:37:38.439900 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c2530c45-7c9e-4d61-b5bc-6c02d7f40693" (UID: "c2530c45-7c9e-4d61-b5bc-6c02d7f40693"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:37:38 crc kubenswrapper[4606]: I0123 17:37:38.440112 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-scripts\") pod \"c2530c45-7c9e-4d61-b5bc-6c02d7f40693\" (UID: \"c2530c45-7c9e-4d61-b5bc-6c02d7f40693\") " Jan 23 17:37:38 crc kubenswrapper[4606]: I0123 17:37:38.440209 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c2530c45-7c9e-4d61-b5bc-6c02d7f40693" (UID: "c2530c45-7c9e-4d61-b5bc-6c02d7f40693"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:37:38 crc kubenswrapper[4606]: I0123 17:37:38.440639 4606 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 17:37:38 crc kubenswrapper[4606]: I0123 17:37:38.440656 4606 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 17:37:38 crc kubenswrapper[4606]: I0123 17:37:38.449644 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-scripts" (OuterVolumeSpecName: "scripts") pod "c2530c45-7c9e-4d61-b5bc-6c02d7f40693" (UID: "c2530c45-7c9e-4d61-b5bc-6c02d7f40693"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:37:38 crc kubenswrapper[4606]: I0123 17:37:38.453759 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-kube-api-access-gzj4w" (OuterVolumeSpecName: "kube-api-access-gzj4w") pod "c2530c45-7c9e-4d61-b5bc-6c02d7f40693" (UID: "c2530c45-7c9e-4d61-b5bc-6c02d7f40693"). InnerVolumeSpecName "kube-api-access-gzj4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:37:38 crc kubenswrapper[4606]: I0123 17:37:38.521690 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c2530c45-7c9e-4d61-b5bc-6c02d7f40693" (UID: "c2530c45-7c9e-4d61-b5bc-6c02d7f40693"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:37:38 crc kubenswrapper[4606]: I0123 17:37:38.545463 4606 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 17:37:38 crc kubenswrapper[4606]: I0123 17:37:38.546199 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzj4w\" (UniqueName: \"kubernetes.io/projected/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-kube-api-access-gzj4w\") on node \"crc\" DevicePath \"\"" Jan 23 17:37:38 crc kubenswrapper[4606]: I0123 17:37:38.546355 4606 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 17:37:38 crc kubenswrapper[4606]: I0123 17:37:38.605987 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c2530c45-7c9e-4d61-b5bc-6c02d7f40693" (UID: "c2530c45-7c9e-4d61-b5bc-6c02d7f40693"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:37:38 crc kubenswrapper[4606]: I0123 17:37:38.621873 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c2530c45-7c9e-4d61-b5bc-6c02d7f40693" (UID: "c2530c45-7c9e-4d61-b5bc-6c02d7f40693"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:37:38 crc kubenswrapper[4606]: I0123 17:37:38.649853 4606 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 17:37:38 crc kubenswrapper[4606]: I0123 17:37:38.649884 4606 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:37:38 crc kubenswrapper[4606]: I0123 17:37:38.652232 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-config-data" (OuterVolumeSpecName: "config-data") pod "c2530c45-7c9e-4d61-b5bc-6c02d7f40693" (UID: "c2530c45-7c9e-4d61-b5bc-6c02d7f40693"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:37:38 crc kubenswrapper[4606]: I0123 17:37:38.751226 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2530c45-7c9e-4d61-b5bc-6c02d7f40693-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.186627 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2530c45-7c9e-4d61-b5bc-6c02d7f40693","Type":"ContainerDied","Data":"fa09385d3f138455c088fc3d1d1ac6ecd92f5a06112d45dc6460edb8cbc20c5b"} Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.186709 4606 scope.go:117] "RemoveContainer" containerID="f3fcd26de5e9bcd329a3c89dd739cf596bc270a291120bed58b282159a653c99" Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.186723 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.207419 4606 scope.go:117] "RemoveContainer" containerID="4cf67cb9e7fced0e43cdb2fd4ad77944d9cf2f7d4c447553c680e7560410d7a5" Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.255362 4606 scope.go:117] "RemoveContainer" containerID="dcc0baf944edbf2c9f416ee599c48861aafc7a6f3a359532feffcba881dd2fa8" Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.256104 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.275548 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.305073 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:37:39 crc kubenswrapper[4606]: E0123 17:37:39.305851 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2530c45-7c9e-4d61-b5bc-6c02d7f40693" containerName="sg-core" Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.305985 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2530c45-7c9e-4d61-b5bc-6c02d7f40693" containerName="sg-core" Jan 23 17:37:39 crc kubenswrapper[4606]: E0123 17:37:39.306069 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2530c45-7c9e-4d61-b5bc-6c02d7f40693" containerName="ceilometer-central-agent" Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.306135 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2530c45-7c9e-4d61-b5bc-6c02d7f40693" containerName="ceilometer-central-agent" Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.307028 4606 scope.go:117] "RemoveContainer" containerID="0f34e81ea8b6fc3502da9f9f548493ed4c0a6442158e4edfe9351386777d4bc5" Jan 23 17:37:39 crc kubenswrapper[4606]: E0123 17:37:39.322108 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2530c45-7c9e-4d61-b5bc-6c02d7f40693" containerName="ceilometer-notification-agent" Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.322155 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2530c45-7c9e-4d61-b5bc-6c02d7f40693" containerName="ceilometer-notification-agent" Jan 23 17:37:39 crc kubenswrapper[4606]: E0123 17:37:39.322183 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2530c45-7c9e-4d61-b5bc-6c02d7f40693" containerName="proxy-httpd" Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.322191 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2530c45-7c9e-4d61-b5bc-6c02d7f40693" containerName="proxy-httpd" Jan 
Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.322646 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2530c45-7c9e-4d61-b5bc-6c02d7f40693" containerName="ceilometer-central-agent"
Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.322662 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2530c45-7c9e-4d61-b5bc-6c02d7f40693" containerName="sg-core"
Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.322680 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2530c45-7c9e-4d61-b5bc-6c02d7f40693" containerName="proxy-httpd"
Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.325542 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.325656 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.329513 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.330272 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.332001 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.365068 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c203fb6-d209-47b4-814f-a98707421288-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7c203fb6-d209-47b4-814f-a98707421288\") " pod="openstack/ceilometer-0"
Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.365107 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c203fb6-d209-47b4-814f-a98707421288-scripts\") pod \"ceilometer-0\" (UID: \"7c203fb6-d209-47b4-814f-a98707421288\") " pod="openstack/ceilometer-0"
Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.365143 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c203fb6-d209-47b4-814f-a98707421288-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7c203fb6-d209-47b4-814f-a98707421288\") " pod="openstack/ceilometer-0"
Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.365192 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c203fb6-d209-47b4-814f-a98707421288-config-data\") pod \"ceilometer-0\" (UID: \"7c203fb6-d209-47b4-814f-a98707421288\") " pod="openstack/ceilometer-0"
Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.365234 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c203fb6-d209-47b4-814f-a98707421288-log-httpd\") pod \"ceilometer-0\" (UID: \"7c203fb6-d209-47b4-814f-a98707421288\") " pod="openstack/ceilometer-0"
Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.365258 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c203fb6-d209-47b4-814f-a98707421288-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7c203fb6-d209-47b4-814f-a98707421288\") " pod="openstack/ceilometer-0"
Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.365314 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfckv\" (UniqueName: \"kubernetes.io/projected/7c203fb6-d209-47b4-814f-a98707421288-kube-api-access-mfckv\") pod \"ceilometer-0\" (UID: \"7c203fb6-d209-47b4-814f-a98707421288\") " pod="openstack/ceilometer-0"
Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.365357 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c203fb6-d209-47b4-814f-a98707421288-run-httpd\") pod \"ceilometer-0\" (UID: \"7c203fb6-d209-47b4-814f-a98707421288\") " pod="openstack/ceilometer-0"
Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.466796 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c203fb6-d209-47b4-814f-a98707421288-run-httpd\") pod \"ceilometer-0\" (UID: \"7c203fb6-d209-47b4-814f-a98707421288\") " pod="openstack/ceilometer-0"
Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.466853 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c203fb6-d209-47b4-814f-a98707421288-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7c203fb6-d209-47b4-814f-a98707421288\") " pod="openstack/ceilometer-0"
Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.466875 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c203fb6-d209-47b4-814f-a98707421288-scripts\") pod \"ceilometer-0\" (UID: \"7c203fb6-d209-47b4-814f-a98707421288\") " pod="openstack/ceilometer-0"
Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.466916 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c203fb6-d209-47b4-814f-a98707421288-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7c203fb6-d209-47b4-814f-a98707421288\") " pod="openstack/ceilometer-0"
Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.466987 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c203fb6-d209-47b4-814f-a98707421288-config-data\") pod \"ceilometer-0\" (UID: \"7c203fb6-d209-47b4-814f-a98707421288\") " pod="openstack/ceilometer-0"
Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.467049 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c203fb6-d209-47b4-814f-a98707421288-log-httpd\") pod \"ceilometer-0\" (UID: \"7c203fb6-d209-47b4-814f-a98707421288\") " pod="openstack/ceilometer-0"
Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.467098 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c203fb6-d209-47b4-814f-a98707421288-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7c203fb6-d209-47b4-814f-a98707421288\") " pod="openstack/ceilometer-0"
pod="openstack/ceilometer-0" Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.467203 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfckv\" (UniqueName: \"kubernetes.io/projected/7c203fb6-d209-47b4-814f-a98707421288-kube-api-access-mfckv\") pod \"ceilometer-0\" (UID: \"7c203fb6-d209-47b4-814f-a98707421288\") " pod="openstack/ceilometer-0" Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.467962 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c203fb6-d209-47b4-814f-a98707421288-run-httpd\") pod \"ceilometer-0\" (UID: \"7c203fb6-d209-47b4-814f-a98707421288\") " pod="openstack/ceilometer-0" Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.470974 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c203fb6-d209-47b4-814f-a98707421288-log-httpd\") pod \"ceilometer-0\" (UID: \"7c203fb6-d209-47b4-814f-a98707421288\") " pod="openstack/ceilometer-0" Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.473457 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c203fb6-d209-47b4-814f-a98707421288-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7c203fb6-d209-47b4-814f-a98707421288\") " pod="openstack/ceilometer-0" Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.473919 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c203fb6-d209-47b4-814f-a98707421288-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7c203fb6-d209-47b4-814f-a98707421288\") " pod="openstack/ceilometer-0" Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.475702 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c203fb6-d209-47b4-814f-a98707421288-scripts\") pod \"ceilometer-0\" (UID: \"7c203fb6-d209-47b4-814f-a98707421288\") " pod="openstack/ceilometer-0" Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.476438 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c203fb6-d209-47b4-814f-a98707421288-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7c203fb6-d209-47b4-814f-a98707421288\") " pod="openstack/ceilometer-0" Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.477411 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c203fb6-d209-47b4-814f-a98707421288-config-data\") pod \"ceilometer-0\" (UID: \"7c203fb6-d209-47b4-814f-a98707421288\") " pod="openstack/ceilometer-0" Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.498426 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfckv\" (UniqueName: \"kubernetes.io/projected/7c203fb6-d209-47b4-814f-a98707421288-kube-api-access-mfckv\") pod \"ceilometer-0\" (UID: \"7c203fb6-d209-47b4-814f-a98707421288\") " pod="openstack/ceilometer-0" Jan 23 17:37:39 crc kubenswrapper[4606]: I0123 17:37:39.648794 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 17:37:40 crc kubenswrapper[4606]: W0123 17:37:40.247315 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c203fb6_d209_47b4_814f_a98707421288.slice/crio-6b2190b1b530aead96b96e04e51bb555331262da45fedc885412cbdec74aa107 WatchSource:0}: Error finding container 6b2190b1b530aead96b96e04e51bb555331262da45fedc885412cbdec74aa107: Status 404 returned error can't find the container with id 6b2190b1b530aead96b96e04e51bb555331262da45fedc885412cbdec74aa107 Jan 23 17:37:40 crc kubenswrapper[4606]: I0123 17:37:40.248954 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 17:37:40 crc kubenswrapper[4606]: I0123 17:37:40.376973 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2530c45-7c9e-4d61-b5bc-6c02d7f40693" path="/var/lib/kubelet/pods/c2530c45-7c9e-4d61-b5bc-6c02d7f40693/volumes" Jan 23 17:37:41 crc kubenswrapper[4606]: I0123 17:37:41.210774 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c203fb6-d209-47b4-814f-a98707421288","Type":"ContainerStarted","Data":"6b2190b1b530aead96b96e04e51bb555331262da45fedc885412cbdec74aa107"} Jan 23 17:37:42 crc kubenswrapper[4606]: I0123 17:37:42.220369 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c203fb6-d209-47b4-814f-a98707421288","Type":"ContainerStarted","Data":"87cdca0cd2cbe7b9506d583a779602fb13a425ab7b2e1d8f1b8dc09097f3883e"} Jan 23 17:37:45 crc kubenswrapper[4606]: I0123 17:37:45.248213 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c203fb6-d209-47b4-814f-a98707421288","Type":"ContainerStarted","Data":"edb2f92ee65d249a4312e5ab395f781a9169c61731c5515d1772e61481748e2e"} Jan 23 17:37:47 crc kubenswrapper[4606]: I0123 17:37:47.266258 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c203fb6-d209-47b4-814f-a98707421288","Type":"ContainerStarted","Data":"7d30a164b626a0fe68c781d538f26978b45a8c864566519e08c2afe4a8b3a0a9"} Jan 23 17:37:47 crc kubenswrapper[4606]: I0123 17:37:47.366334 4606 scope.go:117] "RemoveContainer" containerID="ae71f5feb61ac0fdc01d5557db5e04028246a8e932ab5e9739ba55dfcdd38258" Jan 23 17:37:47 crc kubenswrapper[4606]: E0123 17:37:47.366928 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:37:49 crc kubenswrapper[4606]: I0123 17:37:49.285370 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c203fb6-d209-47b4-814f-a98707421288","Type":"ContainerStarted","Data":"e07a80982195d043db9390253a8cf112acaaacb8087d90a00bd08384de78848f"} Jan 23 17:37:49 crc kubenswrapper[4606]: I0123 17:37:49.285960 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 17:37:49 crc kubenswrapper[4606]: I0123 17:37:49.316342 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.875376693 podStartE2EDuration="10.316325773s" 
podCreationTimestamp="2026-01-23 17:37:39 +0000 UTC" firstStartedPulling="2026-01-23 17:37:40.250229903 +0000 UTC m=+2694.250422309" lastFinishedPulling="2026-01-23 17:37:48.691178983 +0000 UTC m=+2702.691371389" observedRunningTime="2026-01-23 17:37:49.310547772 +0000 UTC m=+2703.310740178" watchObservedRunningTime="2026-01-23 17:37:49.316325773 +0000 UTC m=+2703.316518179" Jan 23 17:37:58 crc kubenswrapper[4606]: I0123 17:37:58.367033 4606 scope.go:117] "RemoveContainer" containerID="ae71f5feb61ac0fdc01d5557db5e04028246a8e932ab5e9739ba55dfcdd38258" Jan 23 17:37:58 crc kubenswrapper[4606]: E0123 17:37:58.367878 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:38:09 crc kubenswrapper[4606]: I0123 17:38:09.664906 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 23 17:38:10 crc kubenswrapper[4606]: I0123 17:38:10.366355 4606 scope.go:117] "RemoveContainer" containerID="ae71f5feb61ac0fdc01d5557db5e04028246a8e932ab5e9739ba55dfcdd38258" Jan 23 17:38:11 crc kubenswrapper[4606]: I0123 17:38:11.522061 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerStarted","Data":"8dd2abe5e8a6062d16f75c66e57e9b0cfc39229f4a227f5ebb2106bde706ceb3"} Jan 23 17:38:33 crc kubenswrapper[4606]: I0123 17:38:33.488690 4606 scope.go:117] "RemoveContainer" containerID="5b1e10b6cafdc65c98e14b6b1f90af136e21a80eda72b14df17d6aeb7ebfca66" Jan 23 17:38:33 crc kubenswrapper[4606]: I0123 17:38:33.527771 4606 scope.go:117] "RemoveContainer" containerID="92f6266b5f69c6cc004f6c38b1dc21316d744ae557fc4eb62318a462571a5cc8" Jan 23 17:38:33 crc kubenswrapper[4606]: I0123 17:38:33.595027 4606 scope.go:117] "RemoveContainer" containerID="0e47d9a8cd3e40cf1cf9c8af73fe2996517f6c80a303f86745cf140bd618a1a5" Jan 23 17:40:00 crc kubenswrapper[4606]: I0123 17:40:00.576970 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sf5dg" podUID="23ed80ea-dfab-4552-86d1-972563d8d4b6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:40:01 crc kubenswrapper[4606]: I0123 17:40:01.479562 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-n5jq4" podUID="d78dc70d-59e8-4235-b345-f523b414fb31" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 17:40:29 crc kubenswrapper[4606]: I0123 17:40:29.933934 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:40:29 crc kubenswrapper[4606]: I0123 17:40:29.934578 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" 
podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:40:50 crc kubenswrapper[4606]: I0123 17:40:50.329481 4606 generic.go:334] "Generic (PLEG): container finished" podID="56db46d4-844a-44eb-b09e-4512ce955b3f" containerID="081e1c59e3fbdbe4756ade1921ae676618ef6e4c7812a809ebef1f5efecc9d12" exitCode=0 Jan 23 17:40:50 crc kubenswrapper[4606]: I0123 17:40:50.329558 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67r74" event={"ID":"56db46d4-844a-44eb-b09e-4512ce955b3f","Type":"ContainerDied","Data":"081e1c59e3fbdbe4756ade1921ae676618ef6e4c7812a809ebef1f5efecc9d12"} Jan 23 17:40:51 crc kubenswrapper[4606]: I0123 17:40:51.883854 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67r74" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.046863 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-nova-migration-ssh-key-1\") pod \"56db46d4-844a-44eb-b09e-4512ce955b3f\" (UID: \"56db46d4-844a-44eb-b09e-4512ce955b3f\") " Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.047017 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-nova-cell1-compute-config-0\") pod \"56db46d4-844a-44eb-b09e-4512ce955b3f\" (UID: \"56db46d4-844a-44eb-b09e-4512ce955b3f\") " Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.047042 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-ssh-key-openstack-edpm-ipam\") pod \"56db46d4-844a-44eb-b09e-4512ce955b3f\" (UID: \"56db46d4-844a-44eb-b09e-4512ce955b3f\") " Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.047061 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-inventory\") pod \"56db46d4-844a-44eb-b09e-4512ce955b3f\" (UID: \"56db46d4-844a-44eb-b09e-4512ce955b3f\") " Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.047101 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-nova-combined-ca-bundle\") pod \"56db46d4-844a-44eb-b09e-4512ce955b3f\" (UID: \"56db46d4-844a-44eb-b09e-4512ce955b3f\") " Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.047118 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/56db46d4-844a-44eb-b09e-4512ce955b3f-nova-extra-config-0\") pod \"56db46d4-844a-44eb-b09e-4512ce955b3f\" (UID: \"56db46d4-844a-44eb-b09e-4512ce955b3f\") " Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.047170 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-nova-migration-ssh-key-0\") pod \"56db46d4-844a-44eb-b09e-4512ce955b3f\" (UID: 
\"56db46d4-844a-44eb-b09e-4512ce955b3f\") " Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.047209 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfvqd\" (UniqueName: \"kubernetes.io/projected/56db46d4-844a-44eb-b09e-4512ce955b3f-kube-api-access-kfvqd\") pod \"56db46d4-844a-44eb-b09e-4512ce955b3f\" (UID: \"56db46d4-844a-44eb-b09e-4512ce955b3f\") " Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.047252 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-nova-cell1-compute-config-1\") pod \"56db46d4-844a-44eb-b09e-4512ce955b3f\" (UID: \"56db46d4-844a-44eb-b09e-4512ce955b3f\") " Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.070510 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56db46d4-844a-44eb-b09e-4512ce955b3f-kube-api-access-kfvqd" (OuterVolumeSpecName: "kube-api-access-kfvqd") pod "56db46d4-844a-44eb-b09e-4512ce955b3f" (UID: "56db46d4-844a-44eb-b09e-4512ce955b3f"). InnerVolumeSpecName "kube-api-access-kfvqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.074340 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "56db46d4-844a-44eb-b09e-4512ce955b3f" (UID: "56db46d4-844a-44eb-b09e-4512ce955b3f"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.079346 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "56db46d4-844a-44eb-b09e-4512ce955b3f" (UID: "56db46d4-844a-44eb-b09e-4512ce955b3f"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.080506 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "56db46d4-844a-44eb-b09e-4512ce955b3f" (UID: "56db46d4-844a-44eb-b09e-4512ce955b3f"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.083568 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "56db46d4-844a-44eb-b09e-4512ce955b3f" (UID: "56db46d4-844a-44eb-b09e-4512ce955b3f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.084468 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "56db46d4-844a-44eb-b09e-4512ce955b3f" (UID: "56db46d4-844a-44eb-b09e-4512ce955b3f"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.085051 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56db46d4-844a-44eb-b09e-4512ce955b3f-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "56db46d4-844a-44eb-b09e-4512ce955b3f" (UID: "56db46d4-844a-44eb-b09e-4512ce955b3f"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.086579 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-inventory" (OuterVolumeSpecName: "inventory") pod "56db46d4-844a-44eb-b09e-4512ce955b3f" (UID: "56db46d4-844a-44eb-b09e-4512ce955b3f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.109395 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "56db46d4-844a-44eb-b09e-4512ce955b3f" (UID: "56db46d4-844a-44eb-b09e-4512ce955b3f"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.150010 4606 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.150054 4606 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.150067 4606 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.150078 4606 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.150089 4606 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.150101 4606 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.150115 4606 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/56db46d4-844a-44eb-b09e-4512ce955b3f-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.150125 4606 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/56db46d4-844a-44eb-b09e-4512ce955b3f-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.150135 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfvqd\" (UniqueName: \"kubernetes.io/projected/56db46d4-844a-44eb-b09e-4512ce955b3f-kube-api-access-kfvqd\") on node \"crc\" DevicePath \"\"" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.350003 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67r74" event={"ID":"56db46d4-844a-44eb-b09e-4512ce955b3f","Type":"ContainerDied","Data":"9cbf4991dcb129b4183b4f32f85d11dabbd8c142bf2c337a967fa67e7384d221"} Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.350058 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cbf4991dcb129b4183b4f32f85d11dabbd8c142bf2c337a967fa67e7384d221" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.350206 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-67r74" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.483391 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xv982"] Jan 23 17:40:52 crc kubenswrapper[4606]: E0123 17:40:52.483809 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56db46d4-844a-44eb-b09e-4512ce955b3f" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.483827 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="56db46d4-844a-44eb-b09e-4512ce955b3f" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.483990 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="56db46d4-844a-44eb-b09e-4512ce955b3f" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.484825 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xv982" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.487707 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.487707 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.488120 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.488221 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8tp68" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.502859 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.502966 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xv982"] Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.557847 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xv982\" (UID: \"c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xv982" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.558162 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xv982\" (UID: \"c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xv982" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.558292 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg58f\" (UniqueName: \"kubernetes.io/projected/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-kube-api-access-vg58f\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xv982\" (UID: \"c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xv982" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.558550 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xv982\" (UID: \"c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xv982" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.558616 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xv982\" (UID: \"c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xv982" Jan 23 17:40:52 crc 
Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.558811 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xv982\" (UID: \"c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xv982"
Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.661009 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xv982\" (UID: \"c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xv982"
Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.661289 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xv982\" (UID: \"c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xv982"
Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.661467 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xv982\" (UID: \"c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xv982"
Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.661674 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg58f\" (UniqueName: \"kubernetes.io/projected/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-kube-api-access-vg58f\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xv982\" (UID: \"c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xv982"
Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.662188 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xv982\" (UID: \"c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xv982"
Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.662846 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xv982\" (UID: \"c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xv982"
\"telemetry-edpm-deployment-openstack-edpm-ipam-xv982\" (UID: \"c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xv982" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.663320 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xv982\" (UID: \"c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xv982" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.666977 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xv982\" (UID: \"c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xv982" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.667085 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xv982\" (UID: \"c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xv982" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.667560 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xv982\" (UID: \"c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xv982" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.669488 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xv982\" (UID: \"c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xv982" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.669698 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xv982\" (UID: \"c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xv982" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.671137 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xv982\" (UID: \"c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xv982" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.685056 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg58f\" (UniqueName: 
\"kubernetes.io/projected/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-kube-api-access-vg58f\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xv982\" (UID: \"c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xv982" Jan 23 17:40:52 crc kubenswrapper[4606]: I0123 17:40:52.800413 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xv982" Jan 23 17:40:53 crc kubenswrapper[4606]: I0123 17:40:53.422302 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xv982"] Jan 23 17:40:54 crc kubenswrapper[4606]: I0123 17:40:54.381513 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xv982" event={"ID":"c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9","Type":"ContainerStarted","Data":"59ca6268c614de5f441a3309a6f66f80d9f5f8276fcfbbad4de50bcd107c0980"} Jan 23 17:40:54 crc kubenswrapper[4606]: I0123 17:40:54.381845 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xv982" event={"ID":"c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9","Type":"ContainerStarted","Data":"b58bc376ceeb7f7e97ca5db48cbd2c5e5e625f43343e87e845e56ac4fc00b672"} Jan 23 17:40:54 crc kubenswrapper[4606]: I0123 17:40:54.404259 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xv982" podStartSLOduration=1.709044515 podStartE2EDuration="2.404242285s" podCreationTimestamp="2026-01-23 17:40:52 +0000 UTC" firstStartedPulling="2026-01-23 17:40:53.430511537 +0000 UTC m=+2887.430703943" lastFinishedPulling="2026-01-23 17:40:54.125709307 +0000 UTC m=+2888.125901713" observedRunningTime="2026-01-23 17:40:54.396122337 +0000 UTC m=+2888.396314743" watchObservedRunningTime="2026-01-23 17:40:54.404242285 +0000 UTC m=+2888.404434681" Jan 23 17:40:59 crc kubenswrapper[4606]: I0123 17:40:59.934047 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:40:59 crc kubenswrapper[4606]: I0123 17:40:59.934665 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:41:07 crc kubenswrapper[4606]: I0123 17:41:07.651507 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q4nfj"] Jan 23 17:41:07 crc kubenswrapper[4606]: I0123 17:41:07.653888 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q4nfj" Jan 23 17:41:07 crc kubenswrapper[4606]: I0123 17:41:07.679457 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q4nfj"] Jan 23 17:41:07 crc kubenswrapper[4606]: I0123 17:41:07.801518 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgrlr\" (UniqueName: \"kubernetes.io/projected/a94c709c-2642-4c7d-bde0-48d2b8df6b60-kube-api-access-jgrlr\") pod \"community-operators-q4nfj\" (UID: \"a94c709c-2642-4c7d-bde0-48d2b8df6b60\") " pod="openshift-marketplace/community-operators-q4nfj" Jan 23 17:41:07 crc kubenswrapper[4606]: I0123 17:41:07.801647 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a94c709c-2642-4c7d-bde0-48d2b8df6b60-catalog-content\") pod \"community-operators-q4nfj\" (UID: \"a94c709c-2642-4c7d-bde0-48d2b8df6b60\") " pod="openshift-marketplace/community-operators-q4nfj" Jan 23 17:41:07 crc kubenswrapper[4606]: I0123 17:41:07.801691 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a94c709c-2642-4c7d-bde0-48d2b8df6b60-utilities\") pod \"community-operators-q4nfj\" (UID: \"a94c709c-2642-4c7d-bde0-48d2b8df6b60\") " pod="openshift-marketplace/community-operators-q4nfj" Jan 23 17:41:07 crc kubenswrapper[4606]: I0123 17:41:07.903813 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgrlr\" (UniqueName: \"kubernetes.io/projected/a94c709c-2642-4c7d-bde0-48d2b8df6b60-kube-api-access-jgrlr\") pod \"community-operators-q4nfj\" (UID: \"a94c709c-2642-4c7d-bde0-48d2b8df6b60\") " pod="openshift-marketplace/community-operators-q4nfj" Jan 23 17:41:07 crc kubenswrapper[4606]: I0123 17:41:07.903909 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a94c709c-2642-4c7d-bde0-48d2b8df6b60-catalog-content\") pod \"community-operators-q4nfj\" (UID: \"a94c709c-2642-4c7d-bde0-48d2b8df6b60\") " pod="openshift-marketplace/community-operators-q4nfj" Jan 23 17:41:07 crc kubenswrapper[4606]: I0123 17:41:07.903956 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a94c709c-2642-4c7d-bde0-48d2b8df6b60-utilities\") pod \"community-operators-q4nfj\" (UID: \"a94c709c-2642-4c7d-bde0-48d2b8df6b60\") " pod="openshift-marketplace/community-operators-q4nfj" Jan 23 17:41:07 crc kubenswrapper[4606]: I0123 17:41:07.904419 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a94c709c-2642-4c7d-bde0-48d2b8df6b60-utilities\") pod \"community-operators-q4nfj\" (UID: \"a94c709c-2642-4c7d-bde0-48d2b8df6b60\") " pod="openshift-marketplace/community-operators-q4nfj" Jan 23 17:41:07 crc kubenswrapper[4606]: I0123 17:41:07.904883 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a94c709c-2642-4c7d-bde0-48d2b8df6b60-catalog-content\") pod \"community-operators-q4nfj\" (UID: \"a94c709c-2642-4c7d-bde0-48d2b8df6b60\") " pod="openshift-marketplace/community-operators-q4nfj" Jan 23 17:41:07 crc kubenswrapper[4606]: I0123 17:41:07.923957 4606 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jgrlr\" (UniqueName: \"kubernetes.io/projected/a94c709c-2642-4c7d-bde0-48d2b8df6b60-kube-api-access-jgrlr\") pod \"community-operators-q4nfj\" (UID: \"a94c709c-2642-4c7d-bde0-48d2b8df6b60\") " pod="openshift-marketplace/community-operators-q4nfj" Jan 23 17:41:07 crc kubenswrapper[4606]: I0123 17:41:07.989876 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q4nfj" Jan 23 17:41:08 crc kubenswrapper[4606]: I0123 17:41:08.517541 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q4nfj"] Jan 23 17:41:09 crc kubenswrapper[4606]: I0123 17:41:09.530390 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q4nfj" event={"ID":"a94c709c-2642-4c7d-bde0-48d2b8df6b60","Type":"ContainerDied","Data":"86bac494550bd81d06aea6e299e5f8836ca169d14e4cd842ca737902f9bbb3a8"} Jan 23 17:41:09 crc kubenswrapper[4606]: I0123 17:41:09.530236 4606 generic.go:334] "Generic (PLEG): container finished" podID="a94c709c-2642-4c7d-bde0-48d2b8df6b60" containerID="86bac494550bd81d06aea6e299e5f8836ca169d14e4cd842ca737902f9bbb3a8" exitCode=0 Jan 23 17:41:09 crc kubenswrapper[4606]: I0123 17:41:09.531438 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q4nfj" event={"ID":"a94c709c-2642-4c7d-bde0-48d2b8df6b60","Type":"ContainerStarted","Data":"0c2c6913ac15700c772bca8ece4d41c5117b57183bc4c03ae267b1727d1c9545"} Jan 23 17:41:10 crc kubenswrapper[4606]: I0123 17:41:10.542731 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q4nfj" event={"ID":"a94c709c-2642-4c7d-bde0-48d2b8df6b60","Type":"ContainerStarted","Data":"80ecac68796289b3257db40d438c2f2ffe4b99511ff2835e67219bc956629dd5"} Jan 23 17:41:12 crc kubenswrapper[4606]: I0123 17:41:12.563320 4606 generic.go:334] "Generic (PLEG): container finished" podID="a94c709c-2642-4c7d-bde0-48d2b8df6b60" containerID="80ecac68796289b3257db40d438c2f2ffe4b99511ff2835e67219bc956629dd5" exitCode=0 Jan 23 17:41:12 crc kubenswrapper[4606]: I0123 17:41:12.563374 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q4nfj" event={"ID":"a94c709c-2642-4c7d-bde0-48d2b8df6b60","Type":"ContainerDied","Data":"80ecac68796289b3257db40d438c2f2ffe4b99511ff2835e67219bc956629dd5"} Jan 23 17:41:13 crc kubenswrapper[4606]: I0123 17:41:13.579067 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q4nfj" event={"ID":"a94c709c-2642-4c7d-bde0-48d2b8df6b60","Type":"ContainerStarted","Data":"e000b4970eb3c54e5d00e3e40649adc7ca69dadbfbd7d6bf80908c8f859e67a9"} Jan 23 17:41:13 crc kubenswrapper[4606]: I0123 17:41:13.602628 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q4nfj" podStartSLOduration=3.178385154 podStartE2EDuration="6.602610034s" podCreationTimestamp="2026-01-23 17:41:07 +0000 UTC" firstStartedPulling="2026-01-23 17:41:09.532652392 +0000 UTC m=+2903.532844828" lastFinishedPulling="2026-01-23 17:41:12.956877302 +0000 UTC m=+2906.957069708" observedRunningTime="2026-01-23 17:41:13.595002499 +0000 UTC m=+2907.595194905" watchObservedRunningTime="2026-01-23 17:41:13.602610034 +0000 UTC m=+2907.602802440" Jan 23 17:41:17 crc kubenswrapper[4606]: I0123 17:41:17.991040 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-q4nfj" Jan 23 17:41:17 crc kubenswrapper[4606]: I0123 17:41:17.993112 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q4nfj" Jan 23 17:41:18 crc kubenswrapper[4606]: I0123 17:41:18.053109 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q4nfj" Jan 23 17:41:18 crc kubenswrapper[4606]: I0123 17:41:18.706646 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q4nfj" Jan 23 17:41:18 crc kubenswrapper[4606]: I0123 17:41:18.758534 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q4nfj"] Jan 23 17:41:20 crc kubenswrapper[4606]: I0123 17:41:20.661807 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q4nfj" podUID="a94c709c-2642-4c7d-bde0-48d2b8df6b60" containerName="registry-server" containerID="cri-o://e000b4970eb3c54e5d00e3e40649adc7ca69dadbfbd7d6bf80908c8f859e67a9" gracePeriod=2 Jan 23 17:41:21 crc kubenswrapper[4606]: I0123 17:41:21.133753 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q4nfj" Jan 23 17:41:21 crc kubenswrapper[4606]: I0123 17:41:21.286201 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a94c709c-2642-4c7d-bde0-48d2b8df6b60-utilities\") pod \"a94c709c-2642-4c7d-bde0-48d2b8df6b60\" (UID: \"a94c709c-2642-4c7d-bde0-48d2b8df6b60\") " Jan 23 17:41:21 crc kubenswrapper[4606]: I0123 17:41:21.286269 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgrlr\" (UniqueName: \"kubernetes.io/projected/a94c709c-2642-4c7d-bde0-48d2b8df6b60-kube-api-access-jgrlr\") pod \"a94c709c-2642-4c7d-bde0-48d2b8df6b60\" (UID: \"a94c709c-2642-4c7d-bde0-48d2b8df6b60\") " Jan 23 17:41:21 crc kubenswrapper[4606]: I0123 17:41:21.286300 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a94c709c-2642-4c7d-bde0-48d2b8df6b60-catalog-content\") pod \"a94c709c-2642-4c7d-bde0-48d2b8df6b60\" (UID: \"a94c709c-2642-4c7d-bde0-48d2b8df6b60\") " Jan 23 17:41:21 crc kubenswrapper[4606]: I0123 17:41:21.287107 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a94c709c-2642-4c7d-bde0-48d2b8df6b60-utilities" (OuterVolumeSpecName: "utilities") pod "a94c709c-2642-4c7d-bde0-48d2b8df6b60" (UID: "a94c709c-2642-4c7d-bde0-48d2b8df6b60"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:41:21 crc kubenswrapper[4606]: I0123 17:41:21.297738 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a94c709c-2642-4c7d-bde0-48d2b8df6b60-kube-api-access-jgrlr" (OuterVolumeSpecName: "kube-api-access-jgrlr") pod "a94c709c-2642-4c7d-bde0-48d2b8df6b60" (UID: "a94c709c-2642-4c7d-bde0-48d2b8df6b60"). InnerVolumeSpecName "kube-api-access-jgrlr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:41:21 crc kubenswrapper[4606]: I0123 17:41:21.356488 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a94c709c-2642-4c7d-bde0-48d2b8df6b60-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a94c709c-2642-4c7d-bde0-48d2b8df6b60" (UID: "a94c709c-2642-4c7d-bde0-48d2b8df6b60"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:41:21 crc kubenswrapper[4606]: I0123 17:41:21.388506 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a94c709c-2642-4c7d-bde0-48d2b8df6b60-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 17:41:21 crc kubenswrapper[4606]: I0123 17:41:21.388536 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgrlr\" (UniqueName: \"kubernetes.io/projected/a94c709c-2642-4c7d-bde0-48d2b8df6b60-kube-api-access-jgrlr\") on node \"crc\" DevicePath \"\"" Jan 23 17:41:21 crc kubenswrapper[4606]: I0123 17:41:21.388547 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a94c709c-2642-4c7d-bde0-48d2b8df6b60-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 17:41:21 crc kubenswrapper[4606]: I0123 17:41:21.674951 4606 generic.go:334] "Generic (PLEG): container finished" podID="a94c709c-2642-4c7d-bde0-48d2b8df6b60" containerID="e000b4970eb3c54e5d00e3e40649adc7ca69dadbfbd7d6bf80908c8f859e67a9" exitCode=0 Jan 23 17:41:21 crc kubenswrapper[4606]: I0123 17:41:21.675001 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q4nfj" event={"ID":"a94c709c-2642-4c7d-bde0-48d2b8df6b60","Type":"ContainerDied","Data":"e000b4970eb3c54e5d00e3e40649adc7ca69dadbfbd7d6bf80908c8f859e67a9"} Jan 23 17:41:21 crc kubenswrapper[4606]: I0123 17:41:21.675047 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q4nfj" Jan 23 17:41:21 crc kubenswrapper[4606]: I0123 17:41:21.675312 4606 scope.go:117] "RemoveContainer" containerID="e000b4970eb3c54e5d00e3e40649adc7ca69dadbfbd7d6bf80908c8f859e67a9" Jan 23 17:41:21 crc kubenswrapper[4606]: I0123 17:41:21.675292 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q4nfj" event={"ID":"a94c709c-2642-4c7d-bde0-48d2b8df6b60","Type":"ContainerDied","Data":"0c2c6913ac15700c772bca8ece4d41c5117b57183bc4c03ae267b1727d1c9545"} Jan 23 17:41:21 crc kubenswrapper[4606]: I0123 17:41:21.699037 4606 scope.go:117] "RemoveContainer" containerID="80ecac68796289b3257db40d438c2f2ffe4b99511ff2835e67219bc956629dd5" Jan 23 17:41:21 crc kubenswrapper[4606]: I0123 17:41:21.728082 4606 scope.go:117] "RemoveContainer" containerID="86bac494550bd81d06aea6e299e5f8836ca169d14e4cd842ca737902f9bbb3a8" Jan 23 17:41:21 crc kubenswrapper[4606]: I0123 17:41:21.781457 4606 scope.go:117] "RemoveContainer" containerID="e000b4970eb3c54e5d00e3e40649adc7ca69dadbfbd7d6bf80908c8f859e67a9" Jan 23 17:41:21 crc kubenswrapper[4606]: E0123 17:41:21.784357 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e000b4970eb3c54e5d00e3e40649adc7ca69dadbfbd7d6bf80908c8f859e67a9\": container with ID starting with e000b4970eb3c54e5d00e3e40649adc7ca69dadbfbd7d6bf80908c8f859e67a9 not found: ID does not exist" containerID="e000b4970eb3c54e5d00e3e40649adc7ca69dadbfbd7d6bf80908c8f859e67a9" Jan 23 17:41:21 crc kubenswrapper[4606]: I0123 17:41:21.784389 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e000b4970eb3c54e5d00e3e40649adc7ca69dadbfbd7d6bf80908c8f859e67a9"} err="failed to get container status \"e000b4970eb3c54e5d00e3e40649adc7ca69dadbfbd7d6bf80908c8f859e67a9\": rpc error: code = NotFound desc = could not find container \"e000b4970eb3c54e5d00e3e40649adc7ca69dadbfbd7d6bf80908c8f859e67a9\": container with ID starting with e000b4970eb3c54e5d00e3e40649adc7ca69dadbfbd7d6bf80908c8f859e67a9 not found: ID does not exist" Jan 23 17:41:21 crc kubenswrapper[4606]: I0123 17:41:21.784410 4606 scope.go:117] "RemoveContainer" containerID="80ecac68796289b3257db40d438c2f2ffe4b99511ff2835e67219bc956629dd5" Jan 23 17:41:21 crc kubenswrapper[4606]: E0123 17:41:21.787350 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80ecac68796289b3257db40d438c2f2ffe4b99511ff2835e67219bc956629dd5\": container with ID starting with 80ecac68796289b3257db40d438c2f2ffe4b99511ff2835e67219bc956629dd5 not found: ID does not exist" containerID="80ecac68796289b3257db40d438c2f2ffe4b99511ff2835e67219bc956629dd5" Jan 23 17:41:21 crc kubenswrapper[4606]: I0123 17:41:21.787393 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80ecac68796289b3257db40d438c2f2ffe4b99511ff2835e67219bc956629dd5"} err="failed to get container status \"80ecac68796289b3257db40d438c2f2ffe4b99511ff2835e67219bc956629dd5\": rpc error: code = NotFound desc = could not find container \"80ecac68796289b3257db40d438c2f2ffe4b99511ff2835e67219bc956629dd5\": container with ID starting with 80ecac68796289b3257db40d438c2f2ffe4b99511ff2835e67219bc956629dd5 not found: ID does not exist" Jan 23 17:41:21 crc kubenswrapper[4606]: I0123 17:41:21.787412 4606 scope.go:117] "RemoveContainer" 
containerID="86bac494550bd81d06aea6e299e5f8836ca169d14e4cd842ca737902f9bbb3a8" Jan 23 17:41:21 crc kubenswrapper[4606]: I0123 17:41:21.794297 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q4nfj"] Jan 23 17:41:21 crc kubenswrapper[4606]: E0123 17:41:21.796757 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86bac494550bd81d06aea6e299e5f8836ca169d14e4cd842ca737902f9bbb3a8\": container with ID starting with 86bac494550bd81d06aea6e299e5f8836ca169d14e4cd842ca737902f9bbb3a8 not found: ID does not exist" containerID="86bac494550bd81d06aea6e299e5f8836ca169d14e4cd842ca737902f9bbb3a8" Jan 23 17:41:21 crc kubenswrapper[4606]: I0123 17:41:21.796790 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86bac494550bd81d06aea6e299e5f8836ca169d14e4cd842ca737902f9bbb3a8"} err="failed to get container status \"86bac494550bd81d06aea6e299e5f8836ca169d14e4cd842ca737902f9bbb3a8\": rpc error: code = NotFound desc = could not find container \"86bac494550bd81d06aea6e299e5f8836ca169d14e4cd842ca737902f9bbb3a8\": container with ID starting with 86bac494550bd81d06aea6e299e5f8836ca169d14e4cd842ca737902f9bbb3a8 not found: ID does not exist" Jan 23 17:41:21 crc kubenswrapper[4606]: I0123 17:41:21.810049 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q4nfj"] Jan 23 17:41:22 crc kubenswrapper[4606]: I0123 17:41:22.383073 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a94c709c-2642-4c7d-bde0-48d2b8df6b60" path="/var/lib/kubelet/pods/a94c709c-2642-4c7d-bde0-48d2b8df6b60/volumes" Jan 23 17:41:29 crc kubenswrapper[4606]: I0123 17:41:29.934261 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:41:29 crc kubenswrapper[4606]: I0123 17:41:29.934899 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:41:29 crc kubenswrapper[4606]: I0123 17:41:29.934961 4606 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" Jan 23 17:41:29 crc kubenswrapper[4606]: I0123 17:41:29.935748 4606 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8dd2abe5e8a6062d16f75c66e57e9b0cfc39229f4a227f5ebb2106bde706ceb3"} pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 17:41:29 crc kubenswrapper[4606]: I0123 17:41:29.935815 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" containerID="cri-o://8dd2abe5e8a6062d16f75c66e57e9b0cfc39229f4a227f5ebb2106bde706ceb3" gracePeriod=600 Jan 23 17:41:30 crc kubenswrapper[4606]: 
I0123 17:41:30.773478 4606 generic.go:334] "Generic (PLEG): container finished" podID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerID="8dd2abe5e8a6062d16f75c66e57e9b0cfc39229f4a227f5ebb2106bde706ceb3" exitCode=0 Jan 23 17:41:30 crc kubenswrapper[4606]: I0123 17:41:30.773553 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerDied","Data":"8dd2abe5e8a6062d16f75c66e57e9b0cfc39229f4a227f5ebb2106bde706ceb3"} Jan 23 17:41:30 crc kubenswrapper[4606]: I0123 17:41:30.773875 4606 scope.go:117] "RemoveContainer" containerID="ae71f5feb61ac0fdc01d5557db5e04028246a8e932ab5e9739ba55dfcdd38258" Jan 23 17:41:31 crc kubenswrapper[4606]: I0123 17:41:31.789817 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerStarted","Data":"4c4f0c64e879ca1b4dc439da2c1a13bf08272a3825a8fe1999ca7efcc35aff85"} Jan 23 17:41:33 crc kubenswrapper[4606]: I0123 17:41:33.151332 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jsrdr"] Jan 23 17:41:33 crc kubenswrapper[4606]: E0123 17:41:33.152167 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a94c709c-2642-4c7d-bde0-48d2b8df6b60" containerName="registry-server" Jan 23 17:41:33 crc kubenswrapper[4606]: I0123 17:41:33.152186 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="a94c709c-2642-4c7d-bde0-48d2b8df6b60" containerName="registry-server" Jan 23 17:41:33 crc kubenswrapper[4606]: E0123 17:41:33.152233 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a94c709c-2642-4c7d-bde0-48d2b8df6b60" containerName="extract-utilities" Jan 23 17:41:33 crc kubenswrapper[4606]: I0123 17:41:33.152241 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="a94c709c-2642-4c7d-bde0-48d2b8df6b60" containerName="extract-utilities" Jan 23 17:41:33 crc kubenswrapper[4606]: E0123 17:41:33.152261 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a94c709c-2642-4c7d-bde0-48d2b8df6b60" containerName="extract-content" Jan 23 17:41:33 crc kubenswrapper[4606]: I0123 17:41:33.152271 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="a94c709c-2642-4c7d-bde0-48d2b8df6b60" containerName="extract-content" Jan 23 17:41:33 crc kubenswrapper[4606]: I0123 17:41:33.152484 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="a94c709c-2642-4c7d-bde0-48d2b8df6b60" containerName="registry-server" Jan 23 17:41:33 crc kubenswrapper[4606]: I0123 17:41:33.154439 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jsrdr" Jan 23 17:41:33 crc kubenswrapper[4606]: I0123 17:41:33.163289 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jsrdr"] Jan 23 17:41:33 crc kubenswrapper[4606]: I0123 17:41:33.245467 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba8cd453-bdbf-487e-b2b6-be550d6150c5-utilities\") pod \"redhat-operators-jsrdr\" (UID: \"ba8cd453-bdbf-487e-b2b6-be550d6150c5\") " pod="openshift-marketplace/redhat-operators-jsrdr" Jan 23 17:41:33 crc kubenswrapper[4606]: I0123 17:41:33.245554 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba8cd453-bdbf-487e-b2b6-be550d6150c5-catalog-content\") pod \"redhat-operators-jsrdr\" (UID: \"ba8cd453-bdbf-487e-b2b6-be550d6150c5\") " pod="openshift-marketplace/redhat-operators-jsrdr" Jan 23 17:41:33 crc kubenswrapper[4606]: I0123 17:41:33.245621 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfntq\" (UniqueName: \"kubernetes.io/projected/ba8cd453-bdbf-487e-b2b6-be550d6150c5-kube-api-access-nfntq\") pod \"redhat-operators-jsrdr\" (UID: \"ba8cd453-bdbf-487e-b2b6-be550d6150c5\") " pod="openshift-marketplace/redhat-operators-jsrdr" Jan 23 17:41:33 crc kubenswrapper[4606]: I0123 17:41:33.347970 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfntq\" (UniqueName: \"kubernetes.io/projected/ba8cd453-bdbf-487e-b2b6-be550d6150c5-kube-api-access-nfntq\") pod \"redhat-operators-jsrdr\" (UID: \"ba8cd453-bdbf-487e-b2b6-be550d6150c5\") " pod="openshift-marketplace/redhat-operators-jsrdr" Jan 23 17:41:33 crc kubenswrapper[4606]: I0123 17:41:33.348219 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba8cd453-bdbf-487e-b2b6-be550d6150c5-utilities\") pod \"redhat-operators-jsrdr\" (UID: \"ba8cd453-bdbf-487e-b2b6-be550d6150c5\") " pod="openshift-marketplace/redhat-operators-jsrdr" Jan 23 17:41:33 crc kubenswrapper[4606]: I0123 17:41:33.348295 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba8cd453-bdbf-487e-b2b6-be550d6150c5-catalog-content\") pod \"redhat-operators-jsrdr\" (UID: \"ba8cd453-bdbf-487e-b2b6-be550d6150c5\") " pod="openshift-marketplace/redhat-operators-jsrdr" Jan 23 17:41:33 crc kubenswrapper[4606]: I0123 17:41:33.348707 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba8cd453-bdbf-487e-b2b6-be550d6150c5-catalog-content\") pod \"redhat-operators-jsrdr\" (UID: \"ba8cd453-bdbf-487e-b2b6-be550d6150c5\") " pod="openshift-marketplace/redhat-operators-jsrdr" Jan 23 17:41:33 crc kubenswrapper[4606]: I0123 17:41:33.348755 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba8cd453-bdbf-487e-b2b6-be550d6150c5-utilities\") pod \"redhat-operators-jsrdr\" (UID: \"ba8cd453-bdbf-487e-b2b6-be550d6150c5\") " pod="openshift-marketplace/redhat-operators-jsrdr" Jan 23 17:41:33 crc kubenswrapper[4606]: I0123 17:41:33.384451 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nfntq\" (UniqueName: \"kubernetes.io/projected/ba8cd453-bdbf-487e-b2b6-be550d6150c5-kube-api-access-nfntq\") pod \"redhat-operators-jsrdr\" (UID: \"ba8cd453-bdbf-487e-b2b6-be550d6150c5\") " pod="openshift-marketplace/redhat-operators-jsrdr" Jan 23 17:41:33 crc kubenswrapper[4606]: I0123 17:41:33.484531 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jsrdr" Jan 23 17:41:33 crc kubenswrapper[4606]: I0123 17:41:33.922662 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jsrdr"] Jan 23 17:41:33 crc kubenswrapper[4606]: W0123 17:41:33.926556 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba8cd453_bdbf_487e_b2b6_be550d6150c5.slice/crio-fd3cf405921b6b8b13a0f4960f2aa854f426e6f130552d11d50d1fbcbac52b98 WatchSource:0}: Error finding container fd3cf405921b6b8b13a0f4960f2aa854f426e6f130552d11d50d1fbcbac52b98: Status 404 returned error can't find the container with id fd3cf405921b6b8b13a0f4960f2aa854f426e6f130552d11d50d1fbcbac52b98 Jan 23 17:41:34 crc kubenswrapper[4606]: I0123 17:41:34.838245 4606 generic.go:334] "Generic (PLEG): container finished" podID="ba8cd453-bdbf-487e-b2b6-be550d6150c5" containerID="1685b8624a8d11d248d987a90270f5a9f7733a20f9847d81950d5651013181c3" exitCode=0 Jan 23 17:41:34 crc kubenswrapper[4606]: I0123 17:41:34.838332 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jsrdr" event={"ID":"ba8cd453-bdbf-487e-b2b6-be550d6150c5","Type":"ContainerDied","Data":"1685b8624a8d11d248d987a90270f5a9f7733a20f9847d81950d5651013181c3"} Jan 23 17:41:34 crc kubenswrapper[4606]: I0123 17:41:34.838572 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jsrdr" event={"ID":"ba8cd453-bdbf-487e-b2b6-be550d6150c5","Type":"ContainerStarted","Data":"fd3cf405921b6b8b13a0f4960f2aa854f426e6f130552d11d50d1fbcbac52b98"} Jan 23 17:41:34 crc kubenswrapper[4606]: I0123 17:41:34.842016 4606 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 17:41:35 crc kubenswrapper[4606]: I0123 17:41:35.847809 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jsrdr" event={"ID":"ba8cd453-bdbf-487e-b2b6-be550d6150c5","Type":"ContainerStarted","Data":"7313c2c73abb9487ca2a8f18660a96e0852553e4821aefc8969edce526e7673e"} Jan 23 17:41:41 crc kubenswrapper[4606]: I0123 17:41:41.901125 4606 generic.go:334] "Generic (PLEG): container finished" podID="ba8cd453-bdbf-487e-b2b6-be550d6150c5" containerID="7313c2c73abb9487ca2a8f18660a96e0852553e4821aefc8969edce526e7673e" exitCode=0 Jan 23 17:41:41 crc kubenswrapper[4606]: I0123 17:41:41.901227 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jsrdr" event={"ID":"ba8cd453-bdbf-487e-b2b6-be550d6150c5","Type":"ContainerDied","Data":"7313c2c73abb9487ca2a8f18660a96e0852553e4821aefc8969edce526e7673e"} Jan 23 17:41:43 crc kubenswrapper[4606]: I0123 17:41:43.937336 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jsrdr" event={"ID":"ba8cd453-bdbf-487e-b2b6-be550d6150c5","Type":"ContainerStarted","Data":"3ba3b631c614191068a9e89682c8e90c21c02228b55ccd4e3cd1c9153e200a59"} Jan 23 17:41:44 crc kubenswrapper[4606]: I0123 17:41:44.985039 4606 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/redhat-operators-jsrdr" podStartSLOduration=3.599400955 podStartE2EDuration="11.985015637s" podCreationTimestamp="2026-01-23 17:41:33 +0000 UTC" firstStartedPulling="2026-01-23 17:41:34.841736174 +0000 UTC m=+2928.841928580" lastFinishedPulling="2026-01-23 17:41:43.227350846 +0000 UTC m=+2937.227543262" observedRunningTime="2026-01-23 17:41:44.973166437 +0000 UTC m=+2938.973358863" watchObservedRunningTime="2026-01-23 17:41:44.985015637 +0000 UTC m=+2938.985208063" Jan 23 17:41:53 crc kubenswrapper[4606]: I0123 17:41:53.485476 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jsrdr" Jan 23 17:41:53 crc kubenswrapper[4606]: I0123 17:41:53.485924 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jsrdr" Jan 23 17:41:54 crc kubenswrapper[4606]: I0123 17:41:54.531947 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jsrdr" podUID="ba8cd453-bdbf-487e-b2b6-be550d6150c5" containerName="registry-server" probeResult="failure" output=< Jan 23 17:41:54 crc kubenswrapper[4606]: timeout: failed to connect service ":50051" within 1s Jan 23 17:41:54 crc kubenswrapper[4606]: > Jan 23 17:42:03 crc kubenswrapper[4606]: I0123 17:42:03.537310 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jsrdr" Jan 23 17:42:03 crc kubenswrapper[4606]: I0123 17:42:03.595832 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jsrdr" Jan 23 17:42:04 crc kubenswrapper[4606]: I0123 17:42:04.360253 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jsrdr"] Jan 23 17:42:05 crc kubenswrapper[4606]: I0123 17:42:05.138834 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jsrdr" podUID="ba8cd453-bdbf-487e-b2b6-be550d6150c5" containerName="registry-server" containerID="cri-o://3ba3b631c614191068a9e89682c8e90c21c02228b55ccd4e3cd1c9153e200a59" gracePeriod=2 Jan 23 17:42:06 crc kubenswrapper[4606]: I0123 17:42:06.152640 4606 generic.go:334] "Generic (PLEG): container finished" podID="ba8cd453-bdbf-487e-b2b6-be550d6150c5" containerID="3ba3b631c614191068a9e89682c8e90c21c02228b55ccd4e3cd1c9153e200a59" exitCode=0 Jan 23 17:42:06 crc kubenswrapper[4606]: I0123 17:42:06.152693 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jsrdr" event={"ID":"ba8cd453-bdbf-487e-b2b6-be550d6150c5","Type":"ContainerDied","Data":"3ba3b631c614191068a9e89682c8e90c21c02228b55ccd4e3cd1c9153e200a59"} Jan 23 17:42:06 crc kubenswrapper[4606]: I0123 17:42:06.152993 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jsrdr" event={"ID":"ba8cd453-bdbf-487e-b2b6-be550d6150c5","Type":"ContainerDied","Data":"fd3cf405921b6b8b13a0f4960f2aa854f426e6f130552d11d50d1fbcbac52b98"} Jan 23 17:42:06 crc kubenswrapper[4606]: I0123 17:42:06.153008 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd3cf405921b6b8b13a0f4960f2aa854f426e6f130552d11d50d1fbcbac52b98" Jan 23 17:42:06 crc kubenswrapper[4606]: I0123 17:42:06.162237 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jsrdr" Jan 23 17:42:06 crc kubenswrapper[4606]: I0123 17:42:06.342437 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfntq\" (UniqueName: \"kubernetes.io/projected/ba8cd453-bdbf-487e-b2b6-be550d6150c5-kube-api-access-nfntq\") pod \"ba8cd453-bdbf-487e-b2b6-be550d6150c5\" (UID: \"ba8cd453-bdbf-487e-b2b6-be550d6150c5\") " Jan 23 17:42:06 crc kubenswrapper[4606]: I0123 17:42:06.342997 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba8cd453-bdbf-487e-b2b6-be550d6150c5-utilities\") pod \"ba8cd453-bdbf-487e-b2b6-be550d6150c5\" (UID: \"ba8cd453-bdbf-487e-b2b6-be550d6150c5\") " Jan 23 17:42:06 crc kubenswrapper[4606]: I0123 17:42:06.343092 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba8cd453-bdbf-487e-b2b6-be550d6150c5-catalog-content\") pod \"ba8cd453-bdbf-487e-b2b6-be550d6150c5\" (UID: \"ba8cd453-bdbf-487e-b2b6-be550d6150c5\") " Jan 23 17:42:06 crc kubenswrapper[4606]: I0123 17:42:06.344012 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba8cd453-bdbf-487e-b2b6-be550d6150c5-utilities" (OuterVolumeSpecName: "utilities") pod "ba8cd453-bdbf-487e-b2b6-be550d6150c5" (UID: "ba8cd453-bdbf-487e-b2b6-be550d6150c5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:42:06 crc kubenswrapper[4606]: I0123 17:42:06.359733 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba8cd453-bdbf-487e-b2b6-be550d6150c5-kube-api-access-nfntq" (OuterVolumeSpecName: "kube-api-access-nfntq") pod "ba8cd453-bdbf-487e-b2b6-be550d6150c5" (UID: "ba8cd453-bdbf-487e-b2b6-be550d6150c5"). InnerVolumeSpecName "kube-api-access-nfntq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:42:06 crc kubenswrapper[4606]: I0123 17:42:06.452551 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfntq\" (UniqueName: \"kubernetes.io/projected/ba8cd453-bdbf-487e-b2b6-be550d6150c5-kube-api-access-nfntq\") on node \"crc\" DevicePath \"\"" Jan 23 17:42:06 crc kubenswrapper[4606]: I0123 17:42:06.452784 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba8cd453-bdbf-487e-b2b6-be550d6150c5-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 17:42:06 crc kubenswrapper[4606]: I0123 17:42:06.511539 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba8cd453-bdbf-487e-b2b6-be550d6150c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba8cd453-bdbf-487e-b2b6-be550d6150c5" (UID: "ba8cd453-bdbf-487e-b2b6-be550d6150c5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:42:06 crc kubenswrapper[4606]: I0123 17:42:06.555057 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba8cd453-bdbf-487e-b2b6-be550d6150c5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 17:42:07 crc kubenswrapper[4606]: I0123 17:42:07.160932 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jsrdr" Jan 23 17:42:07 crc kubenswrapper[4606]: I0123 17:42:07.200480 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jsrdr"] Jan 23 17:42:07 crc kubenswrapper[4606]: I0123 17:42:07.209769 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jsrdr"] Jan 23 17:42:08 crc kubenswrapper[4606]: I0123 17:42:08.382098 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba8cd453-bdbf-487e-b2b6-be550d6150c5" path="/var/lib/kubelet/pods/ba8cd453-bdbf-487e-b2b6-be550d6150c5/volumes" Jan 23 17:42:57 crc kubenswrapper[4606]: I0123 17:42:57.910092 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t4v5d"] Jan 23 17:42:57 crc kubenswrapper[4606]: E0123 17:42:57.911218 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba8cd453-bdbf-487e-b2b6-be550d6150c5" containerName="extract-utilities" Jan 23 17:42:57 crc kubenswrapper[4606]: I0123 17:42:57.911233 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba8cd453-bdbf-487e-b2b6-be550d6150c5" containerName="extract-utilities" Jan 23 17:42:57 crc kubenswrapper[4606]: E0123 17:42:57.911270 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba8cd453-bdbf-487e-b2b6-be550d6150c5" containerName="registry-server" Jan 23 17:42:57 crc kubenswrapper[4606]: I0123 17:42:57.911279 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba8cd453-bdbf-487e-b2b6-be550d6150c5" containerName="registry-server" Jan 23 17:42:57 crc kubenswrapper[4606]: E0123 17:42:57.911317 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba8cd453-bdbf-487e-b2b6-be550d6150c5" containerName="extract-content" Jan 23 17:42:57 crc kubenswrapper[4606]: I0123 17:42:57.911326 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba8cd453-bdbf-487e-b2b6-be550d6150c5" containerName="extract-content" Jan 23 17:42:57 crc kubenswrapper[4606]: I0123 17:42:57.911669 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba8cd453-bdbf-487e-b2b6-be550d6150c5" containerName="registry-server" Jan 23 17:42:57 crc kubenswrapper[4606]: I0123 17:42:57.913481 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t4v5d" Jan 23 17:42:57 crc kubenswrapper[4606]: I0123 17:42:57.934324 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t4v5d"] Jan 23 17:42:57 crc kubenswrapper[4606]: I0123 17:42:57.937911 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4ghq\" (UniqueName: \"kubernetes.io/projected/580e149b-b1dc-4542-bfda-4b6115557174-kube-api-access-h4ghq\") pod \"certified-operators-t4v5d\" (UID: \"580e149b-b1dc-4542-bfda-4b6115557174\") " pod="openshift-marketplace/certified-operators-t4v5d" Jan 23 17:42:57 crc kubenswrapper[4606]: I0123 17:42:57.938290 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/580e149b-b1dc-4542-bfda-4b6115557174-utilities\") pod \"certified-operators-t4v5d\" (UID: \"580e149b-b1dc-4542-bfda-4b6115557174\") " pod="openshift-marketplace/certified-operators-t4v5d" Jan 23 17:42:57 crc kubenswrapper[4606]: I0123 17:42:57.943002 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/580e149b-b1dc-4542-bfda-4b6115557174-catalog-content\") pod \"certified-operators-t4v5d\" (UID: \"580e149b-b1dc-4542-bfda-4b6115557174\") " pod="openshift-marketplace/certified-operators-t4v5d" Jan 23 17:42:58 crc kubenswrapper[4606]: I0123 17:42:58.044785 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4ghq\" (UniqueName: \"kubernetes.io/projected/580e149b-b1dc-4542-bfda-4b6115557174-kube-api-access-h4ghq\") pod \"certified-operators-t4v5d\" (UID: \"580e149b-b1dc-4542-bfda-4b6115557174\") " pod="openshift-marketplace/certified-operators-t4v5d" Jan 23 17:42:58 crc kubenswrapper[4606]: I0123 17:42:58.045084 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/580e149b-b1dc-4542-bfda-4b6115557174-utilities\") pod \"certified-operators-t4v5d\" (UID: \"580e149b-b1dc-4542-bfda-4b6115557174\") " pod="openshift-marketplace/certified-operators-t4v5d" Jan 23 17:42:58 crc kubenswrapper[4606]: I0123 17:42:58.045177 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/580e149b-b1dc-4542-bfda-4b6115557174-catalog-content\") pod \"certified-operators-t4v5d\" (UID: \"580e149b-b1dc-4542-bfda-4b6115557174\") " pod="openshift-marketplace/certified-operators-t4v5d" Jan 23 17:42:58 crc kubenswrapper[4606]: I0123 17:42:58.045773 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/580e149b-b1dc-4542-bfda-4b6115557174-catalog-content\") pod \"certified-operators-t4v5d\" (UID: \"580e149b-b1dc-4542-bfda-4b6115557174\") " pod="openshift-marketplace/certified-operators-t4v5d" Jan 23 17:42:58 crc kubenswrapper[4606]: I0123 17:42:58.045847 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/580e149b-b1dc-4542-bfda-4b6115557174-utilities\") pod \"certified-operators-t4v5d\" (UID: \"580e149b-b1dc-4542-bfda-4b6115557174\") " pod="openshift-marketplace/certified-operators-t4v5d" Jan 23 17:42:58 crc kubenswrapper[4606]: I0123 17:42:58.083234 4606 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-h4ghq\" (UniqueName: \"kubernetes.io/projected/580e149b-b1dc-4542-bfda-4b6115557174-kube-api-access-h4ghq\") pod \"certified-operators-t4v5d\" (UID: \"580e149b-b1dc-4542-bfda-4b6115557174\") " pod="openshift-marketplace/certified-operators-t4v5d" Jan 23 17:42:58 crc kubenswrapper[4606]: I0123 17:42:58.250377 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t4v5d" Jan 23 17:42:58 crc kubenswrapper[4606]: I0123 17:42:58.786048 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t4v5d"] Jan 23 17:42:59 crc kubenswrapper[4606]: I0123 17:42:59.037944 4606 generic.go:334] "Generic (PLEG): container finished" podID="580e149b-b1dc-4542-bfda-4b6115557174" containerID="014f0ae4d00eff9657e5e8629a68cd1187ef6d07e0a7cf9ae433e37019bdaf6c" exitCode=0 Jan 23 17:42:59 crc kubenswrapper[4606]: I0123 17:42:59.038011 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4v5d" event={"ID":"580e149b-b1dc-4542-bfda-4b6115557174","Type":"ContainerDied","Data":"014f0ae4d00eff9657e5e8629a68cd1187ef6d07e0a7cf9ae433e37019bdaf6c"} Jan 23 17:42:59 crc kubenswrapper[4606]: I0123 17:42:59.038048 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4v5d" event={"ID":"580e149b-b1dc-4542-bfda-4b6115557174","Type":"ContainerStarted","Data":"455db75dddd7f1264b881ee0c566bbb415f51c883377618ed42a447b955fd68b"} Jan 23 17:43:00 crc kubenswrapper[4606]: I0123 17:43:00.047863 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4v5d" event={"ID":"580e149b-b1dc-4542-bfda-4b6115557174","Type":"ContainerStarted","Data":"7c09c607d787e23bd9df16e7b67c45b994760539ff0a4ee5543ef81ef52c8618"} Jan 23 17:43:01 crc kubenswrapper[4606]: I0123 17:43:01.060661 4606 generic.go:334] "Generic (PLEG): container finished" podID="580e149b-b1dc-4542-bfda-4b6115557174" containerID="7c09c607d787e23bd9df16e7b67c45b994760539ff0a4ee5543ef81ef52c8618" exitCode=0 Jan 23 17:43:01 crc kubenswrapper[4606]: I0123 17:43:01.060724 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4v5d" event={"ID":"580e149b-b1dc-4542-bfda-4b6115557174","Type":"ContainerDied","Data":"7c09c607d787e23bd9df16e7b67c45b994760539ff0a4ee5543ef81ef52c8618"} Jan 23 17:43:02 crc kubenswrapper[4606]: I0123 17:43:02.072493 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4v5d" event={"ID":"580e149b-b1dc-4542-bfda-4b6115557174","Type":"ContainerStarted","Data":"42fb71b7cbf20bf3902a3c4413cfead524597adacd53134190dbf9d03c4e16aa"} Jan 23 17:43:08 crc kubenswrapper[4606]: I0123 17:43:08.252126 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t4v5d" Jan 23 17:43:08 crc kubenswrapper[4606]: I0123 17:43:08.252799 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t4v5d" Jan 23 17:43:08 crc kubenswrapper[4606]: I0123 17:43:08.321249 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t4v5d" Jan 23 17:43:08 crc kubenswrapper[4606]: I0123 17:43:08.351366 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-t4v5d" podStartSLOduration=8.941516711 podStartE2EDuration="11.351341598s" podCreationTimestamp="2026-01-23 17:42:57 +0000 UTC" firstStartedPulling="2026-01-23 17:42:59.03980356 +0000 UTC m=+3013.039995966" lastFinishedPulling="2026-01-23 17:43:01.449628437 +0000 UTC m=+3015.449820853" observedRunningTime="2026-01-23 17:43:02.09552399 +0000 UTC m=+3016.095716416" watchObservedRunningTime="2026-01-23 17:43:08.351341598 +0000 UTC m=+3022.351534004" Jan 23 17:43:09 crc kubenswrapper[4606]: I0123 17:43:09.182684 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t4v5d" Jan 23 17:43:09 crc kubenswrapper[4606]: I0123 17:43:09.249253 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t4v5d"] Jan 23 17:43:11 crc kubenswrapper[4606]: I0123 17:43:11.158158 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t4v5d" podUID="580e149b-b1dc-4542-bfda-4b6115557174" containerName="registry-server" containerID="cri-o://42fb71b7cbf20bf3902a3c4413cfead524597adacd53134190dbf9d03c4e16aa" gracePeriod=2 Jan 23 17:43:12 crc kubenswrapper[4606]: I0123 17:43:12.142040 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t4v5d" Jan 23 17:43:12 crc kubenswrapper[4606]: I0123 17:43:12.173520 4606 generic.go:334] "Generic (PLEG): container finished" podID="580e149b-b1dc-4542-bfda-4b6115557174" containerID="42fb71b7cbf20bf3902a3c4413cfead524597adacd53134190dbf9d03c4e16aa" exitCode=0 Jan 23 17:43:12 crc kubenswrapper[4606]: I0123 17:43:12.173623 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4v5d" event={"ID":"580e149b-b1dc-4542-bfda-4b6115557174","Type":"ContainerDied","Data":"42fb71b7cbf20bf3902a3c4413cfead524597adacd53134190dbf9d03c4e16aa"} Jan 23 17:43:12 crc kubenswrapper[4606]: I0123 17:43:12.173666 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4v5d" event={"ID":"580e149b-b1dc-4542-bfda-4b6115557174","Type":"ContainerDied","Data":"455db75dddd7f1264b881ee0c566bbb415f51c883377618ed42a447b955fd68b"} Jan 23 17:43:12 crc kubenswrapper[4606]: I0123 17:43:12.173701 4606 scope.go:117] "RemoveContainer" containerID="42fb71b7cbf20bf3902a3c4413cfead524597adacd53134190dbf9d03c4e16aa" Jan 23 17:43:12 crc kubenswrapper[4606]: I0123 17:43:12.173704 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t4v5d" Jan 23 17:43:12 crc kubenswrapper[4606]: I0123 17:43:12.200818 4606 scope.go:117] "RemoveContainer" containerID="7c09c607d787e23bd9df16e7b67c45b994760539ff0a4ee5543ef81ef52c8618" Jan 23 17:43:12 crc kubenswrapper[4606]: I0123 17:43:12.225900 4606 scope.go:117] "RemoveContainer" containerID="014f0ae4d00eff9657e5e8629a68cd1187ef6d07e0a7cf9ae433e37019bdaf6c" Jan 23 17:43:12 crc kubenswrapper[4606]: I0123 17:43:12.285123 4606 scope.go:117] "RemoveContainer" containerID="42fb71b7cbf20bf3902a3c4413cfead524597adacd53134190dbf9d03c4e16aa" Jan 23 17:43:12 crc kubenswrapper[4606]: E0123 17:43:12.285715 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42fb71b7cbf20bf3902a3c4413cfead524597adacd53134190dbf9d03c4e16aa\": container with ID starting with 42fb71b7cbf20bf3902a3c4413cfead524597adacd53134190dbf9d03c4e16aa not found: ID does not exist" containerID="42fb71b7cbf20bf3902a3c4413cfead524597adacd53134190dbf9d03c4e16aa" Jan 23 17:43:12 crc kubenswrapper[4606]: I0123 17:43:12.285797 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42fb71b7cbf20bf3902a3c4413cfead524597adacd53134190dbf9d03c4e16aa"} err="failed to get container status \"42fb71b7cbf20bf3902a3c4413cfead524597adacd53134190dbf9d03c4e16aa\": rpc error: code = NotFound desc = could not find container \"42fb71b7cbf20bf3902a3c4413cfead524597adacd53134190dbf9d03c4e16aa\": container with ID starting with 42fb71b7cbf20bf3902a3c4413cfead524597adacd53134190dbf9d03c4e16aa not found: ID does not exist" Jan 23 17:43:12 crc kubenswrapper[4606]: I0123 17:43:12.285831 4606 scope.go:117] "RemoveContainer" containerID="7c09c607d787e23bd9df16e7b67c45b994760539ff0a4ee5543ef81ef52c8618" Jan 23 17:43:12 crc kubenswrapper[4606]: E0123 17:43:12.286454 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c09c607d787e23bd9df16e7b67c45b994760539ff0a4ee5543ef81ef52c8618\": container with ID starting with 7c09c607d787e23bd9df16e7b67c45b994760539ff0a4ee5543ef81ef52c8618 not found: ID does not exist" containerID="7c09c607d787e23bd9df16e7b67c45b994760539ff0a4ee5543ef81ef52c8618" Jan 23 17:43:12 crc kubenswrapper[4606]: I0123 17:43:12.286493 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c09c607d787e23bd9df16e7b67c45b994760539ff0a4ee5543ef81ef52c8618"} err="failed to get container status \"7c09c607d787e23bd9df16e7b67c45b994760539ff0a4ee5543ef81ef52c8618\": rpc error: code = NotFound desc = could not find container \"7c09c607d787e23bd9df16e7b67c45b994760539ff0a4ee5543ef81ef52c8618\": container with ID starting with 7c09c607d787e23bd9df16e7b67c45b994760539ff0a4ee5543ef81ef52c8618 not found: ID does not exist" Jan 23 17:43:12 crc kubenswrapper[4606]: I0123 17:43:12.286523 4606 scope.go:117] "RemoveContainer" containerID="014f0ae4d00eff9657e5e8629a68cd1187ef6d07e0a7cf9ae433e37019bdaf6c" Jan 23 17:43:12 crc kubenswrapper[4606]: E0123 17:43:12.287555 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"014f0ae4d00eff9657e5e8629a68cd1187ef6d07e0a7cf9ae433e37019bdaf6c\": container with ID starting with 014f0ae4d00eff9657e5e8629a68cd1187ef6d07e0a7cf9ae433e37019bdaf6c not found: ID does not exist" containerID="014f0ae4d00eff9657e5e8629a68cd1187ef6d07e0a7cf9ae433e37019bdaf6c" 
Jan 23 17:43:12 crc kubenswrapper[4606]: I0123 17:43:12.287611 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"014f0ae4d00eff9657e5e8629a68cd1187ef6d07e0a7cf9ae433e37019bdaf6c"} err="failed to get container status \"014f0ae4d00eff9657e5e8629a68cd1187ef6d07e0a7cf9ae433e37019bdaf6c\": rpc error: code = NotFound desc = could not find container \"014f0ae4d00eff9657e5e8629a68cd1187ef6d07e0a7cf9ae433e37019bdaf6c\": container with ID starting with 014f0ae4d00eff9657e5e8629a68cd1187ef6d07e0a7cf9ae433e37019bdaf6c not found: ID does not exist" Jan 23 17:43:12 crc kubenswrapper[4606]: I0123 17:43:12.345319 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/580e149b-b1dc-4542-bfda-4b6115557174-utilities\") pod \"580e149b-b1dc-4542-bfda-4b6115557174\" (UID: \"580e149b-b1dc-4542-bfda-4b6115557174\") " Jan 23 17:43:12 crc kubenswrapper[4606]: I0123 17:43:12.345688 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4ghq\" (UniqueName: \"kubernetes.io/projected/580e149b-b1dc-4542-bfda-4b6115557174-kube-api-access-h4ghq\") pod \"580e149b-b1dc-4542-bfda-4b6115557174\" (UID: \"580e149b-b1dc-4542-bfda-4b6115557174\") " Jan 23 17:43:12 crc kubenswrapper[4606]: I0123 17:43:12.345815 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/580e149b-b1dc-4542-bfda-4b6115557174-catalog-content\") pod \"580e149b-b1dc-4542-bfda-4b6115557174\" (UID: \"580e149b-b1dc-4542-bfda-4b6115557174\") " Jan 23 17:43:12 crc kubenswrapper[4606]: I0123 17:43:12.346313 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/580e149b-b1dc-4542-bfda-4b6115557174-utilities" (OuterVolumeSpecName: "utilities") pod "580e149b-b1dc-4542-bfda-4b6115557174" (UID: "580e149b-b1dc-4542-bfda-4b6115557174"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:43:12 crc kubenswrapper[4606]: I0123 17:43:12.346852 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/580e149b-b1dc-4542-bfda-4b6115557174-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 17:43:12 crc kubenswrapper[4606]: I0123 17:43:12.351489 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/580e149b-b1dc-4542-bfda-4b6115557174-kube-api-access-h4ghq" (OuterVolumeSpecName: "kube-api-access-h4ghq") pod "580e149b-b1dc-4542-bfda-4b6115557174" (UID: "580e149b-b1dc-4542-bfda-4b6115557174"). InnerVolumeSpecName "kube-api-access-h4ghq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:43:12 crc kubenswrapper[4606]: I0123 17:43:12.400997 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/580e149b-b1dc-4542-bfda-4b6115557174-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "580e149b-b1dc-4542-bfda-4b6115557174" (UID: "580e149b-b1dc-4542-bfda-4b6115557174"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:43:12 crc kubenswrapper[4606]: I0123 17:43:12.449988 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4ghq\" (UniqueName: \"kubernetes.io/projected/580e149b-b1dc-4542-bfda-4b6115557174-kube-api-access-h4ghq\") on node \"crc\" DevicePath \"\"" Jan 23 17:43:12 crc kubenswrapper[4606]: I0123 17:43:12.450160 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/580e149b-b1dc-4542-bfda-4b6115557174-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 17:43:12 crc kubenswrapper[4606]: I0123 17:43:12.531466 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t4v5d"] Jan 23 17:43:12 crc kubenswrapper[4606]: I0123 17:43:12.543996 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t4v5d"] Jan 23 17:43:14 crc kubenswrapper[4606]: I0123 17:43:14.380872 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="580e149b-b1dc-4542-bfda-4b6115557174" path="/var/lib/kubelet/pods/580e149b-b1dc-4542-bfda-4b6115557174/volumes" Jan 23 17:43:59 crc kubenswrapper[4606]: I0123 17:43:59.948781 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:43:59 crc kubenswrapper[4606]: I0123 17:43:59.949324 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:44:29 crc kubenswrapper[4606]: I0123 17:44:29.933860 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:44:29 crc kubenswrapper[4606]: I0123 17:44:29.934415 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:44:59 crc kubenswrapper[4606]: I0123 17:44:59.934575 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:44:59 crc kubenswrapper[4606]: I0123 17:44:59.935392 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:44:59 crc kubenswrapper[4606]: I0123 17:44:59.935458 4606 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" Jan 23 17:44:59 crc kubenswrapper[4606]: I0123 17:44:59.936497 4606 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4c4f0c64e879ca1b4dc439da2c1a13bf08272a3825a8fe1999ca7efcc35aff85"} pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 17:44:59 crc kubenswrapper[4606]: I0123 17:44:59.936623 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" containerID="cri-o://4c4f0c64e879ca1b4dc439da2c1a13bf08272a3825a8fe1999ca7efcc35aff85" gracePeriod=600 Jan 23 17:45:00 crc kubenswrapper[4606]: E0123 17:45:00.074366 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:45:00 crc kubenswrapper[4606]: I0123 17:45:00.206533 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486505-zvlhx"] Jan 23 17:45:00 crc kubenswrapper[4606]: E0123 17:45:00.206984 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="580e149b-b1dc-4542-bfda-4b6115557174" containerName="extract-content" Jan 23 17:45:00 crc kubenswrapper[4606]: I0123 17:45:00.207002 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="580e149b-b1dc-4542-bfda-4b6115557174" containerName="extract-content" Jan 23 17:45:00 crc kubenswrapper[4606]: E0123 17:45:00.207031 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="580e149b-b1dc-4542-bfda-4b6115557174" containerName="registry-server" Jan 23 17:45:00 crc kubenswrapper[4606]: I0123 17:45:00.207037 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="580e149b-b1dc-4542-bfda-4b6115557174" containerName="registry-server" Jan 23 17:45:00 crc kubenswrapper[4606]: E0123 17:45:00.207048 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="580e149b-b1dc-4542-bfda-4b6115557174" containerName="extract-utilities" Jan 23 17:45:00 crc kubenswrapper[4606]: I0123 17:45:00.207056 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="580e149b-b1dc-4542-bfda-4b6115557174" containerName="extract-utilities" Jan 23 17:45:00 crc kubenswrapper[4606]: I0123 17:45:00.207280 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="580e149b-b1dc-4542-bfda-4b6115557174" containerName="registry-server" Jan 23 17:45:00 crc kubenswrapper[4606]: I0123 17:45:00.208171 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486505-zvlhx" Jan 23 17:45:00 crc kubenswrapper[4606]: I0123 17:45:00.212372 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 17:45:00 crc kubenswrapper[4606]: I0123 17:45:00.213624 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 17:45:00 crc kubenswrapper[4606]: I0123 17:45:00.239814 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486505-zvlhx"] Jan 23 17:45:00 crc kubenswrapper[4606]: I0123 17:45:00.371101 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg6bf\" (UniqueName: \"kubernetes.io/projected/12534266-507c-405c-a579-817e94fcadf2-kube-api-access-jg6bf\") pod \"collect-profiles-29486505-zvlhx\" (UID: \"12534266-507c-405c-a579-817e94fcadf2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486505-zvlhx" Jan 23 17:45:00 crc kubenswrapper[4606]: I0123 17:45:00.371167 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/12534266-507c-405c-a579-817e94fcadf2-secret-volume\") pod \"collect-profiles-29486505-zvlhx\" (UID: \"12534266-507c-405c-a579-817e94fcadf2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486505-zvlhx" Jan 23 17:45:00 crc kubenswrapper[4606]: I0123 17:45:00.371294 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12534266-507c-405c-a579-817e94fcadf2-config-volume\") pod \"collect-profiles-29486505-zvlhx\" (UID: \"12534266-507c-405c-a579-817e94fcadf2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486505-zvlhx" Jan 23 17:45:00 crc kubenswrapper[4606]: I0123 17:45:00.382240 4606 generic.go:334] "Generic (PLEG): container finished" podID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerID="4c4f0c64e879ca1b4dc439da2c1a13bf08272a3825a8fe1999ca7efcc35aff85" exitCode=0 Jan 23 17:45:00 crc kubenswrapper[4606]: I0123 17:45:00.382316 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerDied","Data":"4c4f0c64e879ca1b4dc439da2c1a13bf08272a3825a8fe1999ca7efcc35aff85"} Jan 23 17:45:00 crc kubenswrapper[4606]: I0123 17:45:00.382386 4606 scope.go:117] "RemoveContainer" containerID="8dd2abe5e8a6062d16f75c66e57e9b0cfc39229f4a227f5ebb2106bde706ceb3" Jan 23 17:45:00 crc kubenswrapper[4606]: I0123 17:45:00.383702 4606 scope.go:117] "RemoveContainer" containerID="4c4f0c64e879ca1b4dc439da2c1a13bf08272a3825a8fe1999ca7efcc35aff85" Jan 23 17:45:00 crc kubenswrapper[4606]: E0123 17:45:00.384130 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:45:00 crc kubenswrapper[4606]: I0123 17:45:00.472914 4606 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/12534266-507c-405c-a579-817e94fcadf2-secret-volume\") pod \"collect-profiles-29486505-zvlhx\" (UID: \"12534266-507c-405c-a579-817e94fcadf2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486505-zvlhx" Jan 23 17:45:00 crc kubenswrapper[4606]: I0123 17:45:00.473120 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12534266-507c-405c-a579-817e94fcadf2-config-volume\") pod \"collect-profiles-29486505-zvlhx\" (UID: \"12534266-507c-405c-a579-817e94fcadf2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486505-zvlhx" Jan 23 17:45:00 crc kubenswrapper[4606]: I0123 17:45:00.473238 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg6bf\" (UniqueName: \"kubernetes.io/projected/12534266-507c-405c-a579-817e94fcadf2-kube-api-access-jg6bf\") pod \"collect-profiles-29486505-zvlhx\" (UID: \"12534266-507c-405c-a579-817e94fcadf2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486505-zvlhx" Jan 23 17:45:00 crc kubenswrapper[4606]: I0123 17:45:00.475499 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12534266-507c-405c-a579-817e94fcadf2-config-volume\") pod \"collect-profiles-29486505-zvlhx\" (UID: \"12534266-507c-405c-a579-817e94fcadf2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486505-zvlhx" Jan 23 17:45:00 crc kubenswrapper[4606]: I0123 17:45:00.483191 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/12534266-507c-405c-a579-817e94fcadf2-secret-volume\") pod \"collect-profiles-29486505-zvlhx\" (UID: \"12534266-507c-405c-a579-817e94fcadf2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486505-zvlhx" Jan 23 17:45:00 crc kubenswrapper[4606]: I0123 17:45:00.498214 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg6bf\" (UniqueName: \"kubernetes.io/projected/12534266-507c-405c-a579-817e94fcadf2-kube-api-access-jg6bf\") pod \"collect-profiles-29486505-zvlhx\" (UID: \"12534266-507c-405c-a579-817e94fcadf2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486505-zvlhx" Jan 23 17:45:00 crc kubenswrapper[4606]: I0123 17:45:00.526152 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486505-zvlhx" Jan 23 17:45:01 crc kubenswrapper[4606]: I0123 17:45:01.012206 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486505-zvlhx"] Jan 23 17:45:01 crc kubenswrapper[4606]: I0123 17:45:01.393309 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486505-zvlhx" event={"ID":"12534266-507c-405c-a579-817e94fcadf2","Type":"ContainerStarted","Data":"f2356e58783a40c3725e0039cbeb072e1104e74eb27b066e88f97215a8b9fd9b"} Jan 23 17:45:01 crc kubenswrapper[4606]: I0123 17:45:01.393368 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486505-zvlhx" event={"ID":"12534266-507c-405c-a579-817e94fcadf2","Type":"ContainerStarted","Data":"bd107eee060fc27583fe2890c40793d41923d01b46662f06fd5d856f161d3077"} Jan 23 17:45:01 crc kubenswrapper[4606]: I0123 17:45:01.419010 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29486505-zvlhx" podStartSLOduration=1.418979515 podStartE2EDuration="1.418979515s" podCreationTimestamp="2026-01-23 17:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:45:01.409965974 +0000 UTC m=+3135.410158380" watchObservedRunningTime="2026-01-23 17:45:01.418979515 +0000 UTC m=+3135.419171921" Jan 23 17:45:02 crc kubenswrapper[4606]: I0123 17:45:02.411403 4606 generic.go:334] "Generic (PLEG): container finished" podID="12534266-507c-405c-a579-817e94fcadf2" containerID="f2356e58783a40c3725e0039cbeb072e1104e74eb27b066e88f97215a8b9fd9b" exitCode=0 Jan 23 17:45:02 crc kubenswrapper[4606]: I0123 17:45:02.411683 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486505-zvlhx" event={"ID":"12534266-507c-405c-a579-817e94fcadf2","Type":"ContainerDied","Data":"f2356e58783a40c3725e0039cbeb072e1104e74eb27b066e88f97215a8b9fd9b"} Jan 23 17:45:03 crc kubenswrapper[4606]: I0123 17:45:03.829752 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486505-zvlhx" Jan 23 17:45:03 crc kubenswrapper[4606]: I0123 17:45:03.961482 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12534266-507c-405c-a579-817e94fcadf2-config-volume\") pod \"12534266-507c-405c-a579-817e94fcadf2\" (UID: \"12534266-507c-405c-a579-817e94fcadf2\") " Jan 23 17:45:03 crc kubenswrapper[4606]: I0123 17:45:03.961542 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jg6bf\" (UniqueName: \"kubernetes.io/projected/12534266-507c-405c-a579-817e94fcadf2-kube-api-access-jg6bf\") pod \"12534266-507c-405c-a579-817e94fcadf2\" (UID: \"12534266-507c-405c-a579-817e94fcadf2\") " Jan 23 17:45:03 crc kubenswrapper[4606]: I0123 17:45:03.961622 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/12534266-507c-405c-a579-817e94fcadf2-secret-volume\") pod \"12534266-507c-405c-a579-817e94fcadf2\" (UID: \"12534266-507c-405c-a579-817e94fcadf2\") " Jan 23 17:45:03 crc kubenswrapper[4606]: I0123 17:45:03.962208 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12534266-507c-405c-a579-817e94fcadf2-config-volume" (OuterVolumeSpecName: "config-volume") pod "12534266-507c-405c-a579-817e94fcadf2" (UID: "12534266-507c-405c-a579-817e94fcadf2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 17:45:03 crc kubenswrapper[4606]: I0123 17:45:03.967146 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12534266-507c-405c-a579-817e94fcadf2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "12534266-507c-405c-a579-817e94fcadf2" (UID: "12534266-507c-405c-a579-817e94fcadf2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:45:03 crc kubenswrapper[4606]: I0123 17:45:03.967568 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12534266-507c-405c-a579-817e94fcadf2-kube-api-access-jg6bf" (OuterVolumeSpecName: "kube-api-access-jg6bf") pod "12534266-507c-405c-a579-817e94fcadf2" (UID: "12534266-507c-405c-a579-817e94fcadf2"). InnerVolumeSpecName "kube-api-access-jg6bf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:45:04 crc kubenswrapper[4606]: I0123 17:45:04.064384 4606 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12534266-507c-405c-a579-817e94fcadf2-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 17:45:04 crc kubenswrapper[4606]: I0123 17:45:04.064419 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jg6bf\" (UniqueName: \"kubernetes.io/projected/12534266-507c-405c-a579-817e94fcadf2-kube-api-access-jg6bf\") on node \"crc\" DevicePath \"\"" Jan 23 17:45:04 crc kubenswrapper[4606]: I0123 17:45:04.064430 4606 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/12534266-507c-405c-a579-817e94fcadf2-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 17:45:04 crc kubenswrapper[4606]: I0123 17:45:04.433344 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486505-zvlhx" event={"ID":"12534266-507c-405c-a579-817e94fcadf2","Type":"ContainerDied","Data":"bd107eee060fc27583fe2890c40793d41923d01b46662f06fd5d856f161d3077"} Jan 23 17:45:04 crc kubenswrapper[4606]: I0123 17:45:04.433393 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd107eee060fc27583fe2890c40793d41923d01b46662f06fd5d856f161d3077" Jan 23 17:45:04 crc kubenswrapper[4606]: I0123 17:45:04.433418 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486505-zvlhx" Jan 23 17:45:04 crc kubenswrapper[4606]: I0123 17:45:04.509734 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486460-vmbzw"] Jan 23 17:45:04 crc kubenswrapper[4606]: I0123 17:45:04.518319 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486460-vmbzw"] Jan 23 17:45:06 crc kubenswrapper[4606]: I0123 17:45:06.394799 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65917137-b412-4309-97ac-899450da05f4" path="/var/lib/kubelet/pods/65917137-b412-4309-97ac-899450da05f4/volumes" Jan 23 17:45:13 crc kubenswrapper[4606]: I0123 17:45:13.365849 4606 scope.go:117] "RemoveContainer" containerID="4c4f0c64e879ca1b4dc439da2c1a13bf08272a3825a8fe1999ca7efcc35aff85" Jan 23 17:45:13 crc kubenswrapper[4606]: E0123 17:45:13.366949 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:45:24 crc kubenswrapper[4606]: I0123 17:45:24.366371 4606 scope.go:117] "RemoveContainer" containerID="4c4f0c64e879ca1b4dc439da2c1a13bf08272a3825a8fe1999ca7efcc35aff85" Jan 23 17:45:24 crc kubenswrapper[4606]: E0123 17:45:24.367071 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:45:33 crc kubenswrapper[4606]: I0123 17:45:33.932844 4606 scope.go:117] "RemoveContainer" containerID="e8adc24d9125a3d67c711a647589a5eb3df40d11cd4dd0afe988d823e0dabe96" Jan 23 17:45:38 crc kubenswrapper[4606]: I0123 17:45:38.365645 4606 scope.go:117] "RemoveContainer" containerID="4c4f0c64e879ca1b4dc439da2c1a13bf08272a3825a8fe1999ca7efcc35aff85" Jan 23 17:45:38 crc kubenswrapper[4606]: E0123 17:45:38.366518 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:45:51 crc kubenswrapper[4606]: I0123 17:45:51.938085 4606 generic.go:334] "Generic (PLEG): container finished" podID="c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9" containerID="59ca6268c614de5f441a3309a6f66f80d9f5f8276fcfbbad4de50bcd107c0980" exitCode=0 Jan 23 17:45:51 crc kubenswrapper[4606]: I0123 17:45:51.938164 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xv982" event={"ID":"c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9","Type":"ContainerDied","Data":"59ca6268c614de5f441a3309a6f66f80d9f5f8276fcfbbad4de50bcd107c0980"} Jan 23 17:45:53 crc kubenswrapper[4606]: I0123 17:45:53.366366 4606 scope.go:117] "RemoveContainer" containerID="4c4f0c64e879ca1b4dc439da2c1a13bf08272a3825a8fe1999ca7efcc35aff85" Jan 23 17:45:53 crc kubenswrapper[4606]: E0123 17:45:53.367105 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:45:53 crc kubenswrapper[4606]: I0123 17:45:53.367551 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xv982" Jan 23 17:45:53 crc kubenswrapper[4606]: I0123 17:45:53.456334 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-ceilometer-compute-config-data-1\") pod \"c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9\" (UID: \"c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9\") " Jan 23 17:45:53 crc kubenswrapper[4606]: I0123 17:45:53.456463 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vg58f\" (UniqueName: \"kubernetes.io/projected/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-kube-api-access-vg58f\") pod \"c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9\" (UID: \"c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9\") " Jan 23 17:45:53 crc kubenswrapper[4606]: I0123 17:45:53.456517 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-inventory\") pod \"c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9\" (UID: \"c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9\") " Jan 23 17:45:53 crc kubenswrapper[4606]: I0123 17:45:53.456607 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-ssh-key-openstack-edpm-ipam\") pod \"c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9\" (UID: \"c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9\") " Jan 23 17:45:53 crc kubenswrapper[4606]: I0123 17:45:53.456696 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-ceilometer-compute-config-data-2\") pod \"c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9\" (UID: \"c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9\") " Jan 23 17:45:53 crc kubenswrapper[4606]: I0123 17:45:53.456745 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-telemetry-combined-ca-bundle\") pod \"c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9\" (UID: \"c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9\") " Jan 23 17:45:53 crc kubenswrapper[4606]: I0123 17:45:53.456791 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-ceilometer-compute-config-data-0\") pod \"c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9\" (UID: \"c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9\") " Jan 23 17:45:53 crc kubenswrapper[4606]: I0123 17:45:53.462436 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9" (UID: "c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:45:53 crc kubenswrapper[4606]: I0123 17:45:53.463736 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-kube-api-access-vg58f" (OuterVolumeSpecName: "kube-api-access-vg58f") pod "c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9" (UID: "c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9"). InnerVolumeSpecName "kube-api-access-vg58f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:45:53 crc kubenswrapper[4606]: I0123 17:45:53.484656 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9" (UID: "c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:45:53 crc kubenswrapper[4606]: I0123 17:45:53.485114 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9" (UID: "c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:45:53 crc kubenswrapper[4606]: I0123 17:45:53.486043 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-inventory" (OuterVolumeSpecName: "inventory") pod "c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9" (UID: "c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:45:53 crc kubenswrapper[4606]: I0123 17:45:53.504482 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9" (UID: "c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:45:53 crc kubenswrapper[4606]: I0123 17:45:53.505799 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9" (UID: "c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 17:45:53 crc kubenswrapper[4606]: I0123 17:45:53.559298 4606 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 17:45:53 crc kubenswrapper[4606]: I0123 17:45:53.559374 4606 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 23 17:45:53 crc kubenswrapper[4606]: I0123 17:45:53.559391 4606 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 17:45:53 crc kubenswrapper[4606]: I0123 17:45:53.559406 4606 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 23 17:45:53 crc kubenswrapper[4606]: I0123 17:45:53.559448 4606 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 23 17:45:53 crc kubenswrapper[4606]: I0123 17:45:53.559466 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vg58f\" (UniqueName: \"kubernetes.io/projected/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-kube-api-access-vg58f\") on node \"crc\" DevicePath \"\"" Jan 23 17:45:53 crc kubenswrapper[4606]: I0123 17:45:53.559479 4606 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 17:45:53 crc kubenswrapper[4606]: I0123 17:45:53.972697 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xv982" event={"ID":"c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9","Type":"ContainerDied","Data":"b58bc376ceeb7f7e97ca5db48cbd2c5e5e625f43343e87e845e56ac4fc00b672"} Jan 23 17:45:53 crc kubenswrapper[4606]: I0123 17:45:53.973078 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b58bc376ceeb7f7e97ca5db48cbd2c5e5e625f43343e87e845e56ac4fc00b672" Jan 23 17:45:53 crc kubenswrapper[4606]: I0123 17:45:53.973159 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xv982" Jan 23 17:46:05 crc kubenswrapper[4606]: I0123 17:46:05.367618 4606 scope.go:117] "RemoveContainer" containerID="4c4f0c64e879ca1b4dc439da2c1a13bf08272a3825a8fe1999ca7efcc35aff85" Jan 23 17:46:05 crc kubenswrapper[4606]: E0123 17:46:05.368327 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:46:16 crc kubenswrapper[4606]: I0123 17:46:16.375158 4606 scope.go:117] "RemoveContainer" containerID="4c4f0c64e879ca1b4dc439da2c1a13bf08272a3825a8fe1999ca7efcc35aff85" Jan 23 17:46:16 crc kubenswrapper[4606]: E0123 17:46:16.376238 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:46:31 crc kubenswrapper[4606]: I0123 17:46:31.366655 4606 scope.go:117] "RemoveContainer" containerID="4c4f0c64e879ca1b4dc439da2c1a13bf08272a3825a8fe1999ca7efcc35aff85" Jan 23 17:46:31 crc kubenswrapper[4606]: E0123 17:46:31.367624 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:46:46 crc kubenswrapper[4606]: I0123 17:46:46.374113 4606 scope.go:117] "RemoveContainer" containerID="4c4f0c64e879ca1b4dc439da2c1a13bf08272a3825a8fe1999ca7efcc35aff85" Jan 23 17:46:46 crc kubenswrapper[4606]: E0123 17:46:46.374929 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:46:58 crc kubenswrapper[4606]: I0123 17:46:58.022274 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xj7hq"] Jan 23 17:46:58 crc kubenswrapper[4606]: E0123 17:46:58.024076 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12534266-507c-405c-a579-817e94fcadf2" containerName="collect-profiles" Jan 23 17:46:58 crc kubenswrapper[4606]: I0123 17:46:58.024090 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="12534266-507c-405c-a579-817e94fcadf2" containerName="collect-profiles" Jan 23 17:46:58 crc kubenswrapper[4606]: E0123 17:46:58.024112 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9" 
containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 23 17:46:58 crc kubenswrapper[4606]: I0123 17:46:58.024119 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 23 17:46:58 crc kubenswrapper[4606]: I0123 17:46:58.024315 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="12534266-507c-405c-a579-817e94fcadf2" containerName="collect-profiles" Jan 23 17:46:58 crc kubenswrapper[4606]: I0123 17:46:58.024332 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 23 17:46:58 crc kubenswrapper[4606]: I0123 17:46:58.025918 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xj7hq" Jan 23 17:46:58 crc kubenswrapper[4606]: I0123 17:46:58.055941 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xj7hq"] Jan 23 17:46:58 crc kubenswrapper[4606]: I0123 17:46:58.162987 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg2wh\" (UniqueName: \"kubernetes.io/projected/7b40635e-2a01-4120-a283-a70c39ca0e20-kube-api-access-jg2wh\") pod \"redhat-marketplace-xj7hq\" (UID: \"7b40635e-2a01-4120-a283-a70c39ca0e20\") " pod="openshift-marketplace/redhat-marketplace-xj7hq" Jan 23 17:46:58 crc kubenswrapper[4606]: I0123 17:46:58.163031 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b40635e-2a01-4120-a283-a70c39ca0e20-utilities\") pod \"redhat-marketplace-xj7hq\" (UID: \"7b40635e-2a01-4120-a283-a70c39ca0e20\") " pod="openshift-marketplace/redhat-marketplace-xj7hq" Jan 23 17:46:58 crc kubenswrapper[4606]: I0123 17:46:58.163185 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b40635e-2a01-4120-a283-a70c39ca0e20-catalog-content\") pod \"redhat-marketplace-xj7hq\" (UID: \"7b40635e-2a01-4120-a283-a70c39ca0e20\") " pod="openshift-marketplace/redhat-marketplace-xj7hq" Jan 23 17:46:58 crc kubenswrapper[4606]: I0123 17:46:58.265185 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b40635e-2a01-4120-a283-a70c39ca0e20-catalog-content\") pod \"redhat-marketplace-xj7hq\" (UID: \"7b40635e-2a01-4120-a283-a70c39ca0e20\") " pod="openshift-marketplace/redhat-marketplace-xj7hq" Jan 23 17:46:58 crc kubenswrapper[4606]: I0123 17:46:58.265373 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg2wh\" (UniqueName: \"kubernetes.io/projected/7b40635e-2a01-4120-a283-a70c39ca0e20-kube-api-access-jg2wh\") pod \"redhat-marketplace-xj7hq\" (UID: \"7b40635e-2a01-4120-a283-a70c39ca0e20\") " pod="openshift-marketplace/redhat-marketplace-xj7hq" Jan 23 17:46:58 crc kubenswrapper[4606]: I0123 17:46:58.265394 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b40635e-2a01-4120-a283-a70c39ca0e20-utilities\") pod \"redhat-marketplace-xj7hq\" (UID: \"7b40635e-2a01-4120-a283-a70c39ca0e20\") " pod="openshift-marketplace/redhat-marketplace-xj7hq" Jan 23 17:46:58 crc kubenswrapper[4606]: 
Jan 23 17:46:58 crc kubenswrapper[4606]: I0123 17:46:58.266138 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b40635e-2a01-4120-a283-a70c39ca0e20-catalog-content\") pod \"redhat-marketplace-xj7hq\" (UID: \"7b40635e-2a01-4120-a283-a70c39ca0e20\") " pod="openshift-marketplace/redhat-marketplace-xj7hq"
Jan 23 17:46:58 crc kubenswrapper[4606]: I0123 17:46:58.300086 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg2wh\" (UniqueName: \"kubernetes.io/projected/7b40635e-2a01-4120-a283-a70c39ca0e20-kube-api-access-jg2wh\") pod \"redhat-marketplace-xj7hq\" (UID: \"7b40635e-2a01-4120-a283-a70c39ca0e20\") " pod="openshift-marketplace/redhat-marketplace-xj7hq"
Jan 23 17:46:58 crc kubenswrapper[4606]: I0123 17:46:58.350350 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xj7hq"
Jan 23 17:46:58 crc kubenswrapper[4606]: I0123 17:46:58.821090 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xj7hq"]
Jan 23 17:46:59 crc kubenswrapper[4606]: I0123 17:46:59.634321 4606 generic.go:334] "Generic (PLEG): container finished" podID="7b40635e-2a01-4120-a283-a70c39ca0e20" containerID="1ec0cb677c8806ef73ac2bdd5234a49e0647acd2539f356ea5e7ce2c0c77cdfe" exitCode=0
Jan 23 17:46:59 crc kubenswrapper[4606]: I0123 17:46:59.634462 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xj7hq" event={"ID":"7b40635e-2a01-4120-a283-a70c39ca0e20","Type":"ContainerDied","Data":"1ec0cb677c8806ef73ac2bdd5234a49e0647acd2539f356ea5e7ce2c0c77cdfe"}
Jan 23 17:46:59 crc kubenswrapper[4606]: I0123 17:46:59.634629 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xj7hq" event={"ID":"7b40635e-2a01-4120-a283-a70c39ca0e20","Type":"ContainerStarted","Data":"99c2aa1e075c10928673c165d720f8ec7939ab8c5299ad26b91db3a47c544b2c"}
Jan 23 17:46:59 crc kubenswrapper[4606]: I0123 17:46:59.636319 4606 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 23 17:47:00 crc kubenswrapper[4606]: I0123 17:47:00.365806 4606 scope.go:117] "RemoveContainer" containerID="4c4f0c64e879ca1b4dc439da2c1a13bf08272a3825a8fe1999ca7efcc35aff85"
Jan 23 17:47:00 crc kubenswrapper[4606]: E0123 17:47:00.366296 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9"
Jan 23 17:47:00 crc kubenswrapper[4606]: I0123 17:47:00.598978 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 23 17:47:00 crc kubenswrapper[4606]: I0123 17:47:00.601280 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 23 17:47:00 crc kubenswrapper[4606]: I0123 17:47:00.605099 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 23 17:47:00 crc kubenswrapper[4606]: I0123 17:47:00.605116 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-k4hqv" Jan 23 17:47:00 crc kubenswrapper[4606]: I0123 17:47:00.607678 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 23 17:47:00 crc kubenswrapper[4606]: I0123 17:47:00.608304 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 23 17:47:00 crc kubenswrapper[4606]: I0123 17:47:00.622293 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 23 17:47:00 crc kubenswrapper[4606]: I0123 17:47:00.731225 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztc5d\" (UniqueName: \"kubernetes.io/projected/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-kube-api-access-ztc5d\") pod \"tempest-tests-tempest\" (UID: \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\") " pod="openstack/tempest-tests-tempest" Jan 23 17:47:00 crc kubenswrapper[4606]: I0123 17:47:00.731993 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\") " pod="openstack/tempest-tests-tempest" Jan 23 17:47:00 crc kubenswrapper[4606]: I0123 17:47:00.732097 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\") " pod="openstack/tempest-tests-tempest" Jan 23 17:47:00 crc kubenswrapper[4606]: I0123 17:47:00.732251 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-config-data\") pod \"tempest-tests-tempest\" (UID: \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\") " pod="openstack/tempest-tests-tempest" Jan 23 17:47:00 crc kubenswrapper[4606]: I0123 17:47:00.732525 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\") " pod="openstack/tempest-tests-tempest" Jan 23 17:47:00 crc kubenswrapper[4606]: I0123 17:47:00.732619 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\") " pod="openstack/tempest-tests-tempest" Jan 23 17:47:00 crc kubenswrapper[4606]: I0123 17:47:00.732647 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/secret/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\") " pod="openstack/tempest-tests-tempest" Jan 23 17:47:00 crc kubenswrapper[4606]: I0123 17:47:00.732756 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\") " pod="openstack/tempest-tests-tempest" Jan 23 17:47:00 crc kubenswrapper[4606]: I0123 17:47:00.732919 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\") " pod="openstack/tempest-tests-tempest" Jan 23 17:47:00 crc kubenswrapper[4606]: I0123 17:47:00.837099 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\") " pod="openstack/tempest-tests-tempest" Jan 23 17:47:00 crc kubenswrapper[4606]: I0123 17:47:00.837175 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztc5d\" (UniqueName: \"kubernetes.io/projected/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-kube-api-access-ztc5d\") pod \"tempest-tests-tempest\" (UID: \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\") " pod="openstack/tempest-tests-tempest" Jan 23 17:47:00 crc kubenswrapper[4606]: I0123 17:47:00.837200 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\") " pod="openstack/tempest-tests-tempest" Jan 23 17:47:00 crc kubenswrapper[4606]: I0123 17:47:00.837276 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\") " pod="openstack/tempest-tests-tempest" Jan 23 17:47:00 crc kubenswrapper[4606]: I0123 17:47:00.837318 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-config-data\") pod \"tempest-tests-tempest\" (UID: \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\") " pod="openstack/tempest-tests-tempest" Jan 23 17:47:00 crc kubenswrapper[4606]: I0123 17:47:00.837377 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\") " pod="openstack/tempest-tests-tempest" Jan 23 17:47:00 crc kubenswrapper[4606]: I0123 17:47:00.837409 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\") " pod="openstack/tempest-tests-tempest" Jan 23 17:47:00 crc kubenswrapper[4606]: I0123 17:47:00.837431 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\") " pod="openstack/tempest-tests-tempest" Jan 23 17:47:00 crc kubenswrapper[4606]: I0123 17:47:00.837474 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\") " pod="openstack/tempest-tests-tempest" Jan 23 17:47:00 crc kubenswrapper[4606]: I0123 17:47:00.838422 4606 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/tempest-tests-tempest" Jan 23 17:47:00 crc kubenswrapper[4606]: I0123 17:47:00.838523 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\") " pod="openstack/tempest-tests-tempest" Jan 23 17:47:00 crc kubenswrapper[4606]: I0123 17:47:00.838831 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\") " pod="openstack/tempest-tests-tempest" Jan 23 17:47:00 crc kubenswrapper[4606]: I0123 17:47:00.839755 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\") " pod="openstack/tempest-tests-tempest" Jan 23 17:47:00 crc kubenswrapper[4606]: I0123 17:47:00.840457 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-config-data\") pod \"tempest-tests-tempest\" (UID: \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\") " pod="openstack/tempest-tests-tempest" Jan 23 17:47:00 crc kubenswrapper[4606]: I0123 17:47:00.847094 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\") " pod="openstack/tempest-tests-tempest" Jan 23 17:47:00 crc kubenswrapper[4606]: I0123 17:47:00.848754 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\") " pod="openstack/tempest-tests-tempest" Jan 23 17:47:00 crc 
Jan 23 17:47:00 crc kubenswrapper[4606]: I0123 17:47:00.865468 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztc5d\" (UniqueName: \"kubernetes.io/projected/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-kube-api-access-ztc5d\") pod \"tempest-tests-tempest\" (UID: \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\") " pod="openstack/tempest-tests-tempest"
Jan 23 17:47:00 crc kubenswrapper[4606]: I0123 17:47:00.871263 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\") " pod="openstack/tempest-tests-tempest"
Jan 23 17:47:00 crc kubenswrapper[4606]: I0123 17:47:00.917567 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 23 17:47:01 crc kubenswrapper[4606]: I0123 17:47:01.188172 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 23 17:47:01 crc kubenswrapper[4606]: I0123 17:47:01.659964 4606 generic.go:334] "Generic (PLEG): container finished" podID="7b40635e-2a01-4120-a283-a70c39ca0e20" containerID="1864fe57d3666f8d9c75c8c3b6b9ea6c86c9baae4c3460edd7e09deae64ec27f" exitCode=0
Jan 23 17:47:01 crc kubenswrapper[4606]: I0123 17:47:01.660484 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xj7hq" event={"ID":"7b40635e-2a01-4120-a283-a70c39ca0e20","Type":"ContainerDied","Data":"1864fe57d3666f8d9c75c8c3b6b9ea6c86c9baae4c3460edd7e09deae64ec27f"}
Jan 23 17:47:01 crc kubenswrapper[4606]: I0123 17:47:01.662278 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71","Type":"ContainerStarted","Data":"27f1e60ee3e1e290f7c82633527efdf508200795f22af5a97fd8df7246c0086f"}
Jan 23 17:47:03 crc kubenswrapper[4606]: I0123 17:47:03.701886 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xj7hq" event={"ID":"7b40635e-2a01-4120-a283-a70c39ca0e20","Type":"ContainerStarted","Data":"09990c44e49fdbdd9676de4badfce9497b06d6b716a3065b16ff4a022df42cca"}
Jan 23 17:47:03 crc kubenswrapper[4606]: I0123 17:47:03.721172 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xj7hq" podStartSLOduration=3.969194811 podStartE2EDuration="6.721158838s" podCreationTimestamp="2026-01-23 17:46:57 +0000 UTC" firstStartedPulling="2026-01-23 17:46:59.635979152 +0000 UTC m=+3253.636171548" lastFinishedPulling="2026-01-23 17:47:02.387943169 +0000 UTC m=+3256.388135575" observedRunningTime="2026-01-23 17:47:03.720961662 +0000 UTC m=+3257.721154098" watchObservedRunningTime="2026-01-23 17:47:03.721158838 +0000 UTC m=+3257.721351244"
Jan 23 17:47:08 crc kubenswrapper[4606]: I0123 17:47:08.155007 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wd7jn" podUID="00160898-cf32-4d05-8690-3dc95b50ca0e" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
output="Get \"http://10.217.0.47:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 17:47:08 crc kubenswrapper[4606]: I0123 17:47:08.351338 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xj7hq" Jan 23 17:47:08 crc kubenswrapper[4606]: I0123 17:47:08.352389 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xj7hq" Jan 23 17:47:08 crc kubenswrapper[4606]: I0123 17:47:08.423739 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xj7hq" Jan 23 17:47:08 crc kubenswrapper[4606]: I0123 17:47:08.808260 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xj7hq" Jan 23 17:47:08 crc kubenswrapper[4606]: I0123 17:47:08.922154 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xj7hq"] Jan 23 17:47:10 crc kubenswrapper[4606]: I0123 17:47:10.766767 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xj7hq" podUID="7b40635e-2a01-4120-a283-a70c39ca0e20" containerName="registry-server" containerID="cri-o://09990c44e49fdbdd9676de4badfce9497b06d6b716a3065b16ff4a022df42cca" gracePeriod=2 Jan 23 17:47:11 crc kubenswrapper[4606]: I0123 17:47:11.831764 4606 generic.go:334] "Generic (PLEG): container finished" podID="7b40635e-2a01-4120-a283-a70c39ca0e20" containerID="09990c44e49fdbdd9676de4badfce9497b06d6b716a3065b16ff4a022df42cca" exitCode=0 Jan 23 17:47:11 crc kubenswrapper[4606]: I0123 17:47:11.831806 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xj7hq" event={"ID":"7b40635e-2a01-4120-a283-a70c39ca0e20","Type":"ContainerDied","Data":"09990c44e49fdbdd9676de4badfce9497b06d6b716a3065b16ff4a022df42cca"} Jan 23 17:47:13 crc kubenswrapper[4606]: I0123 17:47:13.365766 4606 scope.go:117] "RemoveContainer" containerID="4c4f0c64e879ca1b4dc439da2c1a13bf08272a3825a8fe1999ca7efcc35aff85" Jan 23 17:47:13 crc kubenswrapper[4606]: E0123 17:47:13.366637 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:47:18 crc kubenswrapper[4606]: E0123 17:47:18.352201 4606 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 09990c44e49fdbdd9676de4badfce9497b06d6b716a3065b16ff4a022df42cca is running failed: container process not found" containerID="09990c44e49fdbdd9676de4badfce9497b06d6b716a3065b16ff4a022df42cca" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 17:47:18 crc kubenswrapper[4606]: E0123 17:47:18.353805 4606 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 09990c44e49fdbdd9676de4badfce9497b06d6b716a3065b16ff4a022df42cca is running failed: container process not found" containerID="09990c44e49fdbdd9676de4badfce9497b06d6b716a3065b16ff4a022df42cca" 
cmd=["grpc_health_probe","-addr=:50051"] Jan 23 17:47:18 crc kubenswrapper[4606]: E0123 17:47:18.354549 4606 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 09990c44e49fdbdd9676de4badfce9497b06d6b716a3065b16ff4a022df42cca is running failed: container process not found" containerID="09990c44e49fdbdd9676de4badfce9497b06d6b716a3065b16ff4a022df42cca" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 17:47:18 crc kubenswrapper[4606]: E0123 17:47:18.354580 4606 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 09990c44e49fdbdd9676de4badfce9497b06d6b716a3065b16ff4a022df42cca is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-xj7hq" podUID="7b40635e-2a01-4120-a283-a70c39ca0e20" containerName="registry-server" Jan 23 17:47:27 crc kubenswrapper[4606]: I0123 17:47:27.365371 4606 scope.go:117] "RemoveContainer" containerID="4c4f0c64e879ca1b4dc439da2c1a13bf08272a3825a8fe1999ca7efcc35aff85" Jan 23 17:47:27 crc kubenswrapper[4606]: E0123 17:47:27.366165 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:47:28 crc kubenswrapper[4606]: E0123 17:47:28.351377 4606 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 09990c44e49fdbdd9676de4badfce9497b06d6b716a3065b16ff4a022df42cca is running failed: container process not found" containerID="09990c44e49fdbdd9676de4badfce9497b06d6b716a3065b16ff4a022df42cca" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 17:47:28 crc kubenswrapper[4606]: E0123 17:47:28.352263 4606 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 09990c44e49fdbdd9676de4badfce9497b06d6b716a3065b16ff4a022df42cca is running failed: container process not found" containerID="09990c44e49fdbdd9676de4badfce9497b06d6b716a3065b16ff4a022df42cca" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 17:47:28 crc kubenswrapper[4606]: E0123 17:47:28.352606 4606 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 09990c44e49fdbdd9676de4badfce9497b06d6b716a3065b16ff4a022df42cca is running failed: container process not found" containerID="09990c44e49fdbdd9676de4badfce9497b06d6b716a3065b16ff4a022df42cca" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 17:47:28 crc kubenswrapper[4606]: E0123 17:47:28.352646 4606 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 09990c44e49fdbdd9676de4badfce9497b06d6b716a3065b16ff4a022df42cca is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-xj7hq" podUID="7b40635e-2a01-4120-a283-a70c39ca0e20" containerName="registry-server" Jan 23 17:47:34 crc kubenswrapper[4606]: I0123 17:47:34.092330 4606 scope.go:117] "RemoveContainer" 
containerID="1685b8624a8d11d248d987a90270f5a9f7733a20f9847d81950d5651013181c3" Jan 23 17:47:38 crc kubenswrapper[4606]: E0123 17:47:38.351450 4606 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 09990c44e49fdbdd9676de4badfce9497b06d6b716a3065b16ff4a022df42cca is running failed: container process not found" containerID="09990c44e49fdbdd9676de4badfce9497b06d6b716a3065b16ff4a022df42cca" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 17:47:38 crc kubenswrapper[4606]: E0123 17:47:38.352713 4606 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 09990c44e49fdbdd9676de4badfce9497b06d6b716a3065b16ff4a022df42cca is running failed: container process not found" containerID="09990c44e49fdbdd9676de4badfce9497b06d6b716a3065b16ff4a022df42cca" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 17:47:38 crc kubenswrapper[4606]: E0123 17:47:38.353062 4606 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 09990c44e49fdbdd9676de4badfce9497b06d6b716a3065b16ff4a022df42cca is running failed: container process not found" containerID="09990c44e49fdbdd9676de4badfce9497b06d6b716a3065b16ff4a022df42cca" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 17:47:38 crc kubenswrapper[4606]: E0123 17:47:38.353119 4606 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 09990c44e49fdbdd9676de4badfce9497b06d6b716a3065b16ff4a022df42cca is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-xj7hq" podUID="7b40635e-2a01-4120-a283-a70c39ca0e20" containerName="registry-server" Jan 23 17:47:38 crc kubenswrapper[4606]: I0123 17:47:38.366795 4606 scope.go:117] "RemoveContainer" containerID="4c4f0c64e879ca1b4dc439da2c1a13bf08272a3825a8fe1999ca7efcc35aff85" Jan 23 17:47:38 crc kubenswrapper[4606]: E0123 17:47:38.367020 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:47:47 crc kubenswrapper[4606]: E0123 17:47:47.937607 4606 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 23 17:47:47 crc kubenswrapper[4606]: E0123 17:47:47.939472 4606 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ztc5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(9c7b5ed0-95db-4e9a-b824-a6dc9596dd71): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 17:47:47 crc kubenswrapper[4606]: E0123 17:47:47.940627 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
podUID="9c7b5ed0-95db-4e9a-b824-a6dc9596dd71" Jan 23 17:47:48 crc kubenswrapper[4606]: I0123 17:47:48.274155 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xj7hq" event={"ID":"7b40635e-2a01-4120-a283-a70c39ca0e20","Type":"ContainerDied","Data":"99c2aa1e075c10928673c165d720f8ec7939ab8c5299ad26b91db3a47c544b2c"} Jan 23 17:47:48 crc kubenswrapper[4606]: I0123 17:47:48.274487 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99c2aa1e075c10928673c165d720f8ec7939ab8c5299ad26b91db3a47c544b2c" Jan 23 17:47:48 crc kubenswrapper[4606]: E0123 17:47:48.283936 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="9c7b5ed0-95db-4e9a-b824-a6dc9596dd71" Jan 23 17:47:48 crc kubenswrapper[4606]: I0123 17:47:48.302085 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xj7hq" Jan 23 17:47:48 crc kubenswrapper[4606]: I0123 17:47:48.460260 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b40635e-2a01-4120-a283-a70c39ca0e20-utilities\") pod \"7b40635e-2a01-4120-a283-a70c39ca0e20\" (UID: \"7b40635e-2a01-4120-a283-a70c39ca0e20\") " Jan 23 17:47:48 crc kubenswrapper[4606]: I0123 17:47:48.460387 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b40635e-2a01-4120-a283-a70c39ca0e20-catalog-content\") pod \"7b40635e-2a01-4120-a283-a70c39ca0e20\" (UID: \"7b40635e-2a01-4120-a283-a70c39ca0e20\") " Jan 23 17:47:48 crc kubenswrapper[4606]: I0123 17:47:48.460615 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jg2wh\" (UniqueName: \"kubernetes.io/projected/7b40635e-2a01-4120-a283-a70c39ca0e20-kube-api-access-jg2wh\") pod \"7b40635e-2a01-4120-a283-a70c39ca0e20\" (UID: \"7b40635e-2a01-4120-a283-a70c39ca0e20\") " Jan 23 17:47:48 crc kubenswrapper[4606]: I0123 17:47:48.461076 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b40635e-2a01-4120-a283-a70c39ca0e20-utilities" (OuterVolumeSpecName: "utilities") pod "7b40635e-2a01-4120-a283-a70c39ca0e20" (UID: "7b40635e-2a01-4120-a283-a70c39ca0e20"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:47:48 crc kubenswrapper[4606]: I0123 17:47:48.461333 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b40635e-2a01-4120-a283-a70c39ca0e20-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 17:47:48 crc kubenswrapper[4606]: I0123 17:47:48.466652 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b40635e-2a01-4120-a283-a70c39ca0e20-kube-api-access-jg2wh" (OuterVolumeSpecName: "kube-api-access-jg2wh") pod "7b40635e-2a01-4120-a283-a70c39ca0e20" (UID: "7b40635e-2a01-4120-a283-a70c39ca0e20"). InnerVolumeSpecName "kube-api-access-jg2wh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:47:48 crc kubenswrapper[4606]: I0123 17:47:48.472495 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b40635e-2a01-4120-a283-a70c39ca0e20-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b40635e-2a01-4120-a283-a70c39ca0e20" (UID: "7b40635e-2a01-4120-a283-a70c39ca0e20"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:47:48 crc kubenswrapper[4606]: I0123 17:47:48.563925 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b40635e-2a01-4120-a283-a70c39ca0e20-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 17:47:48 crc kubenswrapper[4606]: I0123 17:47:48.565115 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jg2wh\" (UniqueName: \"kubernetes.io/projected/7b40635e-2a01-4120-a283-a70c39ca0e20-kube-api-access-jg2wh\") on node \"crc\" DevicePath \"\"" Jan 23 17:47:49 crc kubenswrapper[4606]: I0123 17:47:49.291288 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xj7hq" Jan 23 17:47:49 crc kubenswrapper[4606]: I0123 17:47:49.336015 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xj7hq"] Jan 23 17:47:49 crc kubenswrapper[4606]: I0123 17:47:49.354754 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xj7hq"] Jan 23 17:47:49 crc kubenswrapper[4606]: I0123 17:47:49.365284 4606 scope.go:117] "RemoveContainer" containerID="4c4f0c64e879ca1b4dc439da2c1a13bf08272a3825a8fe1999ca7efcc35aff85" Jan 23 17:47:49 crc kubenswrapper[4606]: E0123 17:47:49.365645 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:47:50 crc kubenswrapper[4606]: I0123 17:47:50.376092 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b40635e-2a01-4120-a283-a70c39ca0e20" path="/var/lib/kubelet/pods/7b40635e-2a01-4120-a283-a70c39ca0e20/volumes" Jan 23 17:48:02 crc kubenswrapper[4606]: I0123 17:48:02.848813 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 23 17:48:03 crc kubenswrapper[4606]: I0123 17:48:03.365808 4606 scope.go:117] "RemoveContainer" containerID="4c4f0c64e879ca1b4dc439da2c1a13bf08272a3825a8fe1999ca7efcc35aff85" Jan 23 17:48:03 crc kubenswrapper[4606]: E0123 17:48:03.366448 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:48:04 crc kubenswrapper[4606]: I0123 17:48:04.450794 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71","Type":"ContainerStarted","Data":"2372aca5dabf9579ef99020b8497e3df6495f569c064ea17a893100abbb846e0"} Jan 23 17:48:04 crc kubenswrapper[4606]: I0123 17:48:04.474408 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.811807226 podStartE2EDuration="1m5.474389081s" podCreationTimestamp="2026-01-23 17:46:59 +0000 UTC" firstStartedPulling="2026-01-23 17:47:01.182843021 +0000 UTC m=+3255.183035427" lastFinishedPulling="2026-01-23 17:48:02.845424876 +0000 UTC m=+3316.845617282" observedRunningTime="2026-01-23 17:48:04.467441892 +0000 UTC m=+3318.467634298" watchObservedRunningTime="2026-01-23 17:48:04.474389081 +0000 UTC m=+3318.474581487" Jan 23 17:48:15 crc kubenswrapper[4606]: I0123 17:48:15.366421 4606 scope.go:117] "RemoveContainer" containerID="4c4f0c64e879ca1b4dc439da2c1a13bf08272a3825a8fe1999ca7efcc35aff85" Jan 23 17:48:15 crc kubenswrapper[4606]: E0123 17:48:15.367380 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:48:28 crc kubenswrapper[4606]: I0123 17:48:28.367528 4606 scope.go:117] "RemoveContainer" containerID="4c4f0c64e879ca1b4dc439da2c1a13bf08272a3825a8fe1999ca7efcc35aff85" Jan 23 17:48:28 crc kubenswrapper[4606]: E0123 17:48:28.368257 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:48:41 crc kubenswrapper[4606]: I0123 17:48:41.365846 4606 scope.go:117] "RemoveContainer" containerID="4c4f0c64e879ca1b4dc439da2c1a13bf08272a3825a8fe1999ca7efcc35aff85" Jan 23 17:48:41 crc kubenswrapper[4606]: E0123 17:48:41.366631 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:48:47 crc kubenswrapper[4606]: I0123 17:48:47.905185 4606 scope.go:117] "RemoveContainer" containerID="3ba3b631c614191068a9e89682c8e90c21c02228b55ccd4e3cd1c9153e200a59" Jan 23 17:48:47 crc kubenswrapper[4606]: I0123 17:48:47.969927 4606 scope.go:117] "RemoveContainer" containerID="7313c2c73abb9487ca2a8f18660a96e0852553e4821aefc8969edce526e7673e" Jan 23 17:48:55 crc kubenswrapper[4606]: I0123 17:48:55.366144 4606 scope.go:117] "RemoveContainer" containerID="4c4f0c64e879ca1b4dc439da2c1a13bf08272a3825a8fe1999ca7efcc35aff85" Jan 23 17:48:55 crc kubenswrapper[4606]: E0123 17:48:55.368636 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
Jan 23 17:48:15 crc kubenswrapper[4606]: I0123 17:48:15.366421 4606 scope.go:117] "RemoveContainer" containerID="4c4f0c64e879ca1b4dc439da2c1a13bf08272a3825a8fe1999ca7efcc35aff85"
Jan 23 17:48:15 crc kubenswrapper[4606]: E0123 17:48:15.367380 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9"
Jan 23 17:48:28 crc kubenswrapper[4606]: I0123 17:48:28.367528 4606 scope.go:117] "RemoveContainer" containerID="4c4f0c64e879ca1b4dc439da2c1a13bf08272a3825a8fe1999ca7efcc35aff85"
Jan 23 17:48:28 crc kubenswrapper[4606]: E0123 17:48:28.368257 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9"
Jan 23 17:48:41 crc kubenswrapper[4606]: I0123 17:48:41.365846 4606 scope.go:117] "RemoveContainer" containerID="4c4f0c64e879ca1b4dc439da2c1a13bf08272a3825a8fe1999ca7efcc35aff85"
Jan 23 17:48:41 crc kubenswrapper[4606]: E0123 17:48:41.366631 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9"
Jan 23 17:48:47 crc kubenswrapper[4606]: I0123 17:48:47.905185 4606 scope.go:117] "RemoveContainer" containerID="3ba3b631c614191068a9e89682c8e90c21c02228b55ccd4e3cd1c9153e200a59"
Jan 23 17:48:47 crc kubenswrapper[4606]: I0123 17:48:47.969927 4606 scope.go:117] "RemoveContainer" containerID="7313c2c73abb9487ca2a8f18660a96e0852553e4821aefc8969edce526e7673e"
Jan 23 17:48:55 crc kubenswrapper[4606]: I0123 17:48:55.366144 4606 scope.go:117] "RemoveContainer" containerID="4c4f0c64e879ca1b4dc439da2c1a13bf08272a3825a8fe1999ca7efcc35aff85"
Jan 23 17:48:55 crc kubenswrapper[4606]: E0123 17:48:55.368636 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9"
Jan 23 17:49:08 crc kubenswrapper[4606]: I0123 17:49:08.367001 4606 scope.go:117] "RemoveContainer" containerID="4c4f0c64e879ca1b4dc439da2c1a13bf08272a3825a8fe1999ca7efcc35aff85"
Jan 23 17:49:08 crc kubenswrapper[4606]: E0123 17:49:08.367742 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9"
Jan 23 17:49:19 crc kubenswrapper[4606]: I0123 17:49:19.366157 4606 scope.go:117] "RemoveContainer" containerID="4c4f0c64e879ca1b4dc439da2c1a13bf08272a3825a8fe1999ca7efcc35aff85"
Jan 23 17:49:19 crc kubenswrapper[4606]: E0123 17:49:19.367113 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9"
Jan 23 17:49:31 crc kubenswrapper[4606]: I0123 17:49:31.366331 4606 scope.go:117] "RemoveContainer" containerID="4c4f0c64e879ca1b4dc439da2c1a13bf08272a3825a8fe1999ca7efcc35aff85"
Jan 23 17:49:31 crc kubenswrapper[4606]: E0123 17:49:31.367172 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9"
Jan 23 17:49:46 crc kubenswrapper[4606]: I0123 17:49:46.383908 4606 scope.go:117] "RemoveContainer" containerID="4c4f0c64e879ca1b4dc439da2c1a13bf08272a3825a8fe1999ca7efcc35aff85"
Jan 23 17:49:46 crc kubenswrapper[4606]: E0123 17:49:46.384935 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9"
Jan 23 17:50:01 crc kubenswrapper[4606]: I0123 17:50:01.365494 4606 scope.go:117] "RemoveContainer" containerID="4c4f0c64e879ca1b4dc439da2c1a13bf08272a3825a8fe1999ca7efcc35aff85"
Jan 23 17:50:02 crc kubenswrapper[4606]: I0123 17:50:02.756424 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerStarted","Data":"bbf8c33a355ea39d0592857bee43fdbd252655274b417873c23d637ab24a621a"}
Jan 23 17:51:09 crc kubenswrapper[4606]: I0123 17:51:09.686801 4606 patch_prober.go:28] interesting pod/controller-manager-64798f646d-zr8l4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 23 17:51:09 crc kubenswrapper[4606]: I0123 17:51:09.687475 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-64798f646d-zr8l4" podUID="5fa74e1f-8662-49be-960e-9e41e648a3b7" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
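The "Client.Timeout exceeded while awaiting headers" output means the HTTPS GET to /healthz did not return response headers within the probe's client timeout; the endpoint was reachable but slow, unlike the "connection refused" failures elsewhere in this log. A minimal sketch of such a probe, assuming an illustrative 1s timeout (kubelet skips certificate verification for HTTPS httpGet probes, mimicked here):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Hard overall deadline: header wait, body read, everything.
	client := &http.Client{
		Timeout: 1 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://10.217.0.64:8443/healthz")
	if err != nil {
		// A slow endpoint surfaces as "Client.Timeout exceeded while
		// awaiting headers"; a dead one as "connection refused".
		fmt.Println("probe failure:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe status:", resp.Status)
}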
Need to start a new one" pod="openshift-marketplace/community-operators-qmtsz" Jan 23 17:51:50 crc kubenswrapper[4606]: I0123 17:51:50.024250 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ae3b06d-4744-419a-9c98-ac1380ca712a-catalog-content\") pod \"community-operators-qmtsz\" (UID: \"8ae3b06d-4744-419a-9c98-ac1380ca712a\") " pod="openshift-marketplace/community-operators-qmtsz" Jan 23 17:51:50 crc kubenswrapper[4606]: I0123 17:51:50.024290 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zsnm\" (UniqueName: \"kubernetes.io/projected/8ae3b06d-4744-419a-9c98-ac1380ca712a-kube-api-access-2zsnm\") pod \"community-operators-qmtsz\" (UID: \"8ae3b06d-4744-419a-9c98-ac1380ca712a\") " pod="openshift-marketplace/community-operators-qmtsz" Jan 23 17:51:50 crc kubenswrapper[4606]: I0123 17:51:50.025743 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ae3b06d-4744-419a-9c98-ac1380ca712a-utilities\") pod \"community-operators-qmtsz\" (UID: \"8ae3b06d-4744-419a-9c98-ac1380ca712a\") " pod="openshift-marketplace/community-operators-qmtsz" Jan 23 17:51:50 crc kubenswrapper[4606]: I0123 17:51:50.064667 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qmtsz"] Jan 23 17:51:50 crc kubenswrapper[4606]: I0123 17:51:50.127787 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ae3b06d-4744-419a-9c98-ac1380ca712a-utilities\") pod \"community-operators-qmtsz\" (UID: \"8ae3b06d-4744-419a-9c98-ac1380ca712a\") " pod="openshift-marketplace/community-operators-qmtsz" Jan 23 17:51:50 crc kubenswrapper[4606]: I0123 17:51:50.127905 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ae3b06d-4744-419a-9c98-ac1380ca712a-catalog-content\") pod \"community-operators-qmtsz\" (UID: \"8ae3b06d-4744-419a-9c98-ac1380ca712a\") " pod="openshift-marketplace/community-operators-qmtsz" Jan 23 17:51:50 crc kubenswrapper[4606]: I0123 17:51:50.127927 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zsnm\" (UniqueName: \"kubernetes.io/projected/8ae3b06d-4744-419a-9c98-ac1380ca712a-kube-api-access-2zsnm\") pod \"community-operators-qmtsz\" (UID: \"8ae3b06d-4744-419a-9c98-ac1380ca712a\") " pod="openshift-marketplace/community-operators-qmtsz" Jan 23 17:51:50 crc kubenswrapper[4606]: I0123 17:51:50.128579 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ae3b06d-4744-419a-9c98-ac1380ca712a-utilities\") pod \"community-operators-qmtsz\" (UID: \"8ae3b06d-4744-419a-9c98-ac1380ca712a\") " pod="openshift-marketplace/community-operators-qmtsz" Jan 23 17:51:50 crc kubenswrapper[4606]: I0123 17:51:50.128805 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ae3b06d-4744-419a-9c98-ac1380ca712a-catalog-content\") pod \"community-operators-qmtsz\" (UID: \"8ae3b06d-4744-419a-9c98-ac1380ca712a\") " pod="openshift-marketplace/community-operators-qmtsz" Jan 23 17:51:50 crc kubenswrapper[4606]: I0123 17:51:50.148383 4606 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2zsnm\" (UniqueName: \"kubernetes.io/projected/8ae3b06d-4744-419a-9c98-ac1380ca712a-kube-api-access-2zsnm\") pod \"community-operators-qmtsz\" (UID: \"8ae3b06d-4744-419a-9c98-ac1380ca712a\") " pod="openshift-marketplace/community-operators-qmtsz" Jan 23 17:51:50 crc kubenswrapper[4606]: I0123 17:51:50.319151 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qmtsz" Jan 23 17:51:51 crc kubenswrapper[4606]: I0123 17:51:51.384394 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qmtsz"] Jan 23 17:51:52 crc kubenswrapper[4606]: I0123 17:51:52.169376 4606 generic.go:334] "Generic (PLEG): container finished" podID="8ae3b06d-4744-419a-9c98-ac1380ca712a" containerID="d0c4c07f8f323f2a60268bb60d50a3234348ecc0342508083d0dccc5524818e8" exitCode=0 Jan 23 17:51:52 crc kubenswrapper[4606]: I0123 17:51:52.169418 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qmtsz" event={"ID":"8ae3b06d-4744-419a-9c98-ac1380ca712a","Type":"ContainerDied","Data":"d0c4c07f8f323f2a60268bb60d50a3234348ecc0342508083d0dccc5524818e8"} Jan 23 17:51:52 crc kubenswrapper[4606]: I0123 17:51:52.169442 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qmtsz" event={"ID":"8ae3b06d-4744-419a-9c98-ac1380ca712a","Type":"ContainerStarted","Data":"232eb09e306dec45ae4a53ed4caaaa8ea191c2e0a12c4f2d02dc73bc2457d898"} Jan 23 17:51:54 crc kubenswrapper[4606]: I0123 17:51:54.186732 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qmtsz" event={"ID":"8ae3b06d-4744-419a-9c98-ac1380ca712a","Type":"ContainerStarted","Data":"35041e63a659c38d146bf2df77381fa5e81051bed8dc47a58f285ebbc8f47740"} Jan 23 17:51:55 crc kubenswrapper[4606]: I0123 17:51:55.198133 4606 generic.go:334] "Generic (PLEG): container finished" podID="8ae3b06d-4744-419a-9c98-ac1380ca712a" containerID="35041e63a659c38d146bf2df77381fa5e81051bed8dc47a58f285ebbc8f47740" exitCode=0 Jan 23 17:51:55 crc kubenswrapper[4606]: I0123 17:51:55.198195 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qmtsz" event={"ID":"8ae3b06d-4744-419a-9c98-ac1380ca712a","Type":"ContainerDied","Data":"35041e63a659c38d146bf2df77381fa5e81051bed8dc47a58f285ebbc8f47740"} Jan 23 17:51:56 crc kubenswrapper[4606]: I0123 17:51:56.208509 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qmtsz" event={"ID":"8ae3b06d-4744-419a-9c98-ac1380ca712a","Type":"ContainerStarted","Data":"4294e89f03f89bfe40ab49e63953ddc7fd8f57f3a6bbaa4aa971bf8701093f4c"} Jan 23 17:51:56 crc kubenswrapper[4606]: I0123 17:51:56.235111 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qmtsz" podStartSLOduration=3.815029582 podStartE2EDuration="7.235092596s" podCreationTimestamp="2026-01-23 17:51:49 +0000 UTC" firstStartedPulling="2026-01-23 17:51:52.172388538 +0000 UTC m=+3546.172580964" lastFinishedPulling="2026-01-23 17:51:55.592451572 +0000 UTC m=+3549.592643978" observedRunningTime="2026-01-23 17:51:56.229377557 +0000 UTC m=+3550.229569983" watchObservedRunningTime="2026-01-23 17:51:56.235092596 +0000 UTC m=+3550.235285002" Jan 23 17:51:58 crc kubenswrapper[4606]: I0123 17:51:58.885282 4606 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-kf4j8"] Jan 23 17:51:58 crc kubenswrapper[4606]: I0123 17:51:58.887508 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kf4j8" Jan 23 17:51:58 crc kubenswrapper[4606]: I0123 17:51:58.919783 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kf4j8"] Jan 23 17:51:58 crc kubenswrapper[4606]: I0123 17:51:58.992868 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzklq\" (UniqueName: \"kubernetes.io/projected/57754903-cf64-4244-8402-580040ed6192-kube-api-access-jzklq\") pod \"redhat-operators-kf4j8\" (UID: \"57754903-cf64-4244-8402-580040ed6192\") " pod="openshift-marketplace/redhat-operators-kf4j8" Jan 23 17:51:58 crc kubenswrapper[4606]: I0123 17:51:58.992932 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57754903-cf64-4244-8402-580040ed6192-catalog-content\") pod \"redhat-operators-kf4j8\" (UID: \"57754903-cf64-4244-8402-580040ed6192\") " pod="openshift-marketplace/redhat-operators-kf4j8" Jan 23 17:51:58 crc kubenswrapper[4606]: I0123 17:51:58.993123 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57754903-cf64-4244-8402-580040ed6192-utilities\") pod \"redhat-operators-kf4j8\" (UID: \"57754903-cf64-4244-8402-580040ed6192\") " pod="openshift-marketplace/redhat-operators-kf4j8" Jan 23 17:51:59 crc kubenswrapper[4606]: I0123 17:51:59.094763 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzklq\" (UniqueName: \"kubernetes.io/projected/57754903-cf64-4244-8402-580040ed6192-kube-api-access-jzklq\") pod \"redhat-operators-kf4j8\" (UID: \"57754903-cf64-4244-8402-580040ed6192\") " pod="openshift-marketplace/redhat-operators-kf4j8" Jan 23 17:51:59 crc kubenswrapper[4606]: I0123 17:51:59.094830 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57754903-cf64-4244-8402-580040ed6192-catalog-content\") pod \"redhat-operators-kf4j8\" (UID: \"57754903-cf64-4244-8402-580040ed6192\") " pod="openshift-marketplace/redhat-operators-kf4j8" Jan 23 17:51:59 crc kubenswrapper[4606]: I0123 17:51:59.094889 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57754903-cf64-4244-8402-580040ed6192-utilities\") pod \"redhat-operators-kf4j8\" (UID: \"57754903-cf64-4244-8402-580040ed6192\") " pod="openshift-marketplace/redhat-operators-kf4j8" Jan 23 17:51:59 crc kubenswrapper[4606]: I0123 17:51:59.095554 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57754903-cf64-4244-8402-580040ed6192-catalog-content\") pod \"redhat-operators-kf4j8\" (UID: \"57754903-cf64-4244-8402-580040ed6192\") " pod="openshift-marketplace/redhat-operators-kf4j8" Jan 23 17:51:59 crc kubenswrapper[4606]: I0123 17:51:59.095570 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57754903-cf64-4244-8402-580040ed6192-utilities\") pod \"redhat-operators-kf4j8\" (UID: \"57754903-cf64-4244-8402-580040ed6192\") " 
pod="openshift-marketplace/redhat-operators-kf4j8" Jan 23 17:51:59 crc kubenswrapper[4606]: I0123 17:51:59.122549 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzklq\" (UniqueName: \"kubernetes.io/projected/57754903-cf64-4244-8402-580040ed6192-kube-api-access-jzklq\") pod \"redhat-operators-kf4j8\" (UID: \"57754903-cf64-4244-8402-580040ed6192\") " pod="openshift-marketplace/redhat-operators-kf4j8" Jan 23 17:51:59 crc kubenswrapper[4606]: I0123 17:51:59.217147 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kf4j8" Jan 23 17:51:59 crc kubenswrapper[4606]: W0123 17:51:59.759008 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57754903_cf64_4244_8402_580040ed6192.slice/crio-ffb88e35c4d40b02e6346c2e9e0505f3ccdbe5dcfaf57f56d23fd5d354833998 WatchSource:0}: Error finding container ffb88e35c4d40b02e6346c2e9e0505f3ccdbe5dcfaf57f56d23fd5d354833998: Status 404 returned error can't find the container with id ffb88e35c4d40b02e6346c2e9e0505f3ccdbe5dcfaf57f56d23fd5d354833998 Jan 23 17:51:59 crc kubenswrapper[4606]: I0123 17:51:59.774949 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kf4j8"] Jan 23 17:52:00 crc kubenswrapper[4606]: I0123 17:52:00.243503 4606 generic.go:334] "Generic (PLEG): container finished" podID="57754903-cf64-4244-8402-580040ed6192" containerID="5d0dd167404294d3bc84029bcab4e17e9fc23b508571d03beb13cc86797de05c" exitCode=0 Jan 23 17:52:00 crc kubenswrapper[4606]: I0123 17:52:00.243575 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kf4j8" event={"ID":"57754903-cf64-4244-8402-580040ed6192","Type":"ContainerDied","Data":"5d0dd167404294d3bc84029bcab4e17e9fc23b508571d03beb13cc86797de05c"} Jan 23 17:52:00 crc kubenswrapper[4606]: I0123 17:52:00.243808 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kf4j8" event={"ID":"57754903-cf64-4244-8402-580040ed6192","Type":"ContainerStarted","Data":"ffb88e35c4d40b02e6346c2e9e0505f3ccdbe5dcfaf57f56d23fd5d354833998"} Jan 23 17:52:00 crc kubenswrapper[4606]: I0123 17:52:00.245765 4606 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 17:52:00 crc kubenswrapper[4606]: I0123 17:52:00.319556 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qmtsz" Jan 23 17:52:00 crc kubenswrapper[4606]: I0123 17:52:00.320719 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qmtsz" Jan 23 17:52:00 crc kubenswrapper[4606]: I0123 17:52:00.380218 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qmtsz" Jan 23 17:52:01 crc kubenswrapper[4606]: I0123 17:52:01.304795 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qmtsz" Jan 23 17:52:02 crc kubenswrapper[4606]: I0123 17:52:02.264467 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kf4j8" event={"ID":"57754903-cf64-4244-8402-580040ed6192","Type":"ContainerStarted","Data":"d5f77168f102954701f3946df7d22e46083c431f66dc7a9bc308559d03d0b4ec"} Jan 23 17:52:02 crc kubenswrapper[4606]: I0123 17:52:02.667068 4606 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qmtsz"] Jan 23 17:52:04 crc kubenswrapper[4606]: I0123 17:52:04.403485 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qmtsz" podUID="8ae3b06d-4744-419a-9c98-ac1380ca712a" containerName="registry-server" containerID="cri-o://4294e89f03f89bfe40ab49e63953ddc7fd8f57f3a6bbaa4aa971bf8701093f4c" gracePeriod=2 Jan 23 17:52:05 crc kubenswrapper[4606]: I0123 17:52:05.217345 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qmtsz" Jan 23 17:52:05 crc kubenswrapper[4606]: I0123 17:52:05.258919 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zsnm\" (UniqueName: \"kubernetes.io/projected/8ae3b06d-4744-419a-9c98-ac1380ca712a-kube-api-access-2zsnm\") pod \"8ae3b06d-4744-419a-9c98-ac1380ca712a\" (UID: \"8ae3b06d-4744-419a-9c98-ac1380ca712a\") " Jan 23 17:52:05 crc kubenswrapper[4606]: I0123 17:52:05.258967 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ae3b06d-4744-419a-9c98-ac1380ca712a-utilities\") pod \"8ae3b06d-4744-419a-9c98-ac1380ca712a\" (UID: \"8ae3b06d-4744-419a-9c98-ac1380ca712a\") " Jan 23 17:52:05 crc kubenswrapper[4606]: I0123 17:52:05.259118 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ae3b06d-4744-419a-9c98-ac1380ca712a-catalog-content\") pod \"8ae3b06d-4744-419a-9c98-ac1380ca712a\" (UID: \"8ae3b06d-4744-419a-9c98-ac1380ca712a\") " Jan 23 17:52:05 crc kubenswrapper[4606]: I0123 17:52:05.260771 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ae3b06d-4744-419a-9c98-ac1380ca712a-utilities" (OuterVolumeSpecName: "utilities") pod "8ae3b06d-4744-419a-9c98-ac1380ca712a" (UID: "8ae3b06d-4744-419a-9c98-ac1380ca712a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:52:05 crc kubenswrapper[4606]: I0123 17:52:05.266800 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ae3b06d-4744-419a-9c98-ac1380ca712a-kube-api-access-2zsnm" (OuterVolumeSpecName: "kube-api-access-2zsnm") pod "8ae3b06d-4744-419a-9c98-ac1380ca712a" (UID: "8ae3b06d-4744-419a-9c98-ac1380ca712a"). InnerVolumeSpecName "kube-api-access-2zsnm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:52:05 crc kubenswrapper[4606]: I0123 17:52:05.297484 4606 generic.go:334] "Generic (PLEG): container finished" podID="57754903-cf64-4244-8402-580040ed6192" containerID="d5f77168f102954701f3946df7d22e46083c431f66dc7a9bc308559d03d0b4ec" exitCode=0 Jan 23 17:52:05 crc kubenswrapper[4606]: I0123 17:52:05.297530 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kf4j8" event={"ID":"57754903-cf64-4244-8402-580040ed6192","Type":"ContainerDied","Data":"d5f77168f102954701f3946df7d22e46083c431f66dc7a9bc308559d03d0b4ec"} Jan 23 17:52:05 crc kubenswrapper[4606]: I0123 17:52:05.302178 4606 generic.go:334] "Generic (PLEG): container finished" podID="8ae3b06d-4744-419a-9c98-ac1380ca712a" containerID="4294e89f03f89bfe40ab49e63953ddc7fd8f57f3a6bbaa4aa971bf8701093f4c" exitCode=0 Jan 23 17:52:05 crc kubenswrapper[4606]: I0123 17:52:05.302225 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qmtsz" event={"ID":"8ae3b06d-4744-419a-9c98-ac1380ca712a","Type":"ContainerDied","Data":"4294e89f03f89bfe40ab49e63953ddc7fd8f57f3a6bbaa4aa971bf8701093f4c"} Jan 23 17:52:05 crc kubenswrapper[4606]: I0123 17:52:05.302220 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qmtsz" Jan 23 17:52:05 crc kubenswrapper[4606]: I0123 17:52:05.302253 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qmtsz" event={"ID":"8ae3b06d-4744-419a-9c98-ac1380ca712a","Type":"ContainerDied","Data":"232eb09e306dec45ae4a53ed4caaaa8ea191c2e0a12c4f2d02dc73bc2457d898"} Jan 23 17:52:05 crc kubenswrapper[4606]: I0123 17:52:05.302272 4606 scope.go:117] "RemoveContainer" containerID="4294e89f03f89bfe40ab49e63953ddc7fd8f57f3a6bbaa4aa971bf8701093f4c" Jan 23 17:52:05 crc kubenswrapper[4606]: I0123 17:52:05.455440 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zsnm\" (UniqueName: \"kubernetes.io/projected/8ae3b06d-4744-419a-9c98-ac1380ca712a-kube-api-access-2zsnm\") on node \"crc\" DevicePath \"\"" Jan 23 17:52:05 crc kubenswrapper[4606]: I0123 17:52:05.455467 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ae3b06d-4744-419a-9c98-ac1380ca712a-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 17:52:05 crc kubenswrapper[4606]: I0123 17:52:05.465682 4606 scope.go:117] "RemoveContainer" containerID="35041e63a659c38d146bf2df77381fa5e81051bed8dc47a58f285ebbc8f47740" Jan 23 17:52:05 crc kubenswrapper[4606]: I0123 17:52:05.479713 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ae3b06d-4744-419a-9c98-ac1380ca712a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ae3b06d-4744-419a-9c98-ac1380ca712a" (UID: "8ae3b06d-4744-419a-9c98-ac1380ca712a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:52:05 crc kubenswrapper[4606]: I0123 17:52:05.497262 4606 scope.go:117] "RemoveContainer" containerID="d0c4c07f8f323f2a60268bb60d50a3234348ecc0342508083d0dccc5524818e8" Jan 23 17:52:05 crc kubenswrapper[4606]: I0123 17:52:05.544163 4606 scope.go:117] "RemoveContainer" containerID="4294e89f03f89bfe40ab49e63953ddc7fd8f57f3a6bbaa4aa971bf8701093f4c" Jan 23 17:52:05 crc kubenswrapper[4606]: E0123 17:52:05.544931 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4294e89f03f89bfe40ab49e63953ddc7fd8f57f3a6bbaa4aa971bf8701093f4c\": container with ID starting with 4294e89f03f89bfe40ab49e63953ddc7fd8f57f3a6bbaa4aa971bf8701093f4c not found: ID does not exist" containerID="4294e89f03f89bfe40ab49e63953ddc7fd8f57f3a6bbaa4aa971bf8701093f4c" Jan 23 17:52:05 crc kubenswrapper[4606]: I0123 17:52:05.544974 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4294e89f03f89bfe40ab49e63953ddc7fd8f57f3a6bbaa4aa971bf8701093f4c"} err="failed to get container status \"4294e89f03f89bfe40ab49e63953ddc7fd8f57f3a6bbaa4aa971bf8701093f4c\": rpc error: code = NotFound desc = could not find container \"4294e89f03f89bfe40ab49e63953ddc7fd8f57f3a6bbaa4aa971bf8701093f4c\": container with ID starting with 4294e89f03f89bfe40ab49e63953ddc7fd8f57f3a6bbaa4aa971bf8701093f4c not found: ID does not exist" Jan 23 17:52:05 crc kubenswrapper[4606]: I0123 17:52:05.545078 4606 scope.go:117] "RemoveContainer" containerID="35041e63a659c38d146bf2df77381fa5e81051bed8dc47a58f285ebbc8f47740" Jan 23 17:52:05 crc kubenswrapper[4606]: E0123 17:52:05.545477 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35041e63a659c38d146bf2df77381fa5e81051bed8dc47a58f285ebbc8f47740\": container with ID starting with 35041e63a659c38d146bf2df77381fa5e81051bed8dc47a58f285ebbc8f47740 not found: ID does not exist" containerID="35041e63a659c38d146bf2df77381fa5e81051bed8dc47a58f285ebbc8f47740" Jan 23 17:52:05 crc kubenswrapper[4606]: I0123 17:52:05.545509 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35041e63a659c38d146bf2df77381fa5e81051bed8dc47a58f285ebbc8f47740"} err="failed to get container status \"35041e63a659c38d146bf2df77381fa5e81051bed8dc47a58f285ebbc8f47740\": rpc error: code = NotFound desc = could not find container \"35041e63a659c38d146bf2df77381fa5e81051bed8dc47a58f285ebbc8f47740\": container with ID starting with 35041e63a659c38d146bf2df77381fa5e81051bed8dc47a58f285ebbc8f47740 not found: ID does not exist" Jan 23 17:52:05 crc kubenswrapper[4606]: I0123 17:52:05.545527 4606 scope.go:117] "RemoveContainer" containerID="d0c4c07f8f323f2a60268bb60d50a3234348ecc0342508083d0dccc5524818e8" Jan 23 17:52:05 crc kubenswrapper[4606]: E0123 17:52:05.545780 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0c4c07f8f323f2a60268bb60d50a3234348ecc0342508083d0dccc5524818e8\": container with ID starting with d0c4c07f8f323f2a60268bb60d50a3234348ecc0342508083d0dccc5524818e8 not found: ID does not exist" containerID="d0c4c07f8f323f2a60268bb60d50a3234348ecc0342508083d0dccc5524818e8" Jan 23 17:52:05 crc kubenswrapper[4606]: I0123 17:52:05.545808 4606 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d0c4c07f8f323f2a60268bb60d50a3234348ecc0342508083d0dccc5524818e8"} err="failed to get container status \"d0c4c07f8f323f2a60268bb60d50a3234348ecc0342508083d0dccc5524818e8\": rpc error: code = NotFound desc = could not find container \"d0c4c07f8f323f2a60268bb60d50a3234348ecc0342508083d0dccc5524818e8\": container with ID starting with d0c4c07f8f323f2a60268bb60d50a3234348ecc0342508083d0dccc5524818e8 not found: ID does not exist" Jan 23 17:52:05 crc kubenswrapper[4606]: I0123 17:52:05.557384 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ae3b06d-4744-419a-9c98-ac1380ca712a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 17:52:05 crc kubenswrapper[4606]: I0123 17:52:05.639937 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qmtsz"] Jan 23 17:52:05 crc kubenswrapper[4606]: I0123 17:52:05.648357 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qmtsz"] Jan 23 17:52:06 crc kubenswrapper[4606]: I0123 17:52:06.312873 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kf4j8" event={"ID":"57754903-cf64-4244-8402-580040ed6192","Type":"ContainerStarted","Data":"77661fa381e63e6347d5205509fc35fbf205b8b386d6258503b163f46b5ac2dd"} Jan 23 17:52:06 crc kubenswrapper[4606]: I0123 17:52:06.366803 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kf4j8" podStartSLOduration=2.709783276 podStartE2EDuration="8.366783325s" podCreationTimestamp="2026-01-23 17:51:58 +0000 UTC" firstStartedPulling="2026-01-23 17:52:00.245490433 +0000 UTC m=+3554.245682839" lastFinishedPulling="2026-01-23 17:52:05.902490452 +0000 UTC m=+3559.902682888" observedRunningTime="2026-01-23 17:52:06.363498786 +0000 UTC m=+3560.363691192" watchObservedRunningTime="2026-01-23 17:52:06.366783325 +0000 UTC m=+3560.366975731" Jan 23 17:52:06 crc kubenswrapper[4606]: I0123 17:52:06.385051 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ae3b06d-4744-419a-9c98-ac1380ca712a" path="/var/lib/kubelet/pods/8ae3b06d-4744-419a-9c98-ac1380ca712a/volumes" Jan 23 17:52:09 crc kubenswrapper[4606]: I0123 17:52:09.217318 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kf4j8" Jan 23 17:52:09 crc kubenswrapper[4606]: I0123 17:52:09.218527 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kf4j8" Jan 23 17:52:10 crc kubenswrapper[4606]: I0123 17:52:10.269226 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kf4j8" podUID="57754903-cf64-4244-8402-580040ed6192" containerName="registry-server" probeResult="failure" output=< Jan 23 17:52:10 crc kubenswrapper[4606]: timeout: failed to connect service ":50051" within 1s Jan 23 17:52:10 crc kubenswrapper[4606]: > Jan 23 17:52:19 crc kubenswrapper[4606]: I0123 17:52:19.269524 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kf4j8" Jan 23 17:52:19 crc kubenswrapper[4606]: I0123 17:52:19.320139 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kf4j8" Jan 23 17:52:19 crc kubenswrapper[4606]: I0123 17:52:19.506765 4606 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/redhat-operators-kf4j8"] Jan 23 17:52:20 crc kubenswrapper[4606]: I0123 17:52:20.439264 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kf4j8" podUID="57754903-cf64-4244-8402-580040ed6192" containerName="registry-server" containerID="cri-o://77661fa381e63e6347d5205509fc35fbf205b8b386d6258503b163f46b5ac2dd" gracePeriod=2 Jan 23 17:52:21 crc kubenswrapper[4606]: I0123 17:52:21.022982 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kf4j8" Jan 23 17:52:21 crc kubenswrapper[4606]: I0123 17:52:21.194881 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57754903-cf64-4244-8402-580040ed6192-utilities\") pod \"57754903-cf64-4244-8402-580040ed6192\" (UID: \"57754903-cf64-4244-8402-580040ed6192\") " Jan 23 17:52:21 crc kubenswrapper[4606]: I0123 17:52:21.195032 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57754903-cf64-4244-8402-580040ed6192-catalog-content\") pod \"57754903-cf64-4244-8402-580040ed6192\" (UID: \"57754903-cf64-4244-8402-580040ed6192\") " Jan 23 17:52:21 crc kubenswrapper[4606]: I0123 17:52:21.195181 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzklq\" (UniqueName: \"kubernetes.io/projected/57754903-cf64-4244-8402-580040ed6192-kube-api-access-jzklq\") pod \"57754903-cf64-4244-8402-580040ed6192\" (UID: \"57754903-cf64-4244-8402-580040ed6192\") " Jan 23 17:52:21 crc kubenswrapper[4606]: I0123 17:52:21.196508 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57754903-cf64-4244-8402-580040ed6192-utilities" (OuterVolumeSpecName: "utilities") pod "57754903-cf64-4244-8402-580040ed6192" (UID: "57754903-cf64-4244-8402-580040ed6192"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:52:21 crc kubenswrapper[4606]: I0123 17:52:21.201757 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57754903-cf64-4244-8402-580040ed6192-kube-api-access-jzklq" (OuterVolumeSpecName: "kube-api-access-jzklq") pod "57754903-cf64-4244-8402-580040ed6192" (UID: "57754903-cf64-4244-8402-580040ed6192"). InnerVolumeSpecName "kube-api-access-jzklq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:52:21 crc kubenswrapper[4606]: I0123 17:52:21.298399 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzklq\" (UniqueName: \"kubernetes.io/projected/57754903-cf64-4244-8402-580040ed6192-kube-api-access-jzklq\") on node \"crc\" DevicePath \"\"" Jan 23 17:52:21 crc kubenswrapper[4606]: I0123 17:52:21.298453 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57754903-cf64-4244-8402-580040ed6192-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 17:52:21 crc kubenswrapper[4606]: I0123 17:52:21.363890 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57754903-cf64-4244-8402-580040ed6192-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57754903-cf64-4244-8402-580040ed6192" (UID: "57754903-cf64-4244-8402-580040ed6192"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:52:21 crc kubenswrapper[4606]: I0123 17:52:21.400255 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57754903-cf64-4244-8402-580040ed6192-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 17:52:21 crc kubenswrapper[4606]: I0123 17:52:21.448622 4606 generic.go:334] "Generic (PLEG): container finished" podID="57754903-cf64-4244-8402-580040ed6192" containerID="77661fa381e63e6347d5205509fc35fbf205b8b386d6258503b163f46b5ac2dd" exitCode=0 Jan 23 17:52:21 crc kubenswrapper[4606]: I0123 17:52:21.448678 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kf4j8" event={"ID":"57754903-cf64-4244-8402-580040ed6192","Type":"ContainerDied","Data":"77661fa381e63e6347d5205509fc35fbf205b8b386d6258503b163f46b5ac2dd"} Jan 23 17:52:21 crc kubenswrapper[4606]: I0123 17:52:21.448716 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kf4j8" event={"ID":"57754903-cf64-4244-8402-580040ed6192","Type":"ContainerDied","Data":"ffb88e35c4d40b02e6346c2e9e0505f3ccdbe5dcfaf57f56d23fd5d354833998"} Jan 23 17:52:21 crc kubenswrapper[4606]: I0123 17:52:21.448744 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kf4j8" Jan 23 17:52:21 crc kubenswrapper[4606]: I0123 17:52:21.448755 4606 scope.go:117] "RemoveContainer" containerID="77661fa381e63e6347d5205509fc35fbf205b8b386d6258503b163f46b5ac2dd" Jan 23 17:52:21 crc kubenswrapper[4606]: I0123 17:52:21.482659 4606 scope.go:117] "RemoveContainer" containerID="d5f77168f102954701f3946df7d22e46083c431f66dc7a9bc308559d03d0b4ec" Jan 23 17:52:21 crc kubenswrapper[4606]: I0123 17:52:21.496805 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kf4j8"] Jan 23 17:52:21 crc kubenswrapper[4606]: I0123 17:52:21.509675 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kf4j8"] Jan 23 17:52:21 crc kubenswrapper[4606]: I0123 17:52:21.532101 4606 scope.go:117] "RemoveContainer" containerID="5d0dd167404294d3bc84029bcab4e17e9fc23b508571d03beb13cc86797de05c" Jan 23 17:52:21 crc kubenswrapper[4606]: I0123 17:52:21.578143 4606 scope.go:117] "RemoveContainer" containerID="77661fa381e63e6347d5205509fc35fbf205b8b386d6258503b163f46b5ac2dd" Jan 23 17:52:21 crc kubenswrapper[4606]: E0123 17:52:21.578805 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77661fa381e63e6347d5205509fc35fbf205b8b386d6258503b163f46b5ac2dd\": container with ID starting with 77661fa381e63e6347d5205509fc35fbf205b8b386d6258503b163f46b5ac2dd not found: ID does not exist" containerID="77661fa381e63e6347d5205509fc35fbf205b8b386d6258503b163f46b5ac2dd" Jan 23 17:52:21 crc kubenswrapper[4606]: I0123 17:52:21.578837 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77661fa381e63e6347d5205509fc35fbf205b8b386d6258503b163f46b5ac2dd"} err="failed to get container status \"77661fa381e63e6347d5205509fc35fbf205b8b386d6258503b163f46b5ac2dd\": rpc error: code = NotFound desc = could not find container \"77661fa381e63e6347d5205509fc35fbf205b8b386d6258503b163f46b5ac2dd\": container with ID starting with 77661fa381e63e6347d5205509fc35fbf205b8b386d6258503b163f46b5ac2dd not found: ID does not exist" Jan 23 17:52:21 crc 
Jan 23 17:52:21 crc kubenswrapper[4606]: I0123 17:52:21.578858 4606 scope.go:117] "RemoveContainer" containerID="d5f77168f102954701f3946df7d22e46083c431f66dc7a9bc308559d03d0b4ec"
Jan 23 17:52:21 crc kubenswrapper[4606]: E0123 17:52:21.579124 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5f77168f102954701f3946df7d22e46083c431f66dc7a9bc308559d03d0b4ec\": container with ID starting with d5f77168f102954701f3946df7d22e46083c431f66dc7a9bc308559d03d0b4ec not found: ID does not exist" containerID="d5f77168f102954701f3946df7d22e46083c431f66dc7a9bc308559d03d0b4ec"
Jan 23 17:52:21 crc kubenswrapper[4606]: I0123 17:52:21.579144 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5f77168f102954701f3946df7d22e46083c431f66dc7a9bc308559d03d0b4ec"} err="failed to get container status \"d5f77168f102954701f3946df7d22e46083c431f66dc7a9bc308559d03d0b4ec\": rpc error: code = NotFound desc = could not find container \"d5f77168f102954701f3946df7d22e46083c431f66dc7a9bc308559d03d0b4ec\": container with ID starting with d5f77168f102954701f3946df7d22e46083c431f66dc7a9bc308559d03d0b4ec not found: ID does not exist"
Jan 23 17:52:21 crc kubenswrapper[4606]: I0123 17:52:21.579156 4606 scope.go:117] "RemoveContainer" containerID="5d0dd167404294d3bc84029bcab4e17e9fc23b508571d03beb13cc86797de05c"
Jan 23 17:52:21 crc kubenswrapper[4606]: E0123 17:52:21.579389 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d0dd167404294d3bc84029bcab4e17e9fc23b508571d03beb13cc86797de05c\": container with ID starting with 5d0dd167404294d3bc84029bcab4e17e9fc23b508571d03beb13cc86797de05c not found: ID does not exist" containerID="5d0dd167404294d3bc84029bcab4e17e9fc23b508571d03beb13cc86797de05c"
Jan 23 17:52:21 crc kubenswrapper[4606]: I0123 17:52:21.579410 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d0dd167404294d3bc84029bcab4e17e9fc23b508571d03beb13cc86797de05c"} err="failed to get container status \"5d0dd167404294d3bc84029bcab4e17e9fc23b508571d03beb13cc86797de05c\": rpc error: code = NotFound desc = could not find container \"5d0dd167404294d3bc84029bcab4e17e9fc23b508571d03beb13cc86797de05c\": container with ID starting with 5d0dd167404294d3bc84029bcab4e17e9fc23b508571d03beb13cc86797de05c not found: ID does not exist"
Jan 23 17:52:22 crc kubenswrapper[4606]: I0123 17:52:22.377731 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57754903-cf64-4244-8402-580040ed6192" path="/var/lib/kubelet/pods/57754903-cf64-4244-8402-580040ed6192/volumes"
Jan 23 17:52:29 crc kubenswrapper[4606]: I0123 17:52:29.934368 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 17:52:29 crc kubenswrapper[4606]: I0123 17:52:29.936279 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
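Note the cadence of these machine-config-daemon liveness failures: 17:52:29, 17:52:59, and (further below) 17:53:29, i.e. one probe every 30s, with the restart only triggered after the third consecutive failure. That is consistent with periodSeconds=30 and failureThreshold=3, though both values are inferred from the timestamps here, not stated in the log. A back-of-the-envelope sketch of the resulting detection latency:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Inferred from the probe timestamps (17:52:29, 17:52:59, 17:53:29);
	// the pod spec itself is not shown, so these values are assumptions.
	period := 30 * time.Second
	failureThreshold := 3

	// Worst-case time from "container stops answering" to restart decision.
	toRestart := time.Duration(failureThreshold) * period
	fmt.Println("max detection latency:", toRestart) // 1m30s
}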
Jan 23 17:52:59 crc kubenswrapper[4606]: I0123 17:52:59.934618 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 17:52:59 crc kubenswrapper[4606]: I0123 17:52:59.935188 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 17:53:17 crc kubenswrapper[4606]: I0123 17:53:17.792900 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hzgrp"]
Jan 23 17:53:17 crc kubenswrapper[4606]: E0123 17:53:17.793651 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57754903-cf64-4244-8402-580040ed6192" containerName="extract-content"
Jan 23 17:53:17 crc kubenswrapper[4606]: I0123 17:53:17.793665 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="57754903-cf64-4244-8402-580040ed6192" containerName="extract-content"
Jan 23 17:53:17 crc kubenswrapper[4606]: E0123 17:53:17.793681 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ae3b06d-4744-419a-9c98-ac1380ca712a" containerName="registry-server"
Jan 23 17:53:17 crc kubenswrapper[4606]: I0123 17:53:17.793698 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ae3b06d-4744-419a-9c98-ac1380ca712a" containerName="registry-server"
Jan 23 17:53:17 crc kubenswrapper[4606]: E0123 17:53:17.793712 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57754903-cf64-4244-8402-580040ed6192" containerName="registry-server"
Jan 23 17:53:17 crc kubenswrapper[4606]: I0123 17:53:17.793742 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="57754903-cf64-4244-8402-580040ed6192" containerName="registry-server"
Jan 23 17:53:17 crc kubenswrapper[4606]: E0123 17:53:17.793758 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57754903-cf64-4244-8402-580040ed6192" containerName="extract-utilities"
Jan 23 17:53:17 crc kubenswrapper[4606]: I0123 17:53:17.793764 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="57754903-cf64-4244-8402-580040ed6192" containerName="extract-utilities"
Jan 23 17:53:17 crc kubenswrapper[4606]: E0123 17:53:17.793777 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ae3b06d-4744-419a-9c98-ac1380ca712a" containerName="extract-utilities"
Jan 23 17:53:17 crc kubenswrapper[4606]: I0123 17:53:17.793782 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ae3b06d-4744-419a-9c98-ac1380ca712a" containerName="extract-utilities"
Jan 23 17:53:17 crc kubenswrapper[4606]: E0123 17:53:17.793803 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ae3b06d-4744-419a-9c98-ac1380ca712a" containerName="extract-content"
Jan 23 17:53:17 crc kubenswrapper[4606]: I0123 17:53:17.793809 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ae3b06d-4744-419a-9c98-ac1380ca712a" containerName="extract-content"
Jan 23 17:53:17 crc kubenswrapper[4606]: I0123 17:53:17.793962 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ae3b06d-4744-419a-9c98-ac1380ca712a" containerName="registry-server"
Jan 23 17:53:17 crc kubenswrapper[4606]: I0123 17:53:17.793990 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="57754903-cf64-4244-8402-580040ed6192" containerName="registry-server"
podUID="57754903-cf64-4244-8402-580040ed6192" containerName="registry-server" Jan 23 17:53:17 crc kubenswrapper[4606]: I0123 17:53:17.800797 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hzgrp" Jan 23 17:53:17 crc kubenswrapper[4606]: I0123 17:53:17.803543 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hzgrp"] Jan 23 17:53:17 crc kubenswrapper[4606]: I0123 17:53:17.809378 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzpwd\" (UniqueName: \"kubernetes.io/projected/da831932-52c8-4527-9cf3-3094bdb2dd76-kube-api-access-jzpwd\") pod \"certified-operators-hzgrp\" (UID: \"da831932-52c8-4527-9cf3-3094bdb2dd76\") " pod="openshift-marketplace/certified-operators-hzgrp" Jan 23 17:53:17 crc kubenswrapper[4606]: I0123 17:53:17.809437 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da831932-52c8-4527-9cf3-3094bdb2dd76-utilities\") pod \"certified-operators-hzgrp\" (UID: \"da831932-52c8-4527-9cf3-3094bdb2dd76\") " pod="openshift-marketplace/certified-operators-hzgrp" Jan 23 17:53:17 crc kubenswrapper[4606]: I0123 17:53:17.809488 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da831932-52c8-4527-9cf3-3094bdb2dd76-catalog-content\") pod \"certified-operators-hzgrp\" (UID: \"da831932-52c8-4527-9cf3-3094bdb2dd76\") " pod="openshift-marketplace/certified-operators-hzgrp" Jan 23 17:53:17 crc kubenswrapper[4606]: I0123 17:53:17.912084 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzpwd\" (UniqueName: \"kubernetes.io/projected/da831932-52c8-4527-9cf3-3094bdb2dd76-kube-api-access-jzpwd\") pod \"certified-operators-hzgrp\" (UID: \"da831932-52c8-4527-9cf3-3094bdb2dd76\") " pod="openshift-marketplace/certified-operators-hzgrp" Jan 23 17:53:17 crc kubenswrapper[4606]: I0123 17:53:17.912185 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da831932-52c8-4527-9cf3-3094bdb2dd76-utilities\") pod \"certified-operators-hzgrp\" (UID: \"da831932-52c8-4527-9cf3-3094bdb2dd76\") " pod="openshift-marketplace/certified-operators-hzgrp" Jan 23 17:53:17 crc kubenswrapper[4606]: I0123 17:53:17.912259 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da831932-52c8-4527-9cf3-3094bdb2dd76-catalog-content\") pod \"certified-operators-hzgrp\" (UID: \"da831932-52c8-4527-9cf3-3094bdb2dd76\") " pod="openshift-marketplace/certified-operators-hzgrp" Jan 23 17:53:17 crc kubenswrapper[4606]: I0123 17:53:17.912845 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da831932-52c8-4527-9cf3-3094bdb2dd76-catalog-content\") pod \"certified-operators-hzgrp\" (UID: \"da831932-52c8-4527-9cf3-3094bdb2dd76\") " pod="openshift-marketplace/certified-operators-hzgrp" Jan 23 17:53:17 crc kubenswrapper[4606]: I0123 17:53:17.912902 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da831932-52c8-4527-9cf3-3094bdb2dd76-utilities\") pod \"certified-operators-hzgrp\" (UID: 
\"da831932-52c8-4527-9cf3-3094bdb2dd76\") " pod="openshift-marketplace/certified-operators-hzgrp" Jan 23 17:53:17 crc kubenswrapper[4606]: I0123 17:53:17.934422 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzpwd\" (UniqueName: \"kubernetes.io/projected/da831932-52c8-4527-9cf3-3094bdb2dd76-kube-api-access-jzpwd\") pod \"certified-operators-hzgrp\" (UID: \"da831932-52c8-4527-9cf3-3094bdb2dd76\") " pod="openshift-marketplace/certified-operators-hzgrp" Jan 23 17:53:18 crc kubenswrapper[4606]: I0123 17:53:18.121621 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hzgrp" Jan 23 17:53:18 crc kubenswrapper[4606]: I0123 17:53:18.767897 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hzgrp"] Jan 23 17:53:18 crc kubenswrapper[4606]: I0123 17:53:18.952761 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzgrp" event={"ID":"da831932-52c8-4527-9cf3-3094bdb2dd76","Type":"ContainerStarted","Data":"08e5300ca7c202cb65c8f30e5518e40db259b9ffcfe1ea04eafafcbd47c775f5"} Jan 23 17:53:19 crc kubenswrapper[4606]: I0123 17:53:19.962897 4606 generic.go:334] "Generic (PLEG): container finished" podID="da831932-52c8-4527-9cf3-3094bdb2dd76" containerID="8abf9972f76dc66b192de3381a78f6f8ee901f6fad52c55bc075fecfee8a1e2b" exitCode=0 Jan 23 17:53:19 crc kubenswrapper[4606]: I0123 17:53:19.962983 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzgrp" event={"ID":"da831932-52c8-4527-9cf3-3094bdb2dd76","Type":"ContainerDied","Data":"8abf9972f76dc66b192de3381a78f6f8ee901f6fad52c55bc075fecfee8a1e2b"} Jan 23 17:53:21 crc kubenswrapper[4606]: I0123 17:53:21.984694 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzgrp" event={"ID":"da831932-52c8-4527-9cf3-3094bdb2dd76","Type":"ContainerStarted","Data":"5fdf66c0804ff6c7a367aaa66a5c326ea5a1585c0cf5d85c6723579ad824ae64"} Jan 23 17:53:22 crc kubenswrapper[4606]: I0123 17:53:22.994724 4606 generic.go:334] "Generic (PLEG): container finished" podID="da831932-52c8-4527-9cf3-3094bdb2dd76" containerID="5fdf66c0804ff6c7a367aaa66a5c326ea5a1585c0cf5d85c6723579ad824ae64" exitCode=0 Jan 23 17:53:22 crc kubenswrapper[4606]: I0123 17:53:22.994800 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzgrp" event={"ID":"da831932-52c8-4527-9cf3-3094bdb2dd76","Type":"ContainerDied","Data":"5fdf66c0804ff6c7a367aaa66a5c326ea5a1585c0cf5d85c6723579ad824ae64"} Jan 23 17:53:25 crc kubenswrapper[4606]: I0123 17:53:25.013962 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzgrp" event={"ID":"da831932-52c8-4527-9cf3-3094bdb2dd76","Type":"ContainerStarted","Data":"43e5d826e438f3cfb8ed23f6d3c4030d1ab06c41ab60527a4d04ceeb688a2f44"} Jan 23 17:53:25 crc kubenswrapper[4606]: I0123 17:53:25.036071 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hzgrp" podStartSLOduration=3.780236056 podStartE2EDuration="8.036050772s" podCreationTimestamp="2026-01-23 17:53:17 +0000 UTC" firstStartedPulling="2026-01-23 17:53:19.964492559 +0000 UTC m=+3633.964684955" lastFinishedPulling="2026-01-23 17:53:24.220307265 +0000 UTC m=+3638.220499671" observedRunningTime="2026-01-23 17:53:25.034840803 +0000 UTC m=+3639.035033209" 
watchObservedRunningTime="2026-01-23 17:53:25.036050772 +0000 UTC m=+3639.036243168" Jan 23 17:53:28 crc kubenswrapper[4606]: I0123 17:53:28.122637 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hzgrp" Jan 23 17:53:28 crc kubenswrapper[4606]: I0123 17:53:28.124080 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hzgrp" Jan 23 17:53:28 crc kubenswrapper[4606]: I0123 17:53:28.181285 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hzgrp" Jan 23 17:53:29 crc kubenswrapper[4606]: I0123 17:53:29.109894 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hzgrp" Jan 23 17:53:29 crc kubenswrapper[4606]: I0123 17:53:29.155227 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hzgrp"] Jan 23 17:53:29 crc kubenswrapper[4606]: I0123 17:53:29.934450 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:53:29 crc kubenswrapper[4606]: I0123 17:53:29.934546 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:53:29 crc kubenswrapper[4606]: I0123 17:53:29.934689 4606 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" Jan 23 17:53:29 crc kubenswrapper[4606]: I0123 17:53:29.935757 4606 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bbf8c33a355ea39d0592857bee43fdbd252655274b417873c23d637ab24a621a"} pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 17:53:29 crc kubenswrapper[4606]: I0123 17:53:29.935862 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" containerID="cri-o://bbf8c33a355ea39d0592857bee43fdbd252655274b417873c23d637ab24a621a" gracePeriod=600 Jan 23 17:53:30 crc kubenswrapper[4606]: I0123 17:53:30.089338 4606 generic.go:334] "Generic (PLEG): container finished" podID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerID="bbf8c33a355ea39d0592857bee43fdbd252655274b417873c23d637ab24a621a" exitCode=0 Jan 23 17:53:30 crc kubenswrapper[4606]: I0123 17:53:30.089525 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerDied","Data":"bbf8c33a355ea39d0592857bee43fdbd252655274b417873c23d637ab24a621a"} Jan 23 17:53:30 crc kubenswrapper[4606]: I0123 17:53:30.089840 4606 scope.go:117] "RemoveContainer" 
containerID="4c4f0c64e879ca1b4dc439da2c1a13bf08272a3825a8fe1999ca7efcc35aff85" Jan 23 17:53:31 crc kubenswrapper[4606]: I0123 17:53:31.105806 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerStarted","Data":"50df6cf30909fa2c692b021189d19939b1278bfa82d44ec68856cc0645bc1125"} Jan 23 17:53:31 crc kubenswrapper[4606]: I0123 17:53:31.105985 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hzgrp" podUID="da831932-52c8-4527-9cf3-3094bdb2dd76" containerName="registry-server" containerID="cri-o://43e5d826e438f3cfb8ed23f6d3c4030d1ab06c41ab60527a4d04ceeb688a2f44" gracePeriod=2 Jan 23 17:53:31 crc kubenswrapper[4606]: I0123 17:53:31.791478 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hzgrp" Jan 23 17:53:31 crc kubenswrapper[4606]: I0123 17:53:31.933802 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da831932-52c8-4527-9cf3-3094bdb2dd76-catalog-content\") pod \"da831932-52c8-4527-9cf3-3094bdb2dd76\" (UID: \"da831932-52c8-4527-9cf3-3094bdb2dd76\") " Jan 23 17:53:31 crc kubenswrapper[4606]: I0123 17:53:31.933882 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzpwd\" (UniqueName: \"kubernetes.io/projected/da831932-52c8-4527-9cf3-3094bdb2dd76-kube-api-access-jzpwd\") pod \"da831932-52c8-4527-9cf3-3094bdb2dd76\" (UID: \"da831932-52c8-4527-9cf3-3094bdb2dd76\") " Jan 23 17:53:31 crc kubenswrapper[4606]: I0123 17:53:31.934201 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da831932-52c8-4527-9cf3-3094bdb2dd76-utilities\") pod \"da831932-52c8-4527-9cf3-3094bdb2dd76\" (UID: \"da831932-52c8-4527-9cf3-3094bdb2dd76\") " Jan 23 17:53:31 crc kubenswrapper[4606]: I0123 17:53:31.935269 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da831932-52c8-4527-9cf3-3094bdb2dd76-utilities" (OuterVolumeSpecName: "utilities") pod "da831932-52c8-4527-9cf3-3094bdb2dd76" (UID: "da831932-52c8-4527-9cf3-3094bdb2dd76"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:53:31 crc kubenswrapper[4606]: I0123 17:53:31.942124 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da831932-52c8-4527-9cf3-3094bdb2dd76-kube-api-access-jzpwd" (OuterVolumeSpecName: "kube-api-access-jzpwd") pod "da831932-52c8-4527-9cf3-3094bdb2dd76" (UID: "da831932-52c8-4527-9cf3-3094bdb2dd76"). InnerVolumeSpecName "kube-api-access-jzpwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:53:31 crc kubenswrapper[4606]: I0123 17:53:31.987955 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da831932-52c8-4527-9cf3-3094bdb2dd76-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da831932-52c8-4527-9cf3-3094bdb2dd76" (UID: "da831932-52c8-4527-9cf3-3094bdb2dd76"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:53:32 crc kubenswrapper[4606]: I0123 17:53:32.036080 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzpwd\" (UniqueName: \"kubernetes.io/projected/da831932-52c8-4527-9cf3-3094bdb2dd76-kube-api-access-jzpwd\") on node \"crc\" DevicePath \"\"" Jan 23 17:53:32 crc kubenswrapper[4606]: I0123 17:53:32.036116 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da831932-52c8-4527-9cf3-3094bdb2dd76-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 17:53:32 crc kubenswrapper[4606]: I0123 17:53:32.036126 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da831932-52c8-4527-9cf3-3094bdb2dd76-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 17:53:32 crc kubenswrapper[4606]: I0123 17:53:32.116864 4606 generic.go:334] "Generic (PLEG): container finished" podID="da831932-52c8-4527-9cf3-3094bdb2dd76" containerID="43e5d826e438f3cfb8ed23f6d3c4030d1ab06c41ab60527a4d04ceeb688a2f44" exitCode=0 Jan 23 17:53:32 crc kubenswrapper[4606]: I0123 17:53:32.118431 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hzgrp" Jan 23 17:53:32 crc kubenswrapper[4606]: I0123 17:53:32.118713 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzgrp" event={"ID":"da831932-52c8-4527-9cf3-3094bdb2dd76","Type":"ContainerDied","Data":"43e5d826e438f3cfb8ed23f6d3c4030d1ab06c41ab60527a4d04ceeb688a2f44"} Jan 23 17:53:32 crc kubenswrapper[4606]: I0123 17:53:32.118765 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzgrp" event={"ID":"da831932-52c8-4527-9cf3-3094bdb2dd76","Type":"ContainerDied","Data":"08e5300ca7c202cb65c8f30e5518e40db259b9ffcfe1ea04eafafcbd47c775f5"} Jan 23 17:53:32 crc kubenswrapper[4606]: I0123 17:53:32.118784 4606 scope.go:117] "RemoveContainer" containerID="43e5d826e438f3cfb8ed23f6d3c4030d1ab06c41ab60527a4d04ceeb688a2f44" Jan 23 17:53:32 crc kubenswrapper[4606]: I0123 17:53:32.157911 4606 scope.go:117] "RemoveContainer" containerID="5fdf66c0804ff6c7a367aaa66a5c326ea5a1585c0cf5d85c6723579ad824ae64" Jan 23 17:53:32 crc kubenswrapper[4606]: I0123 17:53:32.164109 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hzgrp"] Jan 23 17:53:32 crc kubenswrapper[4606]: I0123 17:53:32.182222 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hzgrp"] Jan 23 17:53:32 crc kubenswrapper[4606]: I0123 17:53:32.191768 4606 scope.go:117] "RemoveContainer" containerID="8abf9972f76dc66b192de3381a78f6f8ee901f6fad52c55bc075fecfee8a1e2b" Jan 23 17:53:32 crc kubenswrapper[4606]: I0123 17:53:32.234504 4606 scope.go:117] "RemoveContainer" containerID="43e5d826e438f3cfb8ed23f6d3c4030d1ab06c41ab60527a4d04ceeb688a2f44" Jan 23 17:53:32 crc kubenswrapper[4606]: E0123 17:53:32.235115 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43e5d826e438f3cfb8ed23f6d3c4030d1ab06c41ab60527a4d04ceeb688a2f44\": container with ID starting with 43e5d826e438f3cfb8ed23f6d3c4030d1ab06c41ab60527a4d04ceeb688a2f44 not found: ID does not exist" containerID="43e5d826e438f3cfb8ed23f6d3c4030d1ab06c41ab60527a4d04ceeb688a2f44" Jan 23 17:53:32 crc kubenswrapper[4606]: I0123 17:53:32.235161 
4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43e5d826e438f3cfb8ed23f6d3c4030d1ab06c41ab60527a4d04ceeb688a2f44"} err="failed to get container status \"43e5d826e438f3cfb8ed23f6d3c4030d1ab06c41ab60527a4d04ceeb688a2f44\": rpc error: code = NotFound desc = could not find container \"43e5d826e438f3cfb8ed23f6d3c4030d1ab06c41ab60527a4d04ceeb688a2f44\": container with ID starting with 43e5d826e438f3cfb8ed23f6d3c4030d1ab06c41ab60527a4d04ceeb688a2f44 not found: ID does not exist" Jan 23 17:53:32 crc kubenswrapper[4606]: I0123 17:53:32.235191 4606 scope.go:117] "RemoveContainer" containerID="5fdf66c0804ff6c7a367aaa66a5c326ea5a1585c0cf5d85c6723579ad824ae64" Jan 23 17:53:32 crc kubenswrapper[4606]: E0123 17:53:32.235647 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fdf66c0804ff6c7a367aaa66a5c326ea5a1585c0cf5d85c6723579ad824ae64\": container with ID starting with 5fdf66c0804ff6c7a367aaa66a5c326ea5a1585c0cf5d85c6723579ad824ae64 not found: ID does not exist" containerID="5fdf66c0804ff6c7a367aaa66a5c326ea5a1585c0cf5d85c6723579ad824ae64" Jan 23 17:53:32 crc kubenswrapper[4606]: I0123 17:53:32.235672 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fdf66c0804ff6c7a367aaa66a5c326ea5a1585c0cf5d85c6723579ad824ae64"} err="failed to get container status \"5fdf66c0804ff6c7a367aaa66a5c326ea5a1585c0cf5d85c6723579ad824ae64\": rpc error: code = NotFound desc = could not find container \"5fdf66c0804ff6c7a367aaa66a5c326ea5a1585c0cf5d85c6723579ad824ae64\": container with ID starting with 5fdf66c0804ff6c7a367aaa66a5c326ea5a1585c0cf5d85c6723579ad824ae64 not found: ID does not exist" Jan 23 17:53:32 crc kubenswrapper[4606]: I0123 17:53:32.235688 4606 scope.go:117] "RemoveContainer" containerID="8abf9972f76dc66b192de3381a78f6f8ee901f6fad52c55bc075fecfee8a1e2b" Jan 23 17:53:32 crc kubenswrapper[4606]: E0123 17:53:32.236153 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8abf9972f76dc66b192de3381a78f6f8ee901f6fad52c55bc075fecfee8a1e2b\": container with ID starting with 8abf9972f76dc66b192de3381a78f6f8ee901f6fad52c55bc075fecfee8a1e2b not found: ID does not exist" containerID="8abf9972f76dc66b192de3381a78f6f8ee901f6fad52c55bc075fecfee8a1e2b" Jan 23 17:53:32 crc kubenswrapper[4606]: I0123 17:53:32.236192 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8abf9972f76dc66b192de3381a78f6f8ee901f6fad52c55bc075fecfee8a1e2b"} err="failed to get container status \"8abf9972f76dc66b192de3381a78f6f8ee901f6fad52c55bc075fecfee8a1e2b\": rpc error: code = NotFound desc = could not find container \"8abf9972f76dc66b192de3381a78f6f8ee901f6fad52c55bc075fecfee8a1e2b\": container with ID starting with 8abf9972f76dc66b192de3381a78f6f8ee901f6fad52c55bc075fecfee8a1e2b not found: ID does not exist" Jan 23 17:53:32 crc kubenswrapper[4606]: I0123 17:53:32.376128 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da831932-52c8-4527-9cf3-3094bdb2dd76" path="/var/lib/kubelet/pods/da831932-52c8-4527-9cf3-3094bdb2dd76/volumes" Jan 23 17:53:48 crc kubenswrapper[4606]: I0123 17:53:48.205645 4606 scope.go:117] "RemoveContainer" containerID="1864fe57d3666f8d9c75c8c3b6b9ea6c86c9baae4c3460edd7e09deae64ec27f" Jan 23 17:53:48 crc kubenswrapper[4606]: I0123 17:53:48.256351 4606 scope.go:117] "RemoveContainer" 
containerID="1ec0cb677c8806ef73ac2bdd5234a49e0647acd2539f356ea5e7ce2c0c77cdfe" Jan 23 17:53:48 crc kubenswrapper[4606]: I0123 17:53:48.308662 4606 scope.go:117] "RemoveContainer" containerID="09990c44e49fdbdd9676de4badfce9497b06d6b716a3065b16ff4a022df42cca" Jan 23 17:55:59 crc kubenswrapper[4606]: I0123 17:55:59.934763 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:55:59 crc kubenswrapper[4606]: I0123 17:55:59.935420 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:56:29 crc kubenswrapper[4606]: I0123 17:56:29.937727 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:56:29 crc kubenswrapper[4606]: I0123 17:56:29.938192 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:56:59 crc kubenswrapper[4606]: I0123 17:56:59.934345 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 17:56:59 crc kubenswrapper[4606]: I0123 17:56:59.934940 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 17:56:59 crc kubenswrapper[4606]: I0123 17:56:59.934989 4606 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" Jan 23 17:56:59 crc kubenswrapper[4606]: I0123 17:56:59.935843 4606 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"50df6cf30909fa2c692b021189d19939b1278bfa82d44ec68856cc0645bc1125"} pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 17:56:59 crc kubenswrapper[4606]: I0123 17:56:59.936106 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" containerID="cri-o://50df6cf30909fa2c692b021189d19939b1278bfa82d44ec68856cc0645bc1125" gracePeriod=600 Jan 23 17:57:00 crc 
kubenswrapper[4606]: E0123 17:57:00.074452 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:57:00 crc kubenswrapper[4606]: I0123 17:57:00.405256 4606 generic.go:334] "Generic (PLEG): container finished" podID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerID="50df6cf30909fa2c692b021189d19939b1278bfa82d44ec68856cc0645bc1125" exitCode=0 Jan 23 17:57:00 crc kubenswrapper[4606]: I0123 17:57:00.405327 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerDied","Data":"50df6cf30909fa2c692b021189d19939b1278bfa82d44ec68856cc0645bc1125"} Jan 23 17:57:00 crc kubenswrapper[4606]: I0123 17:57:00.405385 4606 scope.go:117] "RemoveContainer" containerID="bbf8c33a355ea39d0592857bee43fdbd252655274b417873c23d637ab24a621a" Jan 23 17:57:00 crc kubenswrapper[4606]: I0123 17:57:00.406191 4606 scope.go:117] "RemoveContainer" containerID="50df6cf30909fa2c692b021189d19939b1278bfa82d44ec68856cc0645bc1125" Jan 23 17:57:00 crc kubenswrapper[4606]: E0123 17:57:00.406561 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:57:15 crc kubenswrapper[4606]: I0123 17:57:15.365405 4606 scope.go:117] "RemoveContainer" containerID="50df6cf30909fa2c692b021189d19939b1278bfa82d44ec68856cc0645bc1125" Jan 23 17:57:15 crc kubenswrapper[4606]: E0123 17:57:15.367808 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:57:28 crc kubenswrapper[4606]: I0123 17:57:28.366320 4606 scope.go:117] "RemoveContainer" containerID="50df6cf30909fa2c692b021189d19939b1278bfa82d44ec68856cc0645bc1125" Jan 23 17:57:28 crc kubenswrapper[4606]: E0123 17:57:28.367026 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:57:43 crc kubenswrapper[4606]: I0123 17:57:43.365872 4606 scope.go:117] "RemoveContainer" containerID="50df6cf30909fa2c692b021189d19939b1278bfa82d44ec68856cc0645bc1125" Jan 23 17:57:43 crc kubenswrapper[4606]: E0123 17:57:43.366799 4606 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:57:54 crc kubenswrapper[4606]: I0123 17:57:54.366094 4606 scope.go:117] "RemoveContainer" containerID="50df6cf30909fa2c692b021189d19939b1278bfa82d44ec68856cc0645bc1125" Jan 23 17:57:54 crc kubenswrapper[4606]: E0123 17:57:54.366750 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:58:08 crc kubenswrapper[4606]: I0123 17:58:08.367663 4606 scope.go:117] "RemoveContainer" containerID="50df6cf30909fa2c692b021189d19939b1278bfa82d44ec68856cc0645bc1125" Jan 23 17:58:08 crc kubenswrapper[4606]: E0123 17:58:08.368293 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:58:19 crc kubenswrapper[4606]: I0123 17:58:19.366767 4606 scope.go:117] "RemoveContainer" containerID="50df6cf30909fa2c692b021189d19939b1278bfa82d44ec68856cc0645bc1125" Jan 23 17:58:19 crc kubenswrapper[4606]: E0123 17:58:19.367705 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:58:32 crc kubenswrapper[4606]: I0123 17:58:32.366557 4606 scope.go:117] "RemoveContainer" containerID="50df6cf30909fa2c692b021189d19939b1278bfa82d44ec68856cc0645bc1125" Jan 23 17:58:32 crc kubenswrapper[4606]: E0123 17:58:32.368463 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:58:45 crc kubenswrapper[4606]: I0123 17:58:45.366861 4606 scope.go:117] "RemoveContainer" containerID="50df6cf30909fa2c692b021189d19939b1278bfa82d44ec68856cc0645bc1125" Jan 23 17:58:45 crc kubenswrapper[4606]: E0123 17:58:45.368396 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:59:00 crc kubenswrapper[4606]: I0123 17:59:00.365872 4606 scope.go:117] "RemoveContainer" containerID="50df6cf30909fa2c692b021189d19939b1278bfa82d44ec68856cc0645bc1125" Jan 23 17:59:00 crc kubenswrapper[4606]: E0123 17:59:00.366761 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:59:04 crc kubenswrapper[4606]: I0123 17:59:04.019025 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xf7hm"] Jan 23 17:59:04 crc kubenswrapper[4606]: E0123 17:59:04.020164 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da831932-52c8-4527-9cf3-3094bdb2dd76" containerName="extract-utilities" Jan 23 17:59:04 crc kubenswrapper[4606]: I0123 17:59:04.020187 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="da831932-52c8-4527-9cf3-3094bdb2dd76" containerName="extract-utilities" Jan 23 17:59:04 crc kubenswrapper[4606]: E0123 17:59:04.020208 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da831932-52c8-4527-9cf3-3094bdb2dd76" containerName="extract-content" Jan 23 17:59:04 crc kubenswrapper[4606]: I0123 17:59:04.020218 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="da831932-52c8-4527-9cf3-3094bdb2dd76" containerName="extract-content" Jan 23 17:59:04 crc kubenswrapper[4606]: E0123 17:59:04.020235 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da831932-52c8-4527-9cf3-3094bdb2dd76" containerName="registry-server" Jan 23 17:59:04 crc kubenswrapper[4606]: I0123 17:59:04.020248 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="da831932-52c8-4527-9cf3-3094bdb2dd76" containerName="registry-server" Jan 23 17:59:04 crc kubenswrapper[4606]: I0123 17:59:04.020567 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="da831932-52c8-4527-9cf3-3094bdb2dd76" containerName="registry-server" Jan 23 17:59:04 crc kubenswrapper[4606]: I0123 17:59:04.022382 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xf7hm" Jan 23 17:59:04 crc kubenswrapper[4606]: I0123 17:59:04.035050 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xf7hm"] Jan 23 17:59:04 crc kubenswrapper[4606]: I0123 17:59:04.154714 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7bc2\" (UniqueName: \"kubernetes.io/projected/92b4974b-9394-42bb-9b23-884cf089b398-kube-api-access-c7bc2\") pod \"redhat-marketplace-xf7hm\" (UID: \"92b4974b-9394-42bb-9b23-884cf089b398\") " pod="openshift-marketplace/redhat-marketplace-xf7hm" Jan 23 17:59:04 crc kubenswrapper[4606]: I0123 17:59:04.155088 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92b4974b-9394-42bb-9b23-884cf089b398-catalog-content\") pod \"redhat-marketplace-xf7hm\" (UID: \"92b4974b-9394-42bb-9b23-884cf089b398\") " pod="openshift-marketplace/redhat-marketplace-xf7hm" Jan 23 17:59:04 crc kubenswrapper[4606]: I0123 17:59:04.155160 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92b4974b-9394-42bb-9b23-884cf089b398-utilities\") pod \"redhat-marketplace-xf7hm\" (UID: \"92b4974b-9394-42bb-9b23-884cf089b398\") " pod="openshift-marketplace/redhat-marketplace-xf7hm" Jan 23 17:59:04 crc kubenswrapper[4606]: I0123 17:59:04.257291 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92b4974b-9394-42bb-9b23-884cf089b398-catalog-content\") pod \"redhat-marketplace-xf7hm\" (UID: \"92b4974b-9394-42bb-9b23-884cf089b398\") " pod="openshift-marketplace/redhat-marketplace-xf7hm" Jan 23 17:59:04 crc kubenswrapper[4606]: I0123 17:59:04.257350 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92b4974b-9394-42bb-9b23-884cf089b398-utilities\") pod \"redhat-marketplace-xf7hm\" (UID: \"92b4974b-9394-42bb-9b23-884cf089b398\") " pod="openshift-marketplace/redhat-marketplace-xf7hm" Jan 23 17:59:04 crc kubenswrapper[4606]: I0123 17:59:04.257459 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7bc2\" (UniqueName: \"kubernetes.io/projected/92b4974b-9394-42bb-9b23-884cf089b398-kube-api-access-c7bc2\") pod \"redhat-marketplace-xf7hm\" (UID: \"92b4974b-9394-42bb-9b23-884cf089b398\") " pod="openshift-marketplace/redhat-marketplace-xf7hm" Jan 23 17:59:04 crc kubenswrapper[4606]: I0123 17:59:04.257805 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92b4974b-9394-42bb-9b23-884cf089b398-catalog-content\") pod \"redhat-marketplace-xf7hm\" (UID: \"92b4974b-9394-42bb-9b23-884cf089b398\") " pod="openshift-marketplace/redhat-marketplace-xf7hm" Jan 23 17:59:04 crc kubenswrapper[4606]: I0123 17:59:04.257992 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92b4974b-9394-42bb-9b23-884cf089b398-utilities\") pod \"redhat-marketplace-xf7hm\" (UID: \"92b4974b-9394-42bb-9b23-884cf089b398\") " pod="openshift-marketplace/redhat-marketplace-xf7hm" Jan 23 17:59:04 crc kubenswrapper[4606]: I0123 17:59:04.283312 4606 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-c7bc2\" (UniqueName: \"kubernetes.io/projected/92b4974b-9394-42bb-9b23-884cf089b398-kube-api-access-c7bc2\") pod \"redhat-marketplace-xf7hm\" (UID: \"92b4974b-9394-42bb-9b23-884cf089b398\") " pod="openshift-marketplace/redhat-marketplace-xf7hm" Jan 23 17:59:04 crc kubenswrapper[4606]: I0123 17:59:04.363850 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xf7hm" Jan 23 17:59:04 crc kubenswrapper[4606]: I0123 17:59:04.928314 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xf7hm"] Jan 23 17:59:05 crc kubenswrapper[4606]: I0123 17:59:05.614185 4606 generic.go:334] "Generic (PLEG): container finished" podID="92b4974b-9394-42bb-9b23-884cf089b398" containerID="b45e4207cf1e5ccacb5db0a4c95268462594491d29b4100b186f8182d9edb3c5" exitCode=0 Jan 23 17:59:05 crc kubenswrapper[4606]: I0123 17:59:05.614293 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xf7hm" event={"ID":"92b4974b-9394-42bb-9b23-884cf089b398","Type":"ContainerDied","Data":"b45e4207cf1e5ccacb5db0a4c95268462594491d29b4100b186f8182d9edb3c5"} Jan 23 17:59:05 crc kubenswrapper[4606]: I0123 17:59:05.614426 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xf7hm" event={"ID":"92b4974b-9394-42bb-9b23-884cf089b398","Type":"ContainerStarted","Data":"366e15da7a2986aeefa7c37542ce9e272d9533159d1bf6174241c5446785b3ea"} Jan 23 17:59:05 crc kubenswrapper[4606]: I0123 17:59:05.617444 4606 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 17:59:07 crc kubenswrapper[4606]: I0123 17:59:07.634253 4606 generic.go:334] "Generic (PLEG): container finished" podID="92b4974b-9394-42bb-9b23-884cf089b398" containerID="57434c01a544ba951b3d3067df03cdffb5762bfb3b129d76443e6b5509e09634" exitCode=0 Jan 23 17:59:07 crc kubenswrapper[4606]: I0123 17:59:07.634857 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xf7hm" event={"ID":"92b4974b-9394-42bb-9b23-884cf089b398","Type":"ContainerDied","Data":"57434c01a544ba951b3d3067df03cdffb5762bfb3b129d76443e6b5509e09634"} Jan 23 17:59:08 crc kubenswrapper[4606]: I0123 17:59:08.646511 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xf7hm" event={"ID":"92b4974b-9394-42bb-9b23-884cf089b398","Type":"ContainerStarted","Data":"33ae59b4bab29e4fe23b902592a578cd1ad9424282b000bf4c16eef26def95f3"} Jan 23 17:59:08 crc kubenswrapper[4606]: I0123 17:59:08.678089 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xf7hm" podStartSLOduration=3.183489206 podStartE2EDuration="5.678069807s" podCreationTimestamp="2026-01-23 17:59:03 +0000 UTC" firstStartedPulling="2026-01-23 17:59:05.617226604 +0000 UTC m=+3979.617419010" lastFinishedPulling="2026-01-23 17:59:08.111807205 +0000 UTC m=+3982.111999611" observedRunningTime="2026-01-23 17:59:08.669169091 +0000 UTC m=+3982.669361497" watchObservedRunningTime="2026-01-23 17:59:08.678069807 +0000 UTC m=+3982.678262213" Jan 23 17:59:12 crc kubenswrapper[4606]: I0123 17:59:12.366441 4606 scope.go:117] "RemoveContainer" containerID="50df6cf30909fa2c692b021189d19939b1278bfa82d44ec68856cc0645bc1125" Jan 23 17:59:12 crc kubenswrapper[4606]: E0123 17:59:12.367353 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:59:14 crc kubenswrapper[4606]: I0123 17:59:14.365265 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xf7hm" Jan 23 17:59:14 crc kubenswrapper[4606]: I0123 17:59:14.367510 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xf7hm" Jan 23 17:59:14 crc kubenswrapper[4606]: I0123 17:59:14.449960 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xf7hm" Jan 23 17:59:14 crc kubenswrapper[4606]: I0123 17:59:14.757765 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xf7hm" Jan 23 17:59:15 crc kubenswrapper[4606]: I0123 17:59:15.766715 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xf7hm"] Jan 23 17:59:17 crc kubenswrapper[4606]: I0123 17:59:17.736347 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xf7hm" podUID="92b4974b-9394-42bb-9b23-884cf089b398" containerName="registry-server" containerID="cri-o://33ae59b4bab29e4fe23b902592a578cd1ad9424282b000bf4c16eef26def95f3" gracePeriod=2 Jan 23 17:59:18 crc kubenswrapper[4606]: I0123 17:59:18.361189 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xf7hm" Jan 23 17:59:18 crc kubenswrapper[4606]: I0123 17:59:18.439732 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92b4974b-9394-42bb-9b23-884cf089b398-utilities\") pod \"92b4974b-9394-42bb-9b23-884cf089b398\" (UID: \"92b4974b-9394-42bb-9b23-884cf089b398\") " Jan 23 17:59:18 crc kubenswrapper[4606]: I0123 17:59:18.439928 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7bc2\" (UniqueName: \"kubernetes.io/projected/92b4974b-9394-42bb-9b23-884cf089b398-kube-api-access-c7bc2\") pod \"92b4974b-9394-42bb-9b23-884cf089b398\" (UID: \"92b4974b-9394-42bb-9b23-884cf089b398\") " Jan 23 17:59:18 crc kubenswrapper[4606]: I0123 17:59:18.440034 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92b4974b-9394-42bb-9b23-884cf089b398-catalog-content\") pod \"92b4974b-9394-42bb-9b23-884cf089b398\" (UID: \"92b4974b-9394-42bb-9b23-884cf089b398\") " Jan 23 17:59:18 crc kubenswrapper[4606]: I0123 17:59:18.441074 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92b4974b-9394-42bb-9b23-884cf089b398-utilities" (OuterVolumeSpecName: "utilities") pod "92b4974b-9394-42bb-9b23-884cf089b398" (UID: "92b4974b-9394-42bb-9b23-884cf089b398"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:59:18 crc kubenswrapper[4606]: I0123 17:59:18.462721 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92b4974b-9394-42bb-9b23-884cf089b398-kube-api-access-c7bc2" (OuterVolumeSpecName: "kube-api-access-c7bc2") pod "92b4974b-9394-42bb-9b23-884cf089b398" (UID: "92b4974b-9394-42bb-9b23-884cf089b398"). InnerVolumeSpecName "kube-api-access-c7bc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 17:59:18 crc kubenswrapper[4606]: I0123 17:59:18.466959 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92b4974b-9394-42bb-9b23-884cf089b398-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "92b4974b-9394-42bb-9b23-884cf089b398" (UID: "92b4974b-9394-42bb-9b23-884cf089b398"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 17:59:18 crc kubenswrapper[4606]: I0123 17:59:18.542131 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7bc2\" (UniqueName: \"kubernetes.io/projected/92b4974b-9394-42bb-9b23-884cf089b398-kube-api-access-c7bc2\") on node \"crc\" DevicePath \"\"" Jan 23 17:59:18 crc kubenswrapper[4606]: I0123 17:59:18.542162 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92b4974b-9394-42bb-9b23-884cf089b398-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 17:59:18 crc kubenswrapper[4606]: I0123 17:59:18.542173 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92b4974b-9394-42bb-9b23-884cf089b398-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 17:59:18 crc kubenswrapper[4606]: I0123 17:59:18.749276 4606 generic.go:334] "Generic (PLEG): container finished" podID="92b4974b-9394-42bb-9b23-884cf089b398" containerID="33ae59b4bab29e4fe23b902592a578cd1ad9424282b000bf4c16eef26def95f3" exitCode=0 Jan 23 17:59:18 crc kubenswrapper[4606]: I0123 17:59:18.749353 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xf7hm" Jan 23 17:59:18 crc kubenswrapper[4606]: I0123 17:59:18.749396 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xf7hm" event={"ID":"92b4974b-9394-42bb-9b23-884cf089b398","Type":"ContainerDied","Data":"33ae59b4bab29e4fe23b902592a578cd1ad9424282b000bf4c16eef26def95f3"} Jan 23 17:59:18 crc kubenswrapper[4606]: I0123 17:59:18.750931 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xf7hm" event={"ID":"92b4974b-9394-42bb-9b23-884cf089b398","Type":"ContainerDied","Data":"366e15da7a2986aeefa7c37542ce9e272d9533159d1bf6174241c5446785b3ea"} Jan 23 17:59:18 crc kubenswrapper[4606]: I0123 17:59:18.750957 4606 scope.go:117] "RemoveContainer" containerID="33ae59b4bab29e4fe23b902592a578cd1ad9424282b000bf4c16eef26def95f3" Jan 23 17:59:18 crc kubenswrapper[4606]: I0123 17:59:18.788987 4606 scope.go:117] "RemoveContainer" containerID="57434c01a544ba951b3d3067df03cdffb5762bfb3b129d76443e6b5509e09634" Jan 23 17:59:18 crc kubenswrapper[4606]: I0123 17:59:18.797174 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xf7hm"] Jan 23 17:59:18 crc kubenswrapper[4606]: I0123 17:59:18.805376 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xf7hm"] Jan 23 17:59:18 crc kubenswrapper[4606]: I0123 17:59:18.825068 4606 scope.go:117] "RemoveContainer" containerID="b45e4207cf1e5ccacb5db0a4c95268462594491d29b4100b186f8182d9edb3c5" Jan 23 17:59:18 crc kubenswrapper[4606]: I0123 17:59:18.899512 4606 scope.go:117] "RemoveContainer" containerID="33ae59b4bab29e4fe23b902592a578cd1ad9424282b000bf4c16eef26def95f3" Jan 23 17:59:18 crc kubenswrapper[4606]: E0123 17:59:18.901502 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33ae59b4bab29e4fe23b902592a578cd1ad9424282b000bf4c16eef26def95f3\": container with ID starting with 33ae59b4bab29e4fe23b902592a578cd1ad9424282b000bf4c16eef26def95f3 not found: ID does not exist" containerID="33ae59b4bab29e4fe23b902592a578cd1ad9424282b000bf4c16eef26def95f3" Jan 23 17:59:18 crc kubenswrapper[4606]: I0123 17:59:18.901554 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33ae59b4bab29e4fe23b902592a578cd1ad9424282b000bf4c16eef26def95f3"} err="failed to get container status \"33ae59b4bab29e4fe23b902592a578cd1ad9424282b000bf4c16eef26def95f3\": rpc error: code = NotFound desc = could not find container \"33ae59b4bab29e4fe23b902592a578cd1ad9424282b000bf4c16eef26def95f3\": container with ID starting with 33ae59b4bab29e4fe23b902592a578cd1ad9424282b000bf4c16eef26def95f3 not found: ID does not exist" Jan 23 17:59:18 crc kubenswrapper[4606]: I0123 17:59:18.901580 4606 scope.go:117] "RemoveContainer" containerID="57434c01a544ba951b3d3067df03cdffb5762bfb3b129d76443e6b5509e09634" Jan 23 17:59:18 crc kubenswrapper[4606]: E0123 17:59:18.901952 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57434c01a544ba951b3d3067df03cdffb5762bfb3b129d76443e6b5509e09634\": container with ID starting with 57434c01a544ba951b3d3067df03cdffb5762bfb3b129d76443e6b5509e09634 not found: ID does not exist" containerID="57434c01a544ba951b3d3067df03cdffb5762bfb3b129d76443e6b5509e09634" Jan 23 17:59:18 crc kubenswrapper[4606]: I0123 17:59:18.901993 4606 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57434c01a544ba951b3d3067df03cdffb5762bfb3b129d76443e6b5509e09634"} err="failed to get container status \"57434c01a544ba951b3d3067df03cdffb5762bfb3b129d76443e6b5509e09634\": rpc error: code = NotFound desc = could not find container \"57434c01a544ba951b3d3067df03cdffb5762bfb3b129d76443e6b5509e09634\": container with ID starting with 57434c01a544ba951b3d3067df03cdffb5762bfb3b129d76443e6b5509e09634 not found: ID does not exist" Jan 23 17:59:18 crc kubenswrapper[4606]: I0123 17:59:18.902020 4606 scope.go:117] "RemoveContainer" containerID="b45e4207cf1e5ccacb5db0a4c95268462594491d29b4100b186f8182d9edb3c5" Jan 23 17:59:18 crc kubenswrapper[4606]: E0123 17:59:18.902537 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b45e4207cf1e5ccacb5db0a4c95268462594491d29b4100b186f8182d9edb3c5\": container with ID starting with b45e4207cf1e5ccacb5db0a4c95268462594491d29b4100b186f8182d9edb3c5 not found: ID does not exist" containerID="b45e4207cf1e5ccacb5db0a4c95268462594491d29b4100b186f8182d9edb3c5" Jan 23 17:59:18 crc kubenswrapper[4606]: I0123 17:59:18.902573 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b45e4207cf1e5ccacb5db0a4c95268462594491d29b4100b186f8182d9edb3c5"} err="failed to get container status \"b45e4207cf1e5ccacb5db0a4c95268462594491d29b4100b186f8182d9edb3c5\": rpc error: code = NotFound desc = could not find container \"b45e4207cf1e5ccacb5db0a4c95268462594491d29b4100b186f8182d9edb3c5\": container with ID starting with b45e4207cf1e5ccacb5db0a4c95268462594491d29b4100b186f8182d9edb3c5 not found: ID does not exist" Jan 23 17:59:20 crc kubenswrapper[4606]: I0123 17:59:20.376428 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92b4974b-9394-42bb-9b23-884cf089b398" path="/var/lib/kubelet/pods/92b4974b-9394-42bb-9b23-884cf089b398/volumes" Jan 23 17:59:24 crc kubenswrapper[4606]: I0123 17:59:24.368295 4606 scope.go:117] "RemoveContainer" containerID="50df6cf30909fa2c692b021189d19939b1278bfa82d44ec68856cc0645bc1125" Jan 23 17:59:24 crc kubenswrapper[4606]: E0123 17:59:24.368734 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:59:39 crc kubenswrapper[4606]: I0123 17:59:39.366344 4606 scope.go:117] "RemoveContainer" containerID="50df6cf30909fa2c692b021189d19939b1278bfa82d44ec68856cc0645bc1125" Jan 23 17:59:39 crc kubenswrapper[4606]: E0123 17:59:39.367182 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 17:59:52 crc kubenswrapper[4606]: I0123 17:59:52.366129 4606 scope.go:117] "RemoveContainer" containerID="50df6cf30909fa2c692b021189d19939b1278bfa82d44ec68856cc0645bc1125" Jan 23 17:59:52 crc 
kubenswrapper[4606]: E0123 17:59:52.367890 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 18:00:00 crc kubenswrapper[4606]: I0123 18:00:00.155715 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486520-gz8bz"] Jan 23 18:00:00 crc kubenswrapper[4606]: E0123 18:00:00.158205 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92b4974b-9394-42bb-9b23-884cf089b398" containerName="extract-content" Jan 23 18:00:00 crc kubenswrapper[4606]: I0123 18:00:00.158235 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="92b4974b-9394-42bb-9b23-884cf089b398" containerName="extract-content" Jan 23 18:00:00 crc kubenswrapper[4606]: E0123 18:00:00.158266 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92b4974b-9394-42bb-9b23-884cf089b398" containerName="registry-server" Jan 23 18:00:00 crc kubenswrapper[4606]: I0123 18:00:00.158275 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="92b4974b-9394-42bb-9b23-884cf089b398" containerName="registry-server" Jan 23 18:00:00 crc kubenswrapper[4606]: E0123 18:00:00.158306 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92b4974b-9394-42bb-9b23-884cf089b398" containerName="extract-utilities" Jan 23 18:00:00 crc kubenswrapper[4606]: I0123 18:00:00.158316 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="92b4974b-9394-42bb-9b23-884cf089b398" containerName="extract-utilities" Jan 23 18:00:00 crc kubenswrapper[4606]: I0123 18:00:00.158684 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="92b4974b-9394-42bb-9b23-884cf089b398" containerName="registry-server" Jan 23 18:00:00 crc kubenswrapper[4606]: I0123 18:00:00.159447 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-gz8bz" Jan 23 18:00:00 crc kubenswrapper[4606]: I0123 18:00:00.165547 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 18:00:00 crc kubenswrapper[4606]: I0123 18:00:00.165600 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 18:00:00 crc kubenswrapper[4606]: I0123 18:00:00.171814 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486520-gz8bz"] Jan 23 18:00:00 crc kubenswrapper[4606]: I0123 18:00:00.308552 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwll4\" (UniqueName: \"kubernetes.io/projected/b7a82142-f0bd-4dd7-84fc-2df975752aad-kube-api-access-mwll4\") pod \"collect-profiles-29486520-gz8bz\" (UID: \"b7a82142-f0bd-4dd7-84fc-2df975752aad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-gz8bz" Jan 23 18:00:00 crc kubenswrapper[4606]: I0123 18:00:00.308936 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7a82142-f0bd-4dd7-84fc-2df975752aad-secret-volume\") pod \"collect-profiles-29486520-gz8bz\" (UID: \"b7a82142-f0bd-4dd7-84fc-2df975752aad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-gz8bz" Jan 23 18:00:00 crc kubenswrapper[4606]: I0123 18:00:00.308972 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7a82142-f0bd-4dd7-84fc-2df975752aad-config-volume\") pod \"collect-profiles-29486520-gz8bz\" (UID: \"b7a82142-f0bd-4dd7-84fc-2df975752aad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-gz8bz" Jan 23 18:00:00 crc kubenswrapper[4606]: I0123 18:00:00.410909 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwll4\" (UniqueName: \"kubernetes.io/projected/b7a82142-f0bd-4dd7-84fc-2df975752aad-kube-api-access-mwll4\") pod \"collect-profiles-29486520-gz8bz\" (UID: \"b7a82142-f0bd-4dd7-84fc-2df975752aad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-gz8bz" Jan 23 18:00:00 crc kubenswrapper[4606]: I0123 18:00:00.411022 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7a82142-f0bd-4dd7-84fc-2df975752aad-secret-volume\") pod \"collect-profiles-29486520-gz8bz\" (UID: \"b7a82142-f0bd-4dd7-84fc-2df975752aad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-gz8bz" Jan 23 18:00:00 crc kubenswrapper[4606]: I0123 18:00:00.411074 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7a82142-f0bd-4dd7-84fc-2df975752aad-config-volume\") pod \"collect-profiles-29486520-gz8bz\" (UID: \"b7a82142-f0bd-4dd7-84fc-2df975752aad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-gz8bz" Jan 23 18:00:00 crc kubenswrapper[4606]: I0123 18:00:00.412015 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7a82142-f0bd-4dd7-84fc-2df975752aad-config-volume\") pod 
\"collect-profiles-29486520-gz8bz\" (UID: \"b7a82142-f0bd-4dd7-84fc-2df975752aad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-gz8bz" Jan 23 18:00:00 crc kubenswrapper[4606]: I0123 18:00:00.418101 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7a82142-f0bd-4dd7-84fc-2df975752aad-secret-volume\") pod \"collect-profiles-29486520-gz8bz\" (UID: \"b7a82142-f0bd-4dd7-84fc-2df975752aad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-gz8bz" Jan 23 18:00:00 crc kubenswrapper[4606]: I0123 18:00:00.431494 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwll4\" (UniqueName: \"kubernetes.io/projected/b7a82142-f0bd-4dd7-84fc-2df975752aad-kube-api-access-mwll4\") pod \"collect-profiles-29486520-gz8bz\" (UID: \"b7a82142-f0bd-4dd7-84fc-2df975752aad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-gz8bz" Jan 23 18:00:00 crc kubenswrapper[4606]: I0123 18:00:00.491033 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-gz8bz" Jan 23 18:00:00 crc kubenswrapper[4606]: I0123 18:00:00.938032 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486520-gz8bz"] Jan 23 18:00:01 crc kubenswrapper[4606]: I0123 18:00:01.171219 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-gz8bz" event={"ID":"b7a82142-f0bd-4dd7-84fc-2df975752aad","Type":"ContainerStarted","Data":"300ba3ba5c505968d1e73a42716a3cbf37ce4b7bdaa203f712797ec7ca9ed310"} Jan 23 18:00:01 crc kubenswrapper[4606]: I0123 18:00:01.171504 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-gz8bz" event={"ID":"b7a82142-f0bd-4dd7-84fc-2df975752aad","Type":"ContainerStarted","Data":"834d808b0e812805f7ce859063316526575a4d1dc4c2730993b3c05275f873ef"} Jan 23 18:00:01 crc kubenswrapper[4606]: I0123 18:00:01.190377 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-gz8bz" podStartSLOduration=1.190360324 podStartE2EDuration="1.190360324s" podCreationTimestamp="2026-01-23 18:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:00:01.185487295 +0000 UTC m=+4035.185679701" watchObservedRunningTime="2026-01-23 18:00:01.190360324 +0000 UTC m=+4035.190552730" Jan 23 18:00:02 crc kubenswrapper[4606]: I0123 18:00:02.182432 4606 generic.go:334] "Generic (PLEG): container finished" podID="b7a82142-f0bd-4dd7-84fc-2df975752aad" containerID="300ba3ba5c505968d1e73a42716a3cbf37ce4b7bdaa203f712797ec7ca9ed310" exitCode=0 Jan 23 18:00:02 crc kubenswrapper[4606]: I0123 18:00:02.182639 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-gz8bz" event={"ID":"b7a82142-f0bd-4dd7-84fc-2df975752aad","Type":"ContainerDied","Data":"300ba3ba5c505968d1e73a42716a3cbf37ce4b7bdaa203f712797ec7ca9ed310"} Jan 23 18:00:03 crc kubenswrapper[4606]: I0123 18:00:03.708158 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-gz8bz" Jan 23 18:00:03 crc kubenswrapper[4606]: I0123 18:00:03.881098 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwll4\" (UniqueName: \"kubernetes.io/projected/b7a82142-f0bd-4dd7-84fc-2df975752aad-kube-api-access-mwll4\") pod \"b7a82142-f0bd-4dd7-84fc-2df975752aad\" (UID: \"b7a82142-f0bd-4dd7-84fc-2df975752aad\") " Jan 23 18:00:03 crc kubenswrapper[4606]: I0123 18:00:03.881289 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7a82142-f0bd-4dd7-84fc-2df975752aad-secret-volume\") pod \"b7a82142-f0bd-4dd7-84fc-2df975752aad\" (UID: \"b7a82142-f0bd-4dd7-84fc-2df975752aad\") " Jan 23 18:00:03 crc kubenswrapper[4606]: I0123 18:00:03.881330 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7a82142-f0bd-4dd7-84fc-2df975752aad-config-volume\") pod \"b7a82142-f0bd-4dd7-84fc-2df975752aad\" (UID: \"b7a82142-f0bd-4dd7-84fc-2df975752aad\") " Jan 23 18:00:03 crc kubenswrapper[4606]: I0123 18:00:03.882346 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7a82142-f0bd-4dd7-84fc-2df975752aad-config-volume" (OuterVolumeSpecName: "config-volume") pod "b7a82142-f0bd-4dd7-84fc-2df975752aad" (UID: "b7a82142-f0bd-4dd7-84fc-2df975752aad"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:00:03 crc kubenswrapper[4606]: I0123 18:00:03.888863 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7a82142-f0bd-4dd7-84fc-2df975752aad-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b7a82142-f0bd-4dd7-84fc-2df975752aad" (UID: "b7a82142-f0bd-4dd7-84fc-2df975752aad"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:00:03 crc kubenswrapper[4606]: I0123 18:00:03.890390 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7a82142-f0bd-4dd7-84fc-2df975752aad-kube-api-access-mwll4" (OuterVolumeSpecName: "kube-api-access-mwll4") pod "b7a82142-f0bd-4dd7-84fc-2df975752aad" (UID: "b7a82142-f0bd-4dd7-84fc-2df975752aad"). InnerVolumeSpecName "kube-api-access-mwll4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:00:03 crc kubenswrapper[4606]: I0123 18:00:03.984676 4606 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7a82142-f0bd-4dd7-84fc-2df975752aad-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 18:00:03 crc kubenswrapper[4606]: I0123 18:00:03.984742 4606 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7a82142-f0bd-4dd7-84fc-2df975752aad-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 18:00:03 crc kubenswrapper[4606]: I0123 18:00:03.984758 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwll4\" (UniqueName: \"kubernetes.io/projected/b7a82142-f0bd-4dd7-84fc-2df975752aad-kube-api-access-mwll4\") on node \"crc\" DevicePath \"\"" Jan 23 18:00:04 crc kubenswrapper[4606]: I0123 18:00:04.200814 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-gz8bz" event={"ID":"b7a82142-f0bd-4dd7-84fc-2df975752aad","Type":"ContainerDied","Data":"834d808b0e812805f7ce859063316526575a4d1dc4c2730993b3c05275f873ef"} Jan 23 18:00:04 crc kubenswrapper[4606]: I0123 18:00:04.200853 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="834d808b0e812805f7ce859063316526575a4d1dc4c2730993b3c05275f873ef" Jan 23 18:00:04 crc kubenswrapper[4606]: I0123 18:00:04.200894 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-gz8bz" Jan 23 18:00:04 crc kubenswrapper[4606]: I0123 18:00:04.272904 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486475-859tv"] Jan 23 18:00:04 crc kubenswrapper[4606]: I0123 18:00:04.283400 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486475-859tv"] Jan 23 18:00:04 crc kubenswrapper[4606]: I0123 18:00:04.379280 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fd7093e-6417-481c-886a-aeaa72d6e538" path="/var/lib/kubelet/pods/2fd7093e-6417-481c-886a-aeaa72d6e538/volumes" Jan 23 18:00:07 crc kubenswrapper[4606]: I0123 18:00:07.365675 4606 scope.go:117] "RemoveContainer" containerID="50df6cf30909fa2c692b021189d19939b1278bfa82d44ec68856cc0645bc1125" Jan 23 18:00:07 crc kubenswrapper[4606]: E0123 18:00:07.366430 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 18:00:19 crc kubenswrapper[4606]: I0123 18:00:19.366316 4606 scope.go:117] "RemoveContainer" containerID="50df6cf30909fa2c692b021189d19939b1278bfa82d44ec68856cc0645bc1125" Jan 23 18:00:19 crc kubenswrapper[4606]: E0123 18:00:19.367122 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 18:00:33 crc kubenswrapper[4606]: I0123 18:00:33.366254 4606 scope.go:117] "RemoveContainer" containerID="50df6cf30909fa2c692b021189d19939b1278bfa82d44ec68856cc0645bc1125" Jan 23 18:00:33 crc kubenswrapper[4606]: E0123 18:00:33.367022 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 18:00:46 crc kubenswrapper[4606]: I0123 18:00:46.371416 4606 scope.go:117] "RemoveContainer" containerID="50df6cf30909fa2c692b021189d19939b1278bfa82d44ec68856cc0645bc1125" Jan 23 18:00:46 crc kubenswrapper[4606]: E0123 18:00:46.372921 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 18:00:48 crc kubenswrapper[4606]: I0123 18:00:48.851750 4606 scope.go:117] "RemoveContainer" containerID="116d4aa9596f3d7e135b4a0104ce8c336f5cf63bbabd2126789e258b69d58f12" Jan 23 18:00:59 crc kubenswrapper[4606]: I0123 18:00:59.365941 4606 scope.go:117] "RemoveContainer" containerID="50df6cf30909fa2c692b021189d19939b1278bfa82d44ec68856cc0645bc1125" Jan 23 18:00:59 crc kubenswrapper[4606]: E0123 18:00:59.367066 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 18:01:00 crc kubenswrapper[4606]: I0123 18:01:00.166531 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29486521-pdkkp"] Jan 23 18:01:00 crc kubenswrapper[4606]: E0123 18:01:00.167249 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7a82142-f0bd-4dd7-84fc-2df975752aad" containerName="collect-profiles" Jan 23 18:01:00 crc kubenswrapper[4606]: I0123 18:01:00.167277 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7a82142-f0bd-4dd7-84fc-2df975752aad" containerName="collect-profiles" Jan 23 18:01:00 crc kubenswrapper[4606]: I0123 18:01:00.167552 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7a82142-f0bd-4dd7-84fc-2df975752aad" containerName="collect-profiles" Jan 23 18:01:00 crc kubenswrapper[4606]: I0123 18:01:00.169513 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29486521-pdkkp" Jan 23 18:01:00 crc kubenswrapper[4606]: I0123 18:01:00.187045 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29486521-pdkkp"] Jan 23 18:01:00 crc kubenswrapper[4606]: I0123 18:01:00.250220 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/15d6fa54-20e5-469e-a9c9-cbd2352ca508-fernet-keys\") pod \"keystone-cron-29486521-pdkkp\" (UID: \"15d6fa54-20e5-469e-a9c9-cbd2352ca508\") " pod="openstack/keystone-cron-29486521-pdkkp" Jan 23 18:01:00 crc kubenswrapper[4606]: I0123 18:01:00.250399 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15d6fa54-20e5-469e-a9c9-cbd2352ca508-config-data\") pod \"keystone-cron-29486521-pdkkp\" (UID: \"15d6fa54-20e5-469e-a9c9-cbd2352ca508\") " pod="openstack/keystone-cron-29486521-pdkkp" Jan 23 18:01:00 crc kubenswrapper[4606]: I0123 18:01:00.250879 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs7mr\" (UniqueName: \"kubernetes.io/projected/15d6fa54-20e5-469e-a9c9-cbd2352ca508-kube-api-access-hs7mr\") pod \"keystone-cron-29486521-pdkkp\" (UID: \"15d6fa54-20e5-469e-a9c9-cbd2352ca508\") " pod="openstack/keystone-cron-29486521-pdkkp" Jan 23 18:01:00 crc kubenswrapper[4606]: I0123 18:01:00.251054 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15d6fa54-20e5-469e-a9c9-cbd2352ca508-combined-ca-bundle\") pod \"keystone-cron-29486521-pdkkp\" (UID: \"15d6fa54-20e5-469e-a9c9-cbd2352ca508\") " pod="openstack/keystone-cron-29486521-pdkkp" Jan 23 18:01:00 crc kubenswrapper[4606]: I0123 18:01:00.354132 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15d6fa54-20e5-469e-a9c9-cbd2352ca508-combined-ca-bundle\") pod \"keystone-cron-29486521-pdkkp\" (UID: \"15d6fa54-20e5-469e-a9c9-cbd2352ca508\") " pod="openstack/keystone-cron-29486521-pdkkp" Jan 23 18:01:00 crc kubenswrapper[4606]: I0123 18:01:00.354221 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/15d6fa54-20e5-469e-a9c9-cbd2352ca508-fernet-keys\") pod \"keystone-cron-29486521-pdkkp\" (UID: \"15d6fa54-20e5-469e-a9c9-cbd2352ca508\") " pod="openstack/keystone-cron-29486521-pdkkp" Jan 23 18:01:00 crc kubenswrapper[4606]: I0123 18:01:00.354283 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15d6fa54-20e5-469e-a9c9-cbd2352ca508-config-data\") pod \"keystone-cron-29486521-pdkkp\" (UID: \"15d6fa54-20e5-469e-a9c9-cbd2352ca508\") " pod="openstack/keystone-cron-29486521-pdkkp" Jan 23 18:01:00 crc kubenswrapper[4606]: I0123 18:01:00.354372 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs7mr\" (UniqueName: \"kubernetes.io/projected/15d6fa54-20e5-469e-a9c9-cbd2352ca508-kube-api-access-hs7mr\") pod \"keystone-cron-29486521-pdkkp\" (UID: \"15d6fa54-20e5-469e-a9c9-cbd2352ca508\") " pod="openstack/keystone-cron-29486521-pdkkp" Jan 23 18:01:00 crc kubenswrapper[4606]: I0123 18:01:00.360999 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15d6fa54-20e5-469e-a9c9-cbd2352ca508-config-data\") pod \"keystone-cron-29486521-pdkkp\" (UID: \"15d6fa54-20e5-469e-a9c9-cbd2352ca508\") " pod="openstack/keystone-cron-29486521-pdkkp" Jan 23 18:01:00 crc kubenswrapper[4606]: I0123 18:01:00.361368 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/15d6fa54-20e5-469e-a9c9-cbd2352ca508-fernet-keys\") pod \"keystone-cron-29486521-pdkkp\" (UID: \"15d6fa54-20e5-469e-a9c9-cbd2352ca508\") " pod="openstack/keystone-cron-29486521-pdkkp" Jan 23 18:01:00 crc kubenswrapper[4606]: I0123 18:01:00.363479 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15d6fa54-20e5-469e-a9c9-cbd2352ca508-combined-ca-bundle\") pod \"keystone-cron-29486521-pdkkp\" (UID: \"15d6fa54-20e5-469e-a9c9-cbd2352ca508\") " pod="openstack/keystone-cron-29486521-pdkkp" Jan 23 18:01:00 crc kubenswrapper[4606]: I0123 18:01:00.377844 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs7mr\" (UniqueName: \"kubernetes.io/projected/15d6fa54-20e5-469e-a9c9-cbd2352ca508-kube-api-access-hs7mr\") pod \"keystone-cron-29486521-pdkkp\" (UID: \"15d6fa54-20e5-469e-a9c9-cbd2352ca508\") " pod="openstack/keystone-cron-29486521-pdkkp" Jan 23 18:01:00 crc kubenswrapper[4606]: I0123 18:01:00.499124 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29486521-pdkkp" Jan 23 18:01:00 crc kubenswrapper[4606]: I0123 18:01:00.975037 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29486521-pdkkp"] Jan 23 18:01:00 crc kubenswrapper[4606]: W0123 18:01:00.989844 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15d6fa54_20e5_469e_a9c9_cbd2352ca508.slice/crio-6b1958cceddd5918a6a7f12317503a7a24fa4aecd25612700bdc19888cda5ea3 WatchSource:0}: Error finding container 6b1958cceddd5918a6a7f12317503a7a24fa4aecd25612700bdc19888cda5ea3: Status 404 returned error can't find the container with id 6b1958cceddd5918a6a7f12317503a7a24fa4aecd25612700bdc19888cda5ea3 Jan 23 18:01:01 crc kubenswrapper[4606]: I0123 18:01:01.722252 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486521-pdkkp" event={"ID":"15d6fa54-20e5-469e-a9c9-cbd2352ca508","Type":"ContainerStarted","Data":"25748fe5b6f38e6e18e004e5f25683f5329d0e58a2c7e1eeacc9c157bb404361"} Jan 23 18:01:01 crc kubenswrapper[4606]: I0123 18:01:01.722716 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486521-pdkkp" event={"ID":"15d6fa54-20e5-469e-a9c9-cbd2352ca508","Type":"ContainerStarted","Data":"6b1958cceddd5918a6a7f12317503a7a24fa4aecd25612700bdc19888cda5ea3"} Jan 23 18:01:01 crc kubenswrapper[4606]: I0123 18:01:01.765171 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29486521-pdkkp" podStartSLOduration=1.765149571 podStartE2EDuration="1.765149571s" podCreationTimestamp="2026-01-23 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:01:01.748407693 +0000 UTC m=+4095.748600109" watchObservedRunningTime="2026-01-23 18:01:01.765149571 +0000 UTC m=+4095.765341977" Jan 23 18:01:06 crc kubenswrapper[4606]: I0123 18:01:06.769039 4606 
generic.go:334] "Generic (PLEG): container finished" podID="15d6fa54-20e5-469e-a9c9-cbd2352ca508" containerID="25748fe5b6f38e6e18e004e5f25683f5329d0e58a2c7e1eeacc9c157bb404361" exitCode=0 Jan 23 18:01:06 crc kubenswrapper[4606]: I0123 18:01:06.769120 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486521-pdkkp" event={"ID":"15d6fa54-20e5-469e-a9c9-cbd2352ca508","Type":"ContainerDied","Data":"25748fe5b6f38e6e18e004e5f25683f5329d0e58a2c7e1eeacc9c157bb404361"} Jan 23 18:01:08 crc kubenswrapper[4606]: I0123 18:01:08.324553 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29486521-pdkkp" Jan 23 18:01:08 crc kubenswrapper[4606]: I0123 18:01:08.421267 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hs7mr\" (UniqueName: \"kubernetes.io/projected/15d6fa54-20e5-469e-a9c9-cbd2352ca508-kube-api-access-hs7mr\") pod \"15d6fa54-20e5-469e-a9c9-cbd2352ca508\" (UID: \"15d6fa54-20e5-469e-a9c9-cbd2352ca508\") " Jan 23 18:01:08 crc kubenswrapper[4606]: I0123 18:01:08.421323 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15d6fa54-20e5-469e-a9c9-cbd2352ca508-combined-ca-bundle\") pod \"15d6fa54-20e5-469e-a9c9-cbd2352ca508\" (UID: \"15d6fa54-20e5-469e-a9c9-cbd2352ca508\") " Jan 23 18:01:08 crc kubenswrapper[4606]: I0123 18:01:08.421392 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/15d6fa54-20e5-469e-a9c9-cbd2352ca508-fernet-keys\") pod \"15d6fa54-20e5-469e-a9c9-cbd2352ca508\" (UID: \"15d6fa54-20e5-469e-a9c9-cbd2352ca508\") " Jan 23 18:01:08 crc kubenswrapper[4606]: I0123 18:01:08.421489 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15d6fa54-20e5-469e-a9c9-cbd2352ca508-config-data\") pod \"15d6fa54-20e5-469e-a9c9-cbd2352ca508\" (UID: \"15d6fa54-20e5-469e-a9c9-cbd2352ca508\") " Jan 23 18:01:08 crc kubenswrapper[4606]: I0123 18:01:08.432904 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15d6fa54-20e5-469e-a9c9-cbd2352ca508-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "15d6fa54-20e5-469e-a9c9-cbd2352ca508" (UID: "15d6fa54-20e5-469e-a9c9-cbd2352ca508"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:08 crc kubenswrapper[4606]: I0123 18:01:08.433288 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15d6fa54-20e5-469e-a9c9-cbd2352ca508-kube-api-access-hs7mr" (OuterVolumeSpecName: "kube-api-access-hs7mr") pod "15d6fa54-20e5-469e-a9c9-cbd2352ca508" (UID: "15d6fa54-20e5-469e-a9c9-cbd2352ca508"). InnerVolumeSpecName "kube-api-access-hs7mr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:01:08 crc kubenswrapper[4606]: I0123 18:01:08.455690 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15d6fa54-20e5-469e-a9c9-cbd2352ca508-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15d6fa54-20e5-469e-a9c9-cbd2352ca508" (UID: "15d6fa54-20e5-469e-a9c9-cbd2352ca508"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:08 crc kubenswrapper[4606]: I0123 18:01:08.481247 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15d6fa54-20e5-469e-a9c9-cbd2352ca508-config-data" (OuterVolumeSpecName: "config-data") pod "15d6fa54-20e5-469e-a9c9-cbd2352ca508" (UID: "15d6fa54-20e5-469e-a9c9-cbd2352ca508"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:01:08 crc kubenswrapper[4606]: I0123 18:01:08.523710 4606 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/15d6fa54-20e5-469e-a9c9-cbd2352ca508-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:08 crc kubenswrapper[4606]: I0123 18:01:08.523748 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15d6fa54-20e5-469e-a9c9-cbd2352ca508-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:08 crc kubenswrapper[4606]: I0123 18:01:08.523761 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hs7mr\" (UniqueName: \"kubernetes.io/projected/15d6fa54-20e5-469e-a9c9-cbd2352ca508-kube-api-access-hs7mr\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:08 crc kubenswrapper[4606]: I0123 18:01:08.523771 4606 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15d6fa54-20e5-469e-a9c9-cbd2352ca508-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:01:08 crc kubenswrapper[4606]: I0123 18:01:08.795553 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486521-pdkkp" event={"ID":"15d6fa54-20e5-469e-a9c9-cbd2352ca508","Type":"ContainerDied","Data":"6b1958cceddd5918a6a7f12317503a7a24fa4aecd25612700bdc19888cda5ea3"} Jan 23 18:01:08 crc kubenswrapper[4606]: I0123 18:01:08.795635 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b1958cceddd5918a6a7f12317503a7a24fa4aecd25612700bdc19888cda5ea3" Jan 23 18:01:08 crc kubenswrapper[4606]: I0123 18:01:08.795698 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29486521-pdkkp" Jan 23 18:01:14 crc kubenswrapper[4606]: I0123 18:01:14.366390 4606 scope.go:117] "RemoveContainer" containerID="50df6cf30909fa2c692b021189d19939b1278bfa82d44ec68856cc0645bc1125" Jan 23 18:01:14 crc kubenswrapper[4606]: E0123 18:01:14.366958 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 18:01:25 crc kubenswrapper[4606]: I0123 18:01:25.367429 4606 scope.go:117] "RemoveContainer" containerID="50df6cf30909fa2c692b021189d19939b1278bfa82d44ec68856cc0645bc1125" Jan 23 18:01:25 crc kubenswrapper[4606]: E0123 18:01:25.368266 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 18:01:40 crc kubenswrapper[4606]: I0123 18:01:40.366320 4606 scope.go:117] "RemoveContainer" containerID="50df6cf30909fa2c692b021189d19939b1278bfa82d44ec68856cc0645bc1125" Jan 23 18:01:40 crc kubenswrapper[4606]: E0123 18:01:40.367086 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 18:01:43 crc kubenswrapper[4606]: I0123 18:01:43.621922 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-n8gtl" podUID="e53998dd-17ef-4f3f-989b-2304bc9357aa" containerName="nmstate-handler" probeResult="failure" output="command timed out" Jan 23 18:01:52 crc kubenswrapper[4606]: I0123 18:01:52.365996 4606 scope.go:117] "RemoveContainer" containerID="50df6cf30909fa2c692b021189d19939b1278bfa82d44ec68856cc0645bc1125" Jan 23 18:01:52 crc kubenswrapper[4606]: E0123 18:01:52.366706 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 18:02:07 crc kubenswrapper[4606]: I0123 18:02:07.366642 4606 scope.go:117] "RemoveContainer" containerID="50df6cf30909fa2c692b021189d19939b1278bfa82d44ec68856cc0645bc1125" Jan 23 18:02:08 crc kubenswrapper[4606]: I0123 18:02:08.333216 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" 
event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerStarted","Data":"32a42a51a172c9749ed45ab6af919bb2a8f13883be55d8eda017ba07de4ecd43"} Jan 23 18:02:13 crc kubenswrapper[4606]: I0123 18:02:13.268364 4606 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-6786d7bbc5-25lfl" podUID="10c3e688-60f4-483e-962b-9e46bd25cfb7" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 23 18:02:23 crc kubenswrapper[4606]: I0123 18:02:23.020011 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8lhcx"] Jan 23 18:02:23 crc kubenswrapper[4606]: E0123 18:02:23.020987 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15d6fa54-20e5-469e-a9c9-cbd2352ca508" containerName="keystone-cron" Jan 23 18:02:23 crc kubenswrapper[4606]: I0123 18:02:23.021001 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="15d6fa54-20e5-469e-a9c9-cbd2352ca508" containerName="keystone-cron" Jan 23 18:02:23 crc kubenswrapper[4606]: I0123 18:02:23.021187 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="15d6fa54-20e5-469e-a9c9-cbd2352ca508" containerName="keystone-cron" Jan 23 18:02:23 crc kubenswrapper[4606]: I0123 18:02:23.022531 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8lhcx" Jan 23 18:02:23 crc kubenswrapper[4606]: I0123 18:02:23.037060 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8lhcx"] Jan 23 18:02:23 crc kubenswrapper[4606]: I0123 18:02:23.095729 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98tv9\" (UniqueName: \"kubernetes.io/projected/9fbba460-8984-4c30-b9aa-4ca2fcdabd6b-kube-api-access-98tv9\") pod \"community-operators-8lhcx\" (UID: \"9fbba460-8984-4c30-b9aa-4ca2fcdabd6b\") " pod="openshift-marketplace/community-operators-8lhcx" Jan 23 18:02:23 crc kubenswrapper[4606]: I0123 18:02:23.095875 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fbba460-8984-4c30-b9aa-4ca2fcdabd6b-catalog-content\") pod \"community-operators-8lhcx\" (UID: \"9fbba460-8984-4c30-b9aa-4ca2fcdabd6b\") " pod="openshift-marketplace/community-operators-8lhcx" Jan 23 18:02:23 crc kubenswrapper[4606]: I0123 18:02:23.095956 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fbba460-8984-4c30-b9aa-4ca2fcdabd6b-utilities\") pod \"community-operators-8lhcx\" (UID: \"9fbba460-8984-4c30-b9aa-4ca2fcdabd6b\") " pod="openshift-marketplace/community-operators-8lhcx" Jan 23 18:02:23 crc kubenswrapper[4606]: I0123 18:02:23.198046 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98tv9\" (UniqueName: \"kubernetes.io/projected/9fbba460-8984-4c30-b9aa-4ca2fcdabd6b-kube-api-access-98tv9\") pod \"community-operators-8lhcx\" (UID: \"9fbba460-8984-4c30-b9aa-4ca2fcdabd6b\") " pod="openshift-marketplace/community-operators-8lhcx" Jan 23 18:02:23 crc kubenswrapper[4606]: I0123 18:02:23.198142 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fbba460-8984-4c30-b9aa-4ca2fcdabd6b-catalog-content\") pod \"community-operators-8lhcx\" (UID: 
\"9fbba460-8984-4c30-b9aa-4ca2fcdabd6b\") " pod="openshift-marketplace/community-operators-8lhcx" Jan 23 18:02:23 crc kubenswrapper[4606]: I0123 18:02:23.198197 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fbba460-8984-4c30-b9aa-4ca2fcdabd6b-utilities\") pod \"community-operators-8lhcx\" (UID: \"9fbba460-8984-4c30-b9aa-4ca2fcdabd6b\") " pod="openshift-marketplace/community-operators-8lhcx" Jan 23 18:02:23 crc kubenswrapper[4606]: I0123 18:02:23.198626 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fbba460-8984-4c30-b9aa-4ca2fcdabd6b-utilities\") pod \"community-operators-8lhcx\" (UID: \"9fbba460-8984-4c30-b9aa-4ca2fcdabd6b\") " pod="openshift-marketplace/community-operators-8lhcx" Jan 23 18:02:23 crc kubenswrapper[4606]: I0123 18:02:23.198649 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fbba460-8984-4c30-b9aa-4ca2fcdabd6b-catalog-content\") pod \"community-operators-8lhcx\" (UID: \"9fbba460-8984-4c30-b9aa-4ca2fcdabd6b\") " pod="openshift-marketplace/community-operators-8lhcx" Jan 23 18:02:23 crc kubenswrapper[4606]: I0123 18:02:23.316322 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98tv9\" (UniqueName: \"kubernetes.io/projected/9fbba460-8984-4c30-b9aa-4ca2fcdabd6b-kube-api-access-98tv9\") pod \"community-operators-8lhcx\" (UID: \"9fbba460-8984-4c30-b9aa-4ca2fcdabd6b\") " pod="openshift-marketplace/community-operators-8lhcx" Jan 23 18:02:23 crc kubenswrapper[4606]: I0123 18:02:23.340116 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8lhcx" Jan 23 18:02:23 crc kubenswrapper[4606]: I0123 18:02:23.967930 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8lhcx"] Jan 23 18:02:24 crc kubenswrapper[4606]: I0123 18:02:24.481258 4606 generic.go:334] "Generic (PLEG): container finished" podID="9fbba460-8984-4c30-b9aa-4ca2fcdabd6b" containerID="1e77b5c5718263be33efe015ae71d921f65a259804b35f0cc8fef19746e24e55" exitCode=0 Jan 23 18:02:24 crc kubenswrapper[4606]: I0123 18:02:24.481683 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lhcx" event={"ID":"9fbba460-8984-4c30-b9aa-4ca2fcdabd6b","Type":"ContainerDied","Data":"1e77b5c5718263be33efe015ae71d921f65a259804b35f0cc8fef19746e24e55"} Jan 23 18:02:24 crc kubenswrapper[4606]: I0123 18:02:24.481732 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lhcx" event={"ID":"9fbba460-8984-4c30-b9aa-4ca2fcdabd6b","Type":"ContainerStarted","Data":"c473f2a7cbf5309205d17bd93412aeb40734dddae840c1689d2899dee522bcda"} Jan 23 18:02:25 crc kubenswrapper[4606]: I0123 18:02:25.492202 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lhcx" event={"ID":"9fbba460-8984-4c30-b9aa-4ca2fcdabd6b","Type":"ContainerStarted","Data":"817f28982959f5d3522138540b2e5fb7adc67db45ede83ff678792d0bbabda9a"} Jan 23 18:02:26 crc kubenswrapper[4606]: I0123 18:02:26.501215 4606 generic.go:334] "Generic (PLEG): container finished" podID="9fbba460-8984-4c30-b9aa-4ca2fcdabd6b" containerID="817f28982959f5d3522138540b2e5fb7adc67db45ede83ff678792d0bbabda9a" exitCode=0 Jan 23 18:02:26 crc kubenswrapper[4606]: I0123 
18:02:26.501259 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lhcx" event={"ID":"9fbba460-8984-4c30-b9aa-4ca2fcdabd6b","Type":"ContainerDied","Data":"817f28982959f5d3522138540b2e5fb7adc67db45ede83ff678792d0bbabda9a"} Jan 23 18:02:27 crc kubenswrapper[4606]: I0123 18:02:27.515817 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lhcx" event={"ID":"9fbba460-8984-4c30-b9aa-4ca2fcdabd6b","Type":"ContainerStarted","Data":"0752ed7badf008fb6cf7830dd6af98ec32f1f601f9def45f2bf8f5ebd894c2fb"} Jan 23 18:02:27 crc kubenswrapper[4606]: I0123 18:02:27.535425 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8lhcx" podStartSLOduration=3.106517586 podStartE2EDuration="5.535400866s" podCreationTimestamp="2026-01-23 18:02:22 +0000 UTC" firstStartedPulling="2026-01-23 18:02:24.483515281 +0000 UTC m=+4178.483707687" lastFinishedPulling="2026-01-23 18:02:26.912398561 +0000 UTC m=+4180.912590967" observedRunningTime="2026-01-23 18:02:27.530281711 +0000 UTC m=+4181.530474127" watchObservedRunningTime="2026-01-23 18:02:27.535400866 +0000 UTC m=+4181.535593262" Jan 23 18:02:33 crc kubenswrapper[4606]: I0123 18:02:33.340756 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8lhcx" Jan 23 18:02:33 crc kubenswrapper[4606]: I0123 18:02:33.342418 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8lhcx" Jan 23 18:02:33 crc kubenswrapper[4606]: I0123 18:02:33.387893 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8lhcx" Jan 23 18:02:33 crc kubenswrapper[4606]: I0123 18:02:33.606999 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8lhcx" Jan 23 18:02:33 crc kubenswrapper[4606]: I0123 18:02:33.655223 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8lhcx"] Jan 23 18:02:35 crc kubenswrapper[4606]: I0123 18:02:35.580957 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8lhcx" podUID="9fbba460-8984-4c30-b9aa-4ca2fcdabd6b" containerName="registry-server" containerID="cri-o://0752ed7badf008fb6cf7830dd6af98ec32f1f601f9def45f2bf8f5ebd894c2fb" gracePeriod=2 Jan 23 18:02:36 crc kubenswrapper[4606]: I0123 18:02:36.596142 4606 generic.go:334] "Generic (PLEG): container finished" podID="9fbba460-8984-4c30-b9aa-4ca2fcdabd6b" containerID="0752ed7badf008fb6cf7830dd6af98ec32f1f601f9def45f2bf8f5ebd894c2fb" exitCode=0 Jan 23 18:02:36 crc kubenswrapper[4606]: I0123 18:02:36.596218 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lhcx" event={"ID":"9fbba460-8984-4c30-b9aa-4ca2fcdabd6b","Type":"ContainerDied","Data":"0752ed7badf008fb6cf7830dd6af98ec32f1f601f9def45f2bf8f5ebd894c2fb"} Jan 23 18:02:36 crc kubenswrapper[4606]: I0123 18:02:36.933173 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8lhcx" Jan 23 18:02:37 crc kubenswrapper[4606]: I0123 18:02:37.034885 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fbba460-8984-4c30-b9aa-4ca2fcdabd6b-utilities\") pod \"9fbba460-8984-4c30-b9aa-4ca2fcdabd6b\" (UID: \"9fbba460-8984-4c30-b9aa-4ca2fcdabd6b\") " Jan 23 18:02:37 crc kubenswrapper[4606]: I0123 18:02:37.034959 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fbba460-8984-4c30-b9aa-4ca2fcdabd6b-catalog-content\") pod \"9fbba460-8984-4c30-b9aa-4ca2fcdabd6b\" (UID: \"9fbba460-8984-4c30-b9aa-4ca2fcdabd6b\") " Jan 23 18:02:37 crc kubenswrapper[4606]: I0123 18:02:37.034995 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98tv9\" (UniqueName: \"kubernetes.io/projected/9fbba460-8984-4c30-b9aa-4ca2fcdabd6b-kube-api-access-98tv9\") pod \"9fbba460-8984-4c30-b9aa-4ca2fcdabd6b\" (UID: \"9fbba460-8984-4c30-b9aa-4ca2fcdabd6b\") " Jan 23 18:02:37 crc kubenswrapper[4606]: I0123 18:02:37.036388 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fbba460-8984-4c30-b9aa-4ca2fcdabd6b-utilities" (OuterVolumeSpecName: "utilities") pod "9fbba460-8984-4c30-b9aa-4ca2fcdabd6b" (UID: "9fbba460-8984-4c30-b9aa-4ca2fcdabd6b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:02:37 crc kubenswrapper[4606]: I0123 18:02:37.057331 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fbba460-8984-4c30-b9aa-4ca2fcdabd6b-kube-api-access-98tv9" (OuterVolumeSpecName: "kube-api-access-98tv9") pod "9fbba460-8984-4c30-b9aa-4ca2fcdabd6b" (UID: "9fbba460-8984-4c30-b9aa-4ca2fcdabd6b"). InnerVolumeSpecName "kube-api-access-98tv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:02:37 crc kubenswrapper[4606]: I0123 18:02:37.114800 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fbba460-8984-4c30-b9aa-4ca2fcdabd6b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9fbba460-8984-4c30-b9aa-4ca2fcdabd6b" (UID: "9fbba460-8984-4c30-b9aa-4ca2fcdabd6b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:02:37 crc kubenswrapper[4606]: I0123 18:02:37.138198 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fbba460-8984-4c30-b9aa-4ca2fcdabd6b-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:02:37 crc kubenswrapper[4606]: I0123 18:02:37.138239 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fbba460-8984-4c30-b9aa-4ca2fcdabd6b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:02:37 crc kubenswrapper[4606]: I0123 18:02:37.138256 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98tv9\" (UniqueName: \"kubernetes.io/projected/9fbba460-8984-4c30-b9aa-4ca2fcdabd6b-kube-api-access-98tv9\") on node \"crc\" DevicePath \"\"" Jan 23 18:02:37 crc kubenswrapper[4606]: I0123 18:02:37.609324 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8lhcx" event={"ID":"9fbba460-8984-4c30-b9aa-4ca2fcdabd6b","Type":"ContainerDied","Data":"c473f2a7cbf5309205d17bd93412aeb40734dddae840c1689d2899dee522bcda"} Jan 23 18:02:37 crc kubenswrapper[4606]: I0123 18:02:37.609398 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8lhcx" Jan 23 18:02:37 crc kubenswrapper[4606]: I0123 18:02:37.610224 4606 scope.go:117] "RemoveContainer" containerID="0752ed7badf008fb6cf7830dd6af98ec32f1f601f9def45f2bf8f5ebd894c2fb" Jan 23 18:02:37 crc kubenswrapper[4606]: I0123 18:02:37.649577 4606 scope.go:117] "RemoveContainer" containerID="817f28982959f5d3522138540b2e5fb7adc67db45ede83ff678792d0bbabda9a" Jan 23 18:02:37 crc kubenswrapper[4606]: I0123 18:02:37.690896 4606 scope.go:117] "RemoveContainer" containerID="1e77b5c5718263be33efe015ae71d921f65a259804b35f0cc8fef19746e24e55" Jan 23 18:02:37 crc kubenswrapper[4606]: I0123 18:02:37.695669 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8lhcx"] Jan 23 18:02:37 crc kubenswrapper[4606]: I0123 18:02:37.710566 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8lhcx"] Jan 23 18:02:38 crc kubenswrapper[4606]: I0123 18:02:38.390013 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fbba460-8984-4c30-b9aa-4ca2fcdabd6b" path="/var/lib/kubelet/pods/9fbba460-8984-4c30-b9aa-4ca2fcdabd6b/volumes" Jan 23 18:03:22 crc kubenswrapper[4606]: I0123 18:03:22.567747 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v6fts"] Jan 23 18:03:22 crc kubenswrapper[4606]: E0123 18:03:22.568753 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fbba460-8984-4c30-b9aa-4ca2fcdabd6b" containerName="extract-content" Jan 23 18:03:22 crc kubenswrapper[4606]: I0123 18:03:22.568767 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fbba460-8984-4c30-b9aa-4ca2fcdabd6b" containerName="extract-content" Jan 23 18:03:22 crc kubenswrapper[4606]: E0123 18:03:22.568811 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fbba460-8984-4c30-b9aa-4ca2fcdabd6b" containerName="extract-utilities" Jan 23 18:03:22 crc kubenswrapper[4606]: I0123 18:03:22.568820 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fbba460-8984-4c30-b9aa-4ca2fcdabd6b" containerName="extract-utilities" Jan 23 18:03:22 crc kubenswrapper[4606]: E0123 18:03:22.568835 4606 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fbba460-8984-4c30-b9aa-4ca2fcdabd6b" containerName="registry-server" Jan 23 18:03:22 crc kubenswrapper[4606]: I0123 18:03:22.568844 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fbba460-8984-4c30-b9aa-4ca2fcdabd6b" containerName="registry-server" Jan 23 18:03:22 crc kubenswrapper[4606]: I0123 18:03:22.569053 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fbba460-8984-4c30-b9aa-4ca2fcdabd6b" containerName="registry-server" Jan 23 18:03:22 crc kubenswrapper[4606]: I0123 18:03:22.570745 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v6fts" Jan 23 18:03:22 crc kubenswrapper[4606]: I0123 18:03:22.583441 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v6fts"] Jan 23 18:03:22 crc kubenswrapper[4606]: I0123 18:03:22.690685 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31964617-33b7-46ce-9bf6-d59425ff0087-utilities\") pod \"redhat-operators-v6fts\" (UID: \"31964617-33b7-46ce-9bf6-d59425ff0087\") " pod="openshift-marketplace/redhat-operators-v6fts" Jan 23 18:03:22 crc kubenswrapper[4606]: I0123 18:03:22.690767 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31964617-33b7-46ce-9bf6-d59425ff0087-catalog-content\") pod \"redhat-operators-v6fts\" (UID: \"31964617-33b7-46ce-9bf6-d59425ff0087\") " pod="openshift-marketplace/redhat-operators-v6fts" Jan 23 18:03:22 crc kubenswrapper[4606]: I0123 18:03:22.690810 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r4cp\" (UniqueName: \"kubernetes.io/projected/31964617-33b7-46ce-9bf6-d59425ff0087-kube-api-access-8r4cp\") pod \"redhat-operators-v6fts\" (UID: \"31964617-33b7-46ce-9bf6-d59425ff0087\") " pod="openshift-marketplace/redhat-operators-v6fts" Jan 23 18:03:22 crc kubenswrapper[4606]: I0123 18:03:22.792192 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31964617-33b7-46ce-9bf6-d59425ff0087-utilities\") pod \"redhat-operators-v6fts\" (UID: \"31964617-33b7-46ce-9bf6-d59425ff0087\") " pod="openshift-marketplace/redhat-operators-v6fts" Jan 23 18:03:22 crc kubenswrapper[4606]: I0123 18:03:22.792520 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31964617-33b7-46ce-9bf6-d59425ff0087-catalog-content\") pod \"redhat-operators-v6fts\" (UID: \"31964617-33b7-46ce-9bf6-d59425ff0087\") " pod="openshift-marketplace/redhat-operators-v6fts" Jan 23 18:03:22 crc kubenswrapper[4606]: I0123 18:03:22.792568 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r4cp\" (UniqueName: \"kubernetes.io/projected/31964617-33b7-46ce-9bf6-d59425ff0087-kube-api-access-8r4cp\") pod \"redhat-operators-v6fts\" (UID: \"31964617-33b7-46ce-9bf6-d59425ff0087\") " pod="openshift-marketplace/redhat-operators-v6fts" Jan 23 18:03:22 crc kubenswrapper[4606]: I0123 18:03:22.792818 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31964617-33b7-46ce-9bf6-d59425ff0087-utilities\") pod 
\"redhat-operators-v6fts\" (UID: \"31964617-33b7-46ce-9bf6-d59425ff0087\") " pod="openshift-marketplace/redhat-operators-v6fts" Jan 23 18:03:22 crc kubenswrapper[4606]: I0123 18:03:22.792960 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31964617-33b7-46ce-9bf6-d59425ff0087-catalog-content\") pod \"redhat-operators-v6fts\" (UID: \"31964617-33b7-46ce-9bf6-d59425ff0087\") " pod="openshift-marketplace/redhat-operators-v6fts" Jan 23 18:03:22 crc kubenswrapper[4606]: I0123 18:03:22.811637 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r4cp\" (UniqueName: \"kubernetes.io/projected/31964617-33b7-46ce-9bf6-d59425ff0087-kube-api-access-8r4cp\") pod \"redhat-operators-v6fts\" (UID: \"31964617-33b7-46ce-9bf6-d59425ff0087\") " pod="openshift-marketplace/redhat-operators-v6fts" Jan 23 18:03:22 crc kubenswrapper[4606]: I0123 18:03:22.897684 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v6fts" Jan 23 18:03:23 crc kubenswrapper[4606]: I0123 18:03:23.461182 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v6fts"] Jan 23 18:03:24 crc kubenswrapper[4606]: I0123 18:03:24.088349 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6fts" event={"ID":"31964617-33b7-46ce-9bf6-d59425ff0087","Type":"ContainerStarted","Data":"c4df95ad5fa07968cd4c559d2feb4bf581caaea2b9c52526699ed578780d9aac"} Jan 23 18:03:25 crc kubenswrapper[4606]: I0123 18:03:25.101686 4606 generic.go:334] "Generic (PLEG): container finished" podID="31964617-33b7-46ce-9bf6-d59425ff0087" containerID="27c26101c9f852025f5f75a956ad3699555672f64d1f95aa774653366d0b5072" exitCode=0 Jan 23 18:03:25 crc kubenswrapper[4606]: I0123 18:03:25.101830 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6fts" event={"ID":"31964617-33b7-46ce-9bf6-d59425ff0087","Type":"ContainerDied","Data":"27c26101c9f852025f5f75a956ad3699555672f64d1f95aa774653366d0b5072"} Jan 23 18:03:26 crc kubenswrapper[4606]: I0123 18:03:26.117516 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6fts" event={"ID":"31964617-33b7-46ce-9bf6-d59425ff0087","Type":"ContainerStarted","Data":"8f81fa7c2ad35ae4111eaaddfd05f22112d9a77ee017fb8c09456a6c4e75fc2e"} Jan 23 18:03:30 crc kubenswrapper[4606]: E0123 18:03:30.930958 4606 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31964617_33b7_46ce_9bf6_d59425ff0087.slice/crio-conmon-8f81fa7c2ad35ae4111eaaddfd05f22112d9a77ee017fb8c09456a6c4e75fc2e.scope\": RecentStats: unable to find data in memory cache]" Jan 23 18:03:31 crc kubenswrapper[4606]: I0123 18:03:31.163993 4606 generic.go:334] "Generic (PLEG): container finished" podID="31964617-33b7-46ce-9bf6-d59425ff0087" containerID="8f81fa7c2ad35ae4111eaaddfd05f22112d9a77ee017fb8c09456a6c4e75fc2e" exitCode=0 Jan 23 18:03:31 crc kubenswrapper[4606]: I0123 18:03:31.164051 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6fts" event={"ID":"31964617-33b7-46ce-9bf6-d59425ff0087","Type":"ContainerDied","Data":"8f81fa7c2ad35ae4111eaaddfd05f22112d9a77ee017fb8c09456a6c4e75fc2e"} Jan 23 18:03:32 crc kubenswrapper[4606]: I0123 18:03:32.175695 4606 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6fts" event={"ID":"31964617-33b7-46ce-9bf6-d59425ff0087","Type":"ContainerStarted","Data":"7f1ae3dac09dc83f9f83737f4c399bb657a15e1f53a32fd52661c8aa337d6187"} Jan 23 18:03:32 crc kubenswrapper[4606]: I0123 18:03:32.206086 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v6fts" podStartSLOduration=3.414819616 podStartE2EDuration="10.206065931s" podCreationTimestamp="2026-01-23 18:03:22 +0000 UTC" firstStartedPulling="2026-01-23 18:03:25.103447237 +0000 UTC m=+4239.103639643" lastFinishedPulling="2026-01-23 18:03:31.894693552 +0000 UTC m=+4245.894885958" observedRunningTime="2026-01-23 18:03:32.198099288 +0000 UTC m=+4246.198291694" watchObservedRunningTime="2026-01-23 18:03:32.206065931 +0000 UTC m=+4246.206258337" Jan 23 18:03:32 crc kubenswrapper[4606]: I0123 18:03:32.902984 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v6fts" Jan 23 18:03:32 crc kubenswrapper[4606]: I0123 18:03:32.903035 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v6fts" Jan 23 18:03:33 crc kubenswrapper[4606]: I0123 18:03:33.955540 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v6fts" podUID="31964617-33b7-46ce-9bf6-d59425ff0087" containerName="registry-server" probeResult="failure" output=< Jan 23 18:03:33 crc kubenswrapper[4606]: timeout: failed to connect service ":50051" within 1s Jan 23 18:03:33 crc kubenswrapper[4606]: > Jan 23 18:03:43 crc kubenswrapper[4606]: I0123 18:03:43.960964 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v6fts" podUID="31964617-33b7-46ce-9bf6-d59425ff0087" containerName="registry-server" probeResult="failure" output=< Jan 23 18:03:43 crc kubenswrapper[4606]: timeout: failed to connect service ":50051" within 1s Jan 23 18:03:43 crc kubenswrapper[4606]: > Jan 23 18:03:52 crc kubenswrapper[4606]: I0123 18:03:52.950817 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v6fts" Jan 23 18:03:53 crc kubenswrapper[4606]: I0123 18:03:53.001868 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v6fts" Jan 23 18:03:53 crc kubenswrapper[4606]: I0123 18:03:53.777037 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v6fts"] Jan 23 18:03:54 crc kubenswrapper[4606]: I0123 18:03:54.394252 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v6fts" podUID="31964617-33b7-46ce-9bf6-d59425ff0087" containerName="registry-server" containerID="cri-o://7f1ae3dac09dc83f9f83737f4c399bb657a15e1f53a32fd52661c8aa337d6187" gracePeriod=2 Jan 23 18:03:55 crc kubenswrapper[4606]: I0123 18:03:55.407188 4606 generic.go:334] "Generic (PLEG): container finished" podID="31964617-33b7-46ce-9bf6-d59425ff0087" containerID="7f1ae3dac09dc83f9f83737f4c399bb657a15e1f53a32fd52661c8aa337d6187" exitCode=0 Jan 23 18:03:55 crc kubenswrapper[4606]: I0123 18:03:55.407256 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6fts" 
event={"ID":"31964617-33b7-46ce-9bf6-d59425ff0087","Type":"ContainerDied","Data":"7f1ae3dac09dc83f9f83737f4c399bb657a15e1f53a32fd52661c8aa337d6187"} Jan 23 18:03:55 crc kubenswrapper[4606]: I0123 18:03:55.637909 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v6fts" Jan 23 18:03:55 crc kubenswrapper[4606]: I0123 18:03:55.774404 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8r4cp\" (UniqueName: \"kubernetes.io/projected/31964617-33b7-46ce-9bf6-d59425ff0087-kube-api-access-8r4cp\") pod \"31964617-33b7-46ce-9bf6-d59425ff0087\" (UID: \"31964617-33b7-46ce-9bf6-d59425ff0087\") " Jan 23 18:03:55 crc kubenswrapper[4606]: I0123 18:03:55.774603 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31964617-33b7-46ce-9bf6-d59425ff0087-catalog-content\") pod \"31964617-33b7-46ce-9bf6-d59425ff0087\" (UID: \"31964617-33b7-46ce-9bf6-d59425ff0087\") " Jan 23 18:03:55 crc kubenswrapper[4606]: I0123 18:03:55.774674 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31964617-33b7-46ce-9bf6-d59425ff0087-utilities\") pod \"31964617-33b7-46ce-9bf6-d59425ff0087\" (UID: \"31964617-33b7-46ce-9bf6-d59425ff0087\") " Jan 23 18:03:55 crc kubenswrapper[4606]: I0123 18:03:55.775514 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31964617-33b7-46ce-9bf6-d59425ff0087-utilities" (OuterVolumeSpecName: "utilities") pod "31964617-33b7-46ce-9bf6-d59425ff0087" (UID: "31964617-33b7-46ce-9bf6-d59425ff0087"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:03:55 crc kubenswrapper[4606]: I0123 18:03:55.783802 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31964617-33b7-46ce-9bf6-d59425ff0087-kube-api-access-8r4cp" (OuterVolumeSpecName: "kube-api-access-8r4cp") pod "31964617-33b7-46ce-9bf6-d59425ff0087" (UID: "31964617-33b7-46ce-9bf6-d59425ff0087"). InnerVolumeSpecName "kube-api-access-8r4cp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:03:55 crc kubenswrapper[4606]: I0123 18:03:55.877187 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31964617-33b7-46ce-9bf6-d59425ff0087-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:03:55 crc kubenswrapper[4606]: I0123 18:03:55.877225 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8r4cp\" (UniqueName: \"kubernetes.io/projected/31964617-33b7-46ce-9bf6-d59425ff0087-kube-api-access-8r4cp\") on node \"crc\" DevicePath \"\"" Jan 23 18:03:55 crc kubenswrapper[4606]: I0123 18:03:55.900826 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31964617-33b7-46ce-9bf6-d59425ff0087-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31964617-33b7-46ce-9bf6-d59425ff0087" (UID: "31964617-33b7-46ce-9bf6-d59425ff0087"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:03:55 crc kubenswrapper[4606]: I0123 18:03:55.979287 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31964617-33b7-46ce-9bf6-d59425ff0087-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:03:56 crc kubenswrapper[4606]: I0123 18:03:56.417839 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6fts" event={"ID":"31964617-33b7-46ce-9bf6-d59425ff0087","Type":"ContainerDied","Data":"c4df95ad5fa07968cd4c559d2feb4bf581caaea2b9c52526699ed578780d9aac"} Jan 23 18:03:56 crc kubenswrapper[4606]: I0123 18:03:56.418169 4606 scope.go:117] "RemoveContainer" containerID="7f1ae3dac09dc83f9f83737f4c399bb657a15e1f53a32fd52661c8aa337d6187" Jan 23 18:03:56 crc kubenswrapper[4606]: I0123 18:03:56.417929 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v6fts" Jan 23 18:03:56 crc kubenswrapper[4606]: I0123 18:03:56.454992 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v6fts"] Jan 23 18:03:56 crc kubenswrapper[4606]: I0123 18:03:56.458999 4606 scope.go:117] "RemoveContainer" containerID="8f81fa7c2ad35ae4111eaaddfd05f22112d9a77ee017fb8c09456a6c4e75fc2e" Jan 23 18:03:56 crc kubenswrapper[4606]: I0123 18:03:56.468422 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v6fts"] Jan 23 18:03:56 crc kubenswrapper[4606]: I0123 18:03:56.494312 4606 scope.go:117] "RemoveContainer" containerID="27c26101c9f852025f5f75a956ad3699555672f64d1f95aa774653366d0b5072" Jan 23 18:03:58 crc kubenswrapper[4606]: I0123 18:03:58.377241 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31964617-33b7-46ce-9bf6-d59425ff0087" path="/var/lib/kubelet/pods/31964617-33b7-46ce-9bf6-d59425ff0087/volumes" Jan 23 18:04:29 crc kubenswrapper[4606]: I0123 18:04:29.933533 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:04:29 crc kubenswrapper[4606]: I0123 18:04:29.934470 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:04:59 crc kubenswrapper[4606]: I0123 18:04:59.933650 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:04:59 crc kubenswrapper[4606]: I0123 18:04:59.935195 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:05:29 crc kubenswrapper[4606]: I0123 18:05:29.933669 4606 patch_prober.go:28] 
interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:05:29 crc kubenswrapper[4606]: I0123 18:05:29.934311 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:05:29 crc kubenswrapper[4606]: I0123 18:05:29.934386 4606 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" Jan 23 18:05:29 crc kubenswrapper[4606]: I0123 18:05:29.935559 4606 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"32a42a51a172c9749ed45ab6af919bb2a8f13883be55d8eda017ba07de4ecd43"} pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 18:05:29 crc kubenswrapper[4606]: I0123 18:05:29.935798 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" containerID="cri-o://32a42a51a172c9749ed45ab6af919bb2a8f13883be55d8eda017ba07de4ecd43" gracePeriod=600 Jan 23 18:05:30 crc kubenswrapper[4606]: I0123 18:05:30.344413 4606 generic.go:334] "Generic (PLEG): container finished" podID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerID="32a42a51a172c9749ed45ab6af919bb2a8f13883be55d8eda017ba07de4ecd43" exitCode=0 Jan 23 18:05:30 crc kubenswrapper[4606]: I0123 18:05:30.344627 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerDied","Data":"32a42a51a172c9749ed45ab6af919bb2a8f13883be55d8eda017ba07de4ecd43"} Jan 23 18:05:30 crc kubenswrapper[4606]: I0123 18:05:30.344990 4606 scope.go:117] "RemoveContainer" containerID="50df6cf30909fa2c692b021189d19939b1278bfa82d44ec68856cc0645bc1125" Jan 23 18:05:31 crc kubenswrapper[4606]: I0123 18:05:31.357040 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerStarted","Data":"7d5ed2dce3f047dc64cc6f330d1b9ddcafe7e204485b5912e1b7d34cab031d5d"} Jan 23 18:05:56 crc kubenswrapper[4606]: I0123 18:05:56.206858 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2w9n5"] Jan 23 18:05:56 crc kubenswrapper[4606]: E0123 18:05:56.207774 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31964617-33b7-46ce-9bf6-d59425ff0087" containerName="extract-content" Jan 23 18:05:56 crc kubenswrapper[4606]: I0123 18:05:56.207792 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="31964617-33b7-46ce-9bf6-d59425ff0087" containerName="extract-content" Jan 23 18:05:56 crc kubenswrapper[4606]: E0123 18:05:56.207817 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31964617-33b7-46ce-9bf6-d59425ff0087" 
containerName="registry-server" Jan 23 18:05:56 crc kubenswrapper[4606]: I0123 18:05:56.207825 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="31964617-33b7-46ce-9bf6-d59425ff0087" containerName="registry-server" Jan 23 18:05:56 crc kubenswrapper[4606]: E0123 18:05:56.207848 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31964617-33b7-46ce-9bf6-d59425ff0087" containerName="extract-utilities" Jan 23 18:05:56 crc kubenswrapper[4606]: I0123 18:05:56.207857 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="31964617-33b7-46ce-9bf6-d59425ff0087" containerName="extract-utilities" Jan 23 18:05:56 crc kubenswrapper[4606]: I0123 18:05:56.208093 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="31964617-33b7-46ce-9bf6-d59425ff0087" containerName="registry-server" Jan 23 18:05:56 crc kubenswrapper[4606]: I0123 18:05:56.209725 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2w9n5" Jan 23 18:05:56 crc kubenswrapper[4606]: I0123 18:05:56.270899 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2w9n5"] Jan 23 18:05:56 crc kubenswrapper[4606]: I0123 18:05:56.306559 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b6hg\" (UniqueName: \"kubernetes.io/projected/a59837c6-57c4-42e6-8c37-38a9cd494f1a-kube-api-access-7b6hg\") pod \"certified-operators-2w9n5\" (UID: \"a59837c6-57c4-42e6-8c37-38a9cd494f1a\") " pod="openshift-marketplace/certified-operators-2w9n5" Jan 23 18:05:56 crc kubenswrapper[4606]: I0123 18:05:56.306852 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a59837c6-57c4-42e6-8c37-38a9cd494f1a-utilities\") pod \"certified-operators-2w9n5\" (UID: \"a59837c6-57c4-42e6-8c37-38a9cd494f1a\") " pod="openshift-marketplace/certified-operators-2w9n5" Jan 23 18:05:56 crc kubenswrapper[4606]: I0123 18:05:56.306987 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a59837c6-57c4-42e6-8c37-38a9cd494f1a-catalog-content\") pod \"certified-operators-2w9n5\" (UID: \"a59837c6-57c4-42e6-8c37-38a9cd494f1a\") " pod="openshift-marketplace/certified-operators-2w9n5" Jan 23 18:05:56 crc kubenswrapper[4606]: I0123 18:05:56.409082 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b6hg\" (UniqueName: \"kubernetes.io/projected/a59837c6-57c4-42e6-8c37-38a9cd494f1a-kube-api-access-7b6hg\") pod \"certified-operators-2w9n5\" (UID: \"a59837c6-57c4-42e6-8c37-38a9cd494f1a\") " pod="openshift-marketplace/certified-operators-2w9n5" Jan 23 18:05:56 crc kubenswrapper[4606]: I0123 18:05:56.409228 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a59837c6-57c4-42e6-8c37-38a9cd494f1a-utilities\") pod \"certified-operators-2w9n5\" (UID: \"a59837c6-57c4-42e6-8c37-38a9cd494f1a\") " pod="openshift-marketplace/certified-operators-2w9n5" Jan 23 18:05:56 crc kubenswrapper[4606]: I0123 18:05:56.409299 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a59837c6-57c4-42e6-8c37-38a9cd494f1a-catalog-content\") pod \"certified-operators-2w9n5\" (UID: 
\"a59837c6-57c4-42e6-8c37-38a9cd494f1a\") " pod="openshift-marketplace/certified-operators-2w9n5" Jan 23 18:05:56 crc kubenswrapper[4606]: I0123 18:05:56.409824 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a59837c6-57c4-42e6-8c37-38a9cd494f1a-utilities\") pod \"certified-operators-2w9n5\" (UID: \"a59837c6-57c4-42e6-8c37-38a9cd494f1a\") " pod="openshift-marketplace/certified-operators-2w9n5" Jan 23 18:05:56 crc kubenswrapper[4606]: I0123 18:05:56.409885 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a59837c6-57c4-42e6-8c37-38a9cd494f1a-catalog-content\") pod \"certified-operators-2w9n5\" (UID: \"a59837c6-57c4-42e6-8c37-38a9cd494f1a\") " pod="openshift-marketplace/certified-operators-2w9n5" Jan 23 18:05:56 crc kubenswrapper[4606]: I0123 18:05:56.432095 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b6hg\" (UniqueName: \"kubernetes.io/projected/a59837c6-57c4-42e6-8c37-38a9cd494f1a-kube-api-access-7b6hg\") pod \"certified-operators-2w9n5\" (UID: \"a59837c6-57c4-42e6-8c37-38a9cd494f1a\") " pod="openshift-marketplace/certified-operators-2w9n5" Jan 23 18:05:56 crc kubenswrapper[4606]: I0123 18:05:56.529782 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2w9n5" Jan 23 18:05:57 crc kubenswrapper[4606]: I0123 18:05:57.071998 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2w9n5"] Jan 23 18:05:57 crc kubenswrapper[4606]: I0123 18:05:57.626050 4606 generic.go:334] "Generic (PLEG): container finished" podID="a59837c6-57c4-42e6-8c37-38a9cd494f1a" containerID="8627fab175b5703439c1052b29c93a48eccd6391d2e5fd050ac5789fbce64ea8" exitCode=0 Jan 23 18:05:57 crc kubenswrapper[4606]: I0123 18:05:57.626133 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2w9n5" event={"ID":"a59837c6-57c4-42e6-8c37-38a9cd494f1a","Type":"ContainerDied","Data":"8627fab175b5703439c1052b29c93a48eccd6391d2e5fd050ac5789fbce64ea8"} Jan 23 18:05:57 crc kubenswrapper[4606]: I0123 18:05:57.626350 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2w9n5" event={"ID":"a59837c6-57c4-42e6-8c37-38a9cd494f1a","Type":"ContainerStarted","Data":"cb8a2cefaba0433a4c658d0a587c1b9c3f6b0e0e9ddc8e45d14a5971f8703b0f"} Jan 23 18:05:57 crc kubenswrapper[4606]: I0123 18:05:57.629018 4606 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 18:05:58 crc kubenswrapper[4606]: I0123 18:05:58.639309 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2w9n5" event={"ID":"a59837c6-57c4-42e6-8c37-38a9cd494f1a","Type":"ContainerStarted","Data":"7805904a785d0b4b5674a721dd705234390ee5522be071e60221deebc7ae795f"} Jan 23 18:05:59 crc kubenswrapper[4606]: I0123 18:05:59.647986 4606 generic.go:334] "Generic (PLEG): container finished" podID="a59837c6-57c4-42e6-8c37-38a9cd494f1a" containerID="7805904a785d0b4b5674a721dd705234390ee5522be071e60221deebc7ae795f" exitCode=0 Jan 23 18:05:59 crc kubenswrapper[4606]: I0123 18:05:59.648026 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2w9n5" 
event={"ID":"a59837c6-57c4-42e6-8c37-38a9cd494f1a","Type":"ContainerDied","Data":"7805904a785d0b4b5674a721dd705234390ee5522be071e60221deebc7ae795f"} Jan 23 18:06:00 crc kubenswrapper[4606]: I0123 18:06:00.659471 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2w9n5" event={"ID":"a59837c6-57c4-42e6-8c37-38a9cd494f1a","Type":"ContainerStarted","Data":"390c627b31e2752b9db7e47f399c5ce9c69470ee63f39460d8d2eb5bf92268c0"} Jan 23 18:06:06 crc kubenswrapper[4606]: I0123 18:06:06.530211 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2w9n5" Jan 23 18:06:06 crc kubenswrapper[4606]: I0123 18:06:06.530823 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2w9n5" Jan 23 18:06:06 crc kubenswrapper[4606]: I0123 18:06:06.583418 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2w9n5" Jan 23 18:06:06 crc kubenswrapper[4606]: I0123 18:06:06.619243 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2w9n5" podStartSLOduration=8.133893124 podStartE2EDuration="10.6192262s" podCreationTimestamp="2026-01-23 18:05:56 +0000 UTC" firstStartedPulling="2026-01-23 18:05:57.628704191 +0000 UTC m=+4391.628896597" lastFinishedPulling="2026-01-23 18:06:00.114037267 +0000 UTC m=+4394.114229673" observedRunningTime="2026-01-23 18:06:00.687086444 +0000 UTC m=+4394.687278860" watchObservedRunningTime="2026-01-23 18:06:06.6192262 +0000 UTC m=+4400.619418606" Jan 23 18:06:06 crc kubenswrapper[4606]: I0123 18:06:06.758708 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2w9n5" Jan 23 18:06:07 crc kubenswrapper[4606]: I0123 18:06:07.346294 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2w9n5"] Jan 23 18:06:08 crc kubenswrapper[4606]: I0123 18:06:08.726951 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2w9n5" podUID="a59837c6-57c4-42e6-8c37-38a9cd494f1a" containerName="registry-server" containerID="cri-o://390c627b31e2752b9db7e47f399c5ce9c69470ee63f39460d8d2eb5bf92268c0" gracePeriod=2 Jan 23 18:06:09 crc kubenswrapper[4606]: I0123 18:06:09.741197 4606 generic.go:334] "Generic (PLEG): container finished" podID="a59837c6-57c4-42e6-8c37-38a9cd494f1a" containerID="390c627b31e2752b9db7e47f399c5ce9c69470ee63f39460d8d2eb5bf92268c0" exitCode=0 Jan 23 18:06:09 crc kubenswrapper[4606]: I0123 18:06:09.741259 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2w9n5" event={"ID":"a59837c6-57c4-42e6-8c37-38a9cd494f1a","Type":"ContainerDied","Data":"390c627b31e2752b9db7e47f399c5ce9c69470ee63f39460d8d2eb5bf92268c0"} Jan 23 18:06:09 crc kubenswrapper[4606]: I0123 18:06:09.949484 4606 util.go:48] "No ready sandbox for pod can be found. 
Jan 23 18:06:06 crc kubenswrapper[4606]: I0123 18:06:06.758708 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2w9n5"
Jan 23 18:06:07 crc kubenswrapper[4606]: I0123 18:06:07.346294 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2w9n5"]
Jan 23 18:06:08 crc kubenswrapper[4606]: I0123 18:06:08.726951 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2w9n5" podUID="a59837c6-57c4-42e6-8c37-38a9cd494f1a" containerName="registry-server" containerID="cri-o://390c627b31e2752b9db7e47f399c5ce9c69470ee63f39460d8d2eb5bf92268c0" gracePeriod=2
Jan 23 18:06:09 crc kubenswrapper[4606]: I0123 18:06:09.741197 4606 generic.go:334] "Generic (PLEG): container finished" podID="a59837c6-57c4-42e6-8c37-38a9cd494f1a" containerID="390c627b31e2752b9db7e47f399c5ce9c69470ee63f39460d8d2eb5bf92268c0" exitCode=0
Jan 23 18:06:09 crc kubenswrapper[4606]: I0123 18:06:09.741259 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2w9n5" event={"ID":"a59837c6-57c4-42e6-8c37-38a9cd494f1a","Type":"ContainerDied","Data":"390c627b31e2752b9db7e47f399c5ce9c69470ee63f39460d8d2eb5bf92268c0"}
Jan 23 18:06:09 crc kubenswrapper[4606]: I0123 18:06:09.949484 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2w9n5"
Jan 23 18:06:10 crc kubenswrapper[4606]: I0123 18:06:10.108168 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a59837c6-57c4-42e6-8c37-38a9cd494f1a-utilities\") pod \"a59837c6-57c4-42e6-8c37-38a9cd494f1a\" (UID: \"a59837c6-57c4-42e6-8c37-38a9cd494f1a\") "
Jan 23 18:06:10 crc kubenswrapper[4606]: I0123 18:06:10.108256 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7b6hg\" (UniqueName: \"kubernetes.io/projected/a59837c6-57c4-42e6-8c37-38a9cd494f1a-kube-api-access-7b6hg\") pod \"a59837c6-57c4-42e6-8c37-38a9cd494f1a\" (UID: \"a59837c6-57c4-42e6-8c37-38a9cd494f1a\") "
Jan 23 18:06:10 crc kubenswrapper[4606]: I0123 18:06:10.108289 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a59837c6-57c4-42e6-8c37-38a9cd494f1a-catalog-content\") pod \"a59837c6-57c4-42e6-8c37-38a9cd494f1a\" (UID: \"a59837c6-57c4-42e6-8c37-38a9cd494f1a\") "
Jan 23 18:06:10 crc kubenswrapper[4606]: I0123 18:06:10.115484 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a59837c6-57c4-42e6-8c37-38a9cd494f1a-utilities" (OuterVolumeSpecName: "utilities") pod "a59837c6-57c4-42e6-8c37-38a9cd494f1a" (UID: "a59837c6-57c4-42e6-8c37-38a9cd494f1a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 18:06:10 crc kubenswrapper[4606]: I0123 18:06:10.121013 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a59837c6-57c4-42e6-8c37-38a9cd494f1a-kube-api-access-7b6hg" (OuterVolumeSpecName: "kube-api-access-7b6hg") pod "a59837c6-57c4-42e6-8c37-38a9cd494f1a" (UID: "a59837c6-57c4-42e6-8c37-38a9cd494f1a"). InnerVolumeSpecName "kube-api-access-7b6hg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:06:10 crc kubenswrapper[4606]: I0123 18:06:10.161612 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a59837c6-57c4-42e6-8c37-38a9cd494f1a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a59837c6-57c4-42e6-8c37-38a9cd494f1a" (UID: "a59837c6-57c4-42e6-8c37-38a9cd494f1a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 18:06:10 crc kubenswrapper[4606]: I0123 18:06:10.210446 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a59837c6-57c4-42e6-8c37-38a9cd494f1a-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 18:06:10 crc kubenswrapper[4606]: I0123 18:06:10.210483 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7b6hg\" (UniqueName: \"kubernetes.io/projected/a59837c6-57c4-42e6-8c37-38a9cd494f1a-kube-api-access-7b6hg\") on node \"crc\" DevicePath \"\""
Jan 23 18:06:10 crc kubenswrapper[4606]: I0123 18:06:10.210493 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a59837c6-57c4-42e6-8c37-38a9cd494f1a-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 18:06:10 crc kubenswrapper[4606]: I0123 18:06:10.788184 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2w9n5" event={"ID":"a59837c6-57c4-42e6-8c37-38a9cd494f1a","Type":"ContainerDied","Data":"cb8a2cefaba0433a4c658d0a587c1b9c3f6b0e0e9ddc8e45d14a5971f8703b0f"}
Jan 23 18:06:10 crc kubenswrapper[4606]: I0123 18:06:10.788257 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2w9n5"
Jan 23 18:06:10 crc kubenswrapper[4606]: I0123 18:06:10.788436 4606 scope.go:117] "RemoveContainer" containerID="390c627b31e2752b9db7e47f399c5ce9c69470ee63f39460d8d2eb5bf92268c0"
Jan 23 18:06:10 crc kubenswrapper[4606]: I0123 18:06:10.820635 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2w9n5"]
Jan 23 18:06:10 crc kubenswrapper[4606]: I0123 18:06:10.831879 4606 scope.go:117] "RemoveContainer" containerID="7805904a785d0b4b5674a721dd705234390ee5522be071e60221deebc7ae795f"
Jan 23 18:06:10 crc kubenswrapper[4606]: I0123 18:06:10.833063 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2w9n5"]
Jan 23 18:06:10 crc kubenswrapper[4606]: I0123 18:06:10.865117 4606 scope.go:117] "RemoveContainer" containerID="8627fab175b5703439c1052b29c93a48eccd6391d2e5fd050ac5789fbce64ea8"
Jan 23 18:06:12 crc kubenswrapper[4606]: I0123 18:06:12.379033 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a59837c6-57c4-42e6-8c37-38a9cd494f1a" path="/var/lib/kubelet/pods/a59837c6-57c4-42e6-8c37-38a9cd494f1a/volumes"
Jan 23 18:07:59 crc kubenswrapper[4606]: I0123 18:07:59.934505 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 18:07:59 crc kubenswrapper[4606]: I0123 18:07:59.935370 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 18:08:29 crc kubenswrapper[4606]: I0123 18:08:29.933816 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 18:08:29 crc kubenswrapper[4606]: I0123 18:08:29.934355 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 18:08:59 crc kubenswrapper[4606]: I0123 18:08:59.934085 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 18:08:59 crc kubenswrapper[4606]: I0123 18:08:59.934708 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 18:08:59 crc kubenswrapper[4606]: I0123 18:08:59.934762 4606 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7"
Jan 23 18:08:59 crc kubenswrapper[4606]: I0123 18:08:59.935541 4606 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7d5ed2dce3f047dc64cc6f330d1b9ddcafe7e204485b5912e1b7d34cab031d5d"} pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 18:08:59 crc kubenswrapper[4606]: I0123 18:08:59.935620 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" containerID="cri-o://7d5ed2dce3f047dc64cc6f330d1b9ddcafe7e204485b5912e1b7d34cab031d5d" gracePeriod=600
Jan 23 18:09:00 crc kubenswrapper[4606]: E0123 18:09:00.131793 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9"
Jan 23 18:09:00 crc kubenswrapper[4606]: I0123 18:09:00.428785 4606 generic.go:334] "Generic (PLEG): container finished" podID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerID="7d5ed2dce3f047dc64cc6f330d1b9ddcafe7e204485b5912e1b7d34cab031d5d" exitCode=0
Jan 23 18:09:00 crc kubenswrapper[4606]: I0123 18:09:00.428837 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerDied","Data":"7d5ed2dce3f047dc64cc6f330d1b9ddcafe7e204485b5912e1b7d34cab031d5d"}
Jan 23 18:09:00 crc kubenswrapper[4606]: I0123 18:09:00.429214 4606 scope.go:117] "RemoveContainer" containerID="32a42a51a172c9749ed45ab6af919bb2a8f13883be55d8eda017ba07de4ecd43"
Jan 23 18:09:00 crc kubenswrapper[4606]: I0123 18:09:00.430312 4606 scope.go:117] "RemoveContainer" containerID="7d5ed2dce3f047dc64cc6f330d1b9ddcafe7e204485b5912e1b7d34cab031d5d"
Jan 23 18:09:00 crc kubenswrapper[4606]: E0123 18:09:00.430805 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9"
Jan 23 18:09:12 crc kubenswrapper[4606]: I0123 18:09:12.366473 4606 scope.go:117] "RemoveContainer" containerID="7d5ed2dce3f047dc64cc6f330d1b9ddcafe7e204485b5912e1b7d34cab031d5d"
Jan 23 18:09:12 crc kubenswrapper[4606]: E0123 18:09:12.367224 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9"
Jan 23 18:09:24 crc kubenswrapper[4606]: I0123 18:09:24.365744 4606 scope.go:117] "RemoveContainer" containerID="7d5ed2dce3f047dc64cc6f330d1b9ddcafe7e204485b5912e1b7d34cab031d5d"
Jan 23 18:09:24 crc kubenswrapper[4606]: E0123 18:09:24.366536 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9"
Jan 23 18:09:35 crc kubenswrapper[4606]: I0123 18:09:35.366420 4606 scope.go:117] "RemoveContainer" containerID="7d5ed2dce3f047dc64cc6f330d1b9ddcafe7e204485b5912e1b7d34cab031d5d"
Jan 23 18:09:35 crc kubenswrapper[4606]: E0123 18:09:35.367167 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9"
Jan 23 18:09:49 crc kubenswrapper[4606]: I0123 18:09:49.366550 4606 scope.go:117] "RemoveContainer" containerID="7d5ed2dce3f047dc64cc6f330d1b9ddcafe7e204485b5912e1b7d34cab031d5d"
Jan 23 18:09:49 crc kubenswrapper[4606]: E0123 18:09:49.368088 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9"
Jan 23 18:10:02 crc kubenswrapper[4606]: I0123 18:10:02.365980 4606 scope.go:117] "RemoveContainer" containerID="7d5ed2dce3f047dc64cc6f330d1b9ddcafe7e204485b5912e1b7d34cab031d5d"
Jan 23 18:10:02 crc kubenswrapper[4606]: E0123 18:10:02.366939 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9"
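
Each RemoveContainer / "Error syncing pod" pair above is the sync loop re-hitting the same CrashLoopBackOff gate; the "back-off 5m0s" in the message is the backoff ceiling the container has reached. As a sketch, the delay schedule behind that message, assuming the usual kubelet defaults of a 10s base doubling up to a 5m cap; those values are an assumption here, not read from this cluster's configuration.

    // backoff_sketch.go -- doubling restart backoff with a 5m ceiling.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const maxDelay = 5 * time.Minute
        delay := 10 * time.Second
        for i := 0; i < 8; i++ {
            fmt.Println(delay) // 10s 20s 40s 1m20s 2m40s 5m0s 5m0s 5m0s
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }
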
Jan 23 18:10:04 crc kubenswrapper[4606]: I0123 18:10:04.143601 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-npb6d"]
Jan 23 18:10:04 crc kubenswrapper[4606]: E0123 18:10:04.144480 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a59837c6-57c4-42e6-8c37-38a9cd494f1a" containerName="registry-server"
Jan 23 18:10:04 crc kubenswrapper[4606]: I0123 18:10:04.144496 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="a59837c6-57c4-42e6-8c37-38a9cd494f1a" containerName="registry-server"
Jan 23 18:10:04 crc kubenswrapper[4606]: E0123 18:10:04.144510 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a59837c6-57c4-42e6-8c37-38a9cd494f1a" containerName="extract-utilities"
Jan 23 18:10:04 crc kubenswrapper[4606]: I0123 18:10:04.144519 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="a59837c6-57c4-42e6-8c37-38a9cd494f1a" containerName="extract-utilities"
Jan 23 18:10:04 crc kubenswrapper[4606]: E0123 18:10:04.144544 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a59837c6-57c4-42e6-8c37-38a9cd494f1a" containerName="extract-content"
Jan 23 18:10:04 crc kubenswrapper[4606]: I0123 18:10:04.144553 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="a59837c6-57c4-42e6-8c37-38a9cd494f1a" containerName="extract-content"
Jan 23 18:10:04 crc kubenswrapper[4606]: I0123 18:10:04.144792 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="a59837c6-57c4-42e6-8c37-38a9cd494f1a" containerName="registry-server"
Jan 23 18:10:04 crc kubenswrapper[4606]: I0123 18:10:04.146906 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-npb6d"
Jan 23 18:10:04 crc kubenswrapper[4606]: I0123 18:10:04.159455 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-npb6d"]
Jan 23 18:10:04 crc kubenswrapper[4606]: I0123 18:10:04.259524 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83ac26f4-7448-4395-8b03-7d1971b28975-catalog-content\") pod \"redhat-marketplace-npb6d\" (UID: \"83ac26f4-7448-4395-8b03-7d1971b28975\") " pod="openshift-marketplace/redhat-marketplace-npb6d"
Jan 23 18:10:04 crc kubenswrapper[4606]: I0123 18:10:04.259937 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tl6l\" (UniqueName: \"kubernetes.io/projected/83ac26f4-7448-4395-8b03-7d1971b28975-kube-api-access-7tl6l\") pod \"redhat-marketplace-npb6d\" (UID: \"83ac26f4-7448-4395-8b03-7d1971b28975\") " pod="openshift-marketplace/redhat-marketplace-npb6d"
Jan 23 18:10:04 crc kubenswrapper[4606]: I0123 18:10:04.260085 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83ac26f4-7448-4395-8b03-7d1971b28975-utilities\") pod \"redhat-marketplace-npb6d\" (UID: \"83ac26f4-7448-4395-8b03-7d1971b28975\") " pod="openshift-marketplace/redhat-marketplace-npb6d"
Jan 23 18:10:04 crc kubenswrapper[4606]: I0123 18:10:04.362298 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83ac26f4-7448-4395-8b03-7d1971b28975-catalog-content\") pod \"redhat-marketplace-npb6d\" (UID: \"83ac26f4-7448-4395-8b03-7d1971b28975\") " pod="openshift-marketplace/redhat-marketplace-npb6d"
Jan 23 18:10:04 crc kubenswrapper[4606]: I0123 18:10:04.362818 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tl6l\" (UniqueName: \"kubernetes.io/projected/83ac26f4-7448-4395-8b03-7d1971b28975-kube-api-access-7tl6l\") pod \"redhat-marketplace-npb6d\" (UID: \"83ac26f4-7448-4395-8b03-7d1971b28975\") " pod="openshift-marketplace/redhat-marketplace-npb6d"
Jan 23 18:10:04 crc kubenswrapper[4606]: I0123 18:10:04.362927 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83ac26f4-7448-4395-8b03-7d1971b28975-catalog-content\") pod \"redhat-marketplace-npb6d\" (UID: \"83ac26f4-7448-4395-8b03-7d1971b28975\") " pod="openshift-marketplace/redhat-marketplace-npb6d"
Jan 23 18:10:04 crc kubenswrapper[4606]: I0123 18:10:04.363083 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83ac26f4-7448-4395-8b03-7d1971b28975-utilities\") pod \"redhat-marketplace-npb6d\" (UID: \"83ac26f4-7448-4395-8b03-7d1971b28975\") " pod="openshift-marketplace/redhat-marketplace-npb6d"
Jan 23 18:10:04 crc kubenswrapper[4606]: I0123 18:10:04.363405 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83ac26f4-7448-4395-8b03-7d1971b28975-utilities\") pod \"redhat-marketplace-npb6d\" (UID: \"83ac26f4-7448-4395-8b03-7d1971b28975\") " pod="openshift-marketplace/redhat-marketplace-npb6d"
Jan 23 18:10:04 crc kubenswrapper[4606]: I0123 18:10:04.382895 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tl6l\" (UniqueName: \"kubernetes.io/projected/83ac26f4-7448-4395-8b03-7d1971b28975-kube-api-access-7tl6l\") pod \"redhat-marketplace-npb6d\" (UID: \"83ac26f4-7448-4395-8b03-7d1971b28975\") " pod="openshift-marketplace/redhat-marketplace-npb6d"
Jan 23 18:10:04 crc kubenswrapper[4606]: I0123 18:10:04.470654 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-npb6d"
Jan 23 18:10:05 crc kubenswrapper[4606]: I0123 18:10:05.018563 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-npb6d"]
Jan 23 18:10:05 crc kubenswrapper[4606]: I0123 18:10:05.054618 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npb6d" event={"ID":"83ac26f4-7448-4395-8b03-7d1971b28975","Type":"ContainerStarted","Data":"e62d92f8b9d3301a3c3f9c007a8987838b8619854f5ccaa2a1da2a09addcd3fb"}
Jan 23 18:10:06 crc kubenswrapper[4606]: I0123 18:10:06.063914 4606 generic.go:334] "Generic (PLEG): container finished" podID="83ac26f4-7448-4395-8b03-7d1971b28975" containerID="24e9139b5f4b6fc4fbf3f038a0fc9277853fa70aa38484f1aa0fcf8c90e38f61" exitCode=0
Jan 23 18:10:06 crc kubenswrapper[4606]: I0123 18:10:06.064028 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npb6d" event={"ID":"83ac26f4-7448-4395-8b03-7d1971b28975","Type":"ContainerDied","Data":"24e9139b5f4b6fc4fbf3f038a0fc9277853fa70aa38484f1aa0fcf8c90e38f61"}
Jan 23 18:10:08 crc kubenswrapper[4606]: I0123 18:10:08.086479 4606 generic.go:334] "Generic (PLEG): container finished" podID="83ac26f4-7448-4395-8b03-7d1971b28975" containerID="4bd161b5f24bce773a136f645d71ff1720fc064e54a46bc9955afbdb6afdcb3d" exitCode=0
Jan 23 18:10:08 crc kubenswrapper[4606]: I0123 18:10:08.086718 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npb6d" event={"ID":"83ac26f4-7448-4395-8b03-7d1971b28975","Type":"ContainerDied","Data":"4bd161b5f24bce773a136f645d71ff1720fc064e54a46bc9955afbdb6afdcb3d"}
Jan 23 18:10:09 crc kubenswrapper[4606]: I0123 18:10:09.100427 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npb6d" event={"ID":"83ac26f4-7448-4395-8b03-7d1971b28975","Type":"ContainerStarted","Data":"ebd5183f73b093dc1277b0b2fe3926fa6c526466be60a83514f667331b11474e"}
Jan 23 18:10:09 crc kubenswrapper[4606]: I0123 18:10:09.132732 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-npb6d" podStartSLOduration=2.454389443 podStartE2EDuration="5.132707051s" podCreationTimestamp="2026-01-23 18:10:04 +0000 UTC" firstStartedPulling="2026-01-23 18:10:06.065904593 +0000 UTC m=+4640.066096999" lastFinishedPulling="2026-01-23 18:10:08.744222201 +0000 UTC m=+4642.744414607" observedRunningTime="2026-01-23 18:10:09.119664771 +0000 UTC m=+4643.119857187" watchObservedRunningTime="2026-01-23 18:10:09.132707051 +0000 UTC m=+4643.132899457"
Jan 23 18:10:14 crc kubenswrapper[4606]: I0123 18:10:14.365741 4606 scope.go:117] "RemoveContainer" containerID="7d5ed2dce3f047dc64cc6f330d1b9ddcafe7e204485b5912e1b7d34cab031d5d"
Jan 23 18:10:14 crc kubenswrapper[4606]: E0123 18:10:14.366452 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9"
Jan 23 18:10:14 crc kubenswrapper[4606]: I0123 18:10:14.471091 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-npb6d"
Jan 23 18:10:14 crc kubenswrapper[4606]: I0123 18:10:14.471146 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-npb6d"
Jan 23 18:10:14 crc kubenswrapper[4606]: I0123 18:10:14.558035 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-npb6d"
Jan 23 18:10:15 crc kubenswrapper[4606]: I0123 18:10:15.201708 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-npb6d"
Jan 23 18:10:15 crc kubenswrapper[4606]: I0123 18:10:15.261376 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-npb6d"]
Jan 23 18:10:17 crc kubenswrapper[4606]: I0123 18:10:17.172769 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-npb6d" podUID="83ac26f4-7448-4395-8b03-7d1971b28975" containerName="registry-server" containerID="cri-o://ebd5183f73b093dc1277b0b2fe3926fa6c526466be60a83514f667331b11474e" gracePeriod=2
Jan 23 18:10:18 crc kubenswrapper[4606]: I0123 18:10:17.679524 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-npb6d"
Jan 23 18:10:18 crc kubenswrapper[4606]: I0123 18:10:17.729809 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tl6l\" (UniqueName: \"kubernetes.io/projected/83ac26f4-7448-4395-8b03-7d1971b28975-kube-api-access-7tl6l\") pod \"83ac26f4-7448-4395-8b03-7d1971b28975\" (UID: \"83ac26f4-7448-4395-8b03-7d1971b28975\") "
Jan 23 18:10:18 crc kubenswrapper[4606]: I0123 18:10:17.729881 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83ac26f4-7448-4395-8b03-7d1971b28975-catalog-content\") pod \"83ac26f4-7448-4395-8b03-7d1971b28975\" (UID: \"83ac26f4-7448-4395-8b03-7d1971b28975\") "
Jan 23 18:10:18 crc kubenswrapper[4606]: I0123 18:10:17.745319 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83ac26f4-7448-4395-8b03-7d1971b28975-kube-api-access-7tl6l" (OuterVolumeSpecName: "kube-api-access-7tl6l") pod "83ac26f4-7448-4395-8b03-7d1971b28975" (UID: "83ac26f4-7448-4395-8b03-7d1971b28975"). InnerVolumeSpecName "kube-api-access-7tl6l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:10:18 crc kubenswrapper[4606]: I0123 18:10:17.771242 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83ac26f4-7448-4395-8b03-7d1971b28975-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83ac26f4-7448-4395-8b03-7d1971b28975" (UID: "83ac26f4-7448-4395-8b03-7d1971b28975"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 18:10:18 crc kubenswrapper[4606]: I0123 18:10:17.833508 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83ac26f4-7448-4395-8b03-7d1971b28975-utilities\") pod \"83ac26f4-7448-4395-8b03-7d1971b28975\" (UID: \"83ac26f4-7448-4395-8b03-7d1971b28975\") "
Jan 23 18:10:18 crc kubenswrapper[4606]: I0123 18:10:17.834457 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83ac26f4-7448-4395-8b03-7d1971b28975-utilities" (OuterVolumeSpecName: "utilities") pod "83ac26f4-7448-4395-8b03-7d1971b28975" (UID: "83ac26f4-7448-4395-8b03-7d1971b28975"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 18:10:18 crc kubenswrapper[4606]: I0123 18:10:17.834860 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83ac26f4-7448-4395-8b03-7d1971b28975-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 18:10:18 crc kubenswrapper[4606]: I0123 18:10:17.834878 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tl6l\" (UniqueName: \"kubernetes.io/projected/83ac26f4-7448-4395-8b03-7d1971b28975-kube-api-access-7tl6l\") on node \"crc\" DevicePath \"\""
Jan 23 18:10:18 crc kubenswrapper[4606]: I0123 18:10:17.834891 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83ac26f4-7448-4395-8b03-7d1971b28975-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 18:10:18 crc kubenswrapper[4606]: I0123 18:10:18.188389 4606 generic.go:334] "Generic (PLEG): container finished" podID="83ac26f4-7448-4395-8b03-7d1971b28975" containerID="ebd5183f73b093dc1277b0b2fe3926fa6c526466be60a83514f667331b11474e" exitCode=0
Jan 23 18:10:18 crc kubenswrapper[4606]: I0123 18:10:18.188437 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npb6d" event={"ID":"83ac26f4-7448-4395-8b03-7d1971b28975","Type":"ContainerDied","Data":"ebd5183f73b093dc1277b0b2fe3926fa6c526466be60a83514f667331b11474e"}
Jan 23 18:10:18 crc kubenswrapper[4606]: I0123 18:10:18.188466 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npb6d" event={"ID":"83ac26f4-7448-4395-8b03-7d1971b28975","Type":"ContainerDied","Data":"e62d92f8b9d3301a3c3f9c007a8987838b8619854f5ccaa2a1da2a09addcd3fb"}
Jan 23 18:10:18 crc kubenswrapper[4606]: I0123 18:10:18.188486 4606 scope.go:117] "RemoveContainer" containerID="ebd5183f73b093dc1277b0b2fe3926fa6c526466be60a83514f667331b11474e"
Jan 23 18:10:18 crc kubenswrapper[4606]: I0123 18:10:18.188498 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-npb6d"
Jan 23 18:10:18 crc kubenswrapper[4606]: I0123 18:10:18.223161 4606 scope.go:117] "RemoveContainer" containerID="4bd161b5f24bce773a136f645d71ff1720fc064e54a46bc9955afbdb6afdcb3d"
Jan 23 18:10:18 crc kubenswrapper[4606]: I0123 18:10:18.251016 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-npb6d"]
Jan 23 18:10:18 crc kubenswrapper[4606]: I0123 18:10:18.266194 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-npb6d"]
Jan 23 18:10:18 crc kubenswrapper[4606]: I0123 18:10:18.268330 4606 scope.go:117] "RemoveContainer" containerID="24e9139b5f4b6fc4fbf3f038a0fc9277853fa70aa38484f1aa0fcf8c90e38f61"
Jan 23 18:10:18 crc kubenswrapper[4606]: I0123 18:10:18.312384 4606 scope.go:117] "RemoveContainer" containerID="ebd5183f73b093dc1277b0b2fe3926fa6c526466be60a83514f667331b11474e"
Jan 23 18:10:18 crc kubenswrapper[4606]: E0123 18:10:18.313253 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebd5183f73b093dc1277b0b2fe3926fa6c526466be60a83514f667331b11474e\": container with ID starting with ebd5183f73b093dc1277b0b2fe3926fa6c526466be60a83514f667331b11474e not found: ID does not exist" containerID="ebd5183f73b093dc1277b0b2fe3926fa6c526466be60a83514f667331b11474e"
Jan 23 18:10:18 crc kubenswrapper[4606]: I0123 18:10:18.313287 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebd5183f73b093dc1277b0b2fe3926fa6c526466be60a83514f667331b11474e"} err="failed to get container status \"ebd5183f73b093dc1277b0b2fe3926fa6c526466be60a83514f667331b11474e\": rpc error: code = NotFound desc = could not find container \"ebd5183f73b093dc1277b0b2fe3926fa6c526466be60a83514f667331b11474e\": container with ID starting with ebd5183f73b093dc1277b0b2fe3926fa6c526466be60a83514f667331b11474e not found: ID does not exist"
Jan 23 18:10:18 crc kubenswrapper[4606]: I0123 18:10:18.313308 4606 scope.go:117] "RemoveContainer" containerID="4bd161b5f24bce773a136f645d71ff1720fc064e54a46bc9955afbdb6afdcb3d"
Jan 23 18:10:18 crc kubenswrapper[4606]: E0123 18:10:18.314714 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bd161b5f24bce773a136f645d71ff1720fc064e54a46bc9955afbdb6afdcb3d\": container with ID starting with 4bd161b5f24bce773a136f645d71ff1720fc064e54a46bc9955afbdb6afdcb3d not found: ID does not exist" containerID="4bd161b5f24bce773a136f645d71ff1720fc064e54a46bc9955afbdb6afdcb3d"
Jan 23 18:10:18 crc kubenswrapper[4606]: I0123 18:10:18.314811 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bd161b5f24bce773a136f645d71ff1720fc064e54a46bc9955afbdb6afdcb3d"} err="failed to get container status \"4bd161b5f24bce773a136f645d71ff1720fc064e54a46bc9955afbdb6afdcb3d\": rpc error: code = NotFound desc = could not find container \"4bd161b5f24bce773a136f645d71ff1720fc064e54a46bc9955afbdb6afdcb3d\": container with ID starting with 4bd161b5f24bce773a136f645d71ff1720fc064e54a46bc9955afbdb6afdcb3d not found: ID does not exist"
Jan 23 18:10:18 crc kubenswrapper[4606]: I0123 18:10:18.314842 4606 scope.go:117] "RemoveContainer" containerID="24e9139b5f4b6fc4fbf3f038a0fc9277853fa70aa38484f1aa0fcf8c90e38f61"
Jan 23 18:10:18 crc kubenswrapper[4606]: E0123 18:10:18.315214 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24e9139b5f4b6fc4fbf3f038a0fc9277853fa70aa38484f1aa0fcf8c90e38f61\": container with ID starting with 24e9139b5f4b6fc4fbf3f038a0fc9277853fa70aa38484f1aa0fcf8c90e38f61 not found: ID does not exist" containerID="24e9139b5f4b6fc4fbf3f038a0fc9277853fa70aa38484f1aa0fcf8c90e38f61"
Jan 23 18:10:18 crc kubenswrapper[4606]: I0123 18:10:18.315252 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24e9139b5f4b6fc4fbf3f038a0fc9277853fa70aa38484f1aa0fcf8c90e38f61"} err="failed to get container status \"24e9139b5f4b6fc4fbf3f038a0fc9277853fa70aa38484f1aa0fcf8c90e38f61\": rpc error: code = NotFound desc = could not find container \"24e9139b5f4b6fc4fbf3f038a0fc9277853fa70aa38484f1aa0fcf8c90e38f61\": container with ID starting with 24e9139b5f4b6fc4fbf3f038a0fc9277853fa70aa38484f1aa0fcf8c90e38f61 not found: ID does not exist"
Jan 23 18:10:18 crc kubenswrapper[4606]: I0123 18:10:18.382236 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83ac26f4-7448-4395-8b03-7d1971b28975" path="/var/lib/kubelet/pods/83ac26f4-7448-4395-8b03-7d1971b28975/volumes"
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 18:11:19 crc kubenswrapper[4606]: I0123 18:11:19.365345 4606 scope.go:117] "RemoveContainer" containerID="7d5ed2dce3f047dc64cc6f330d1b9ddcafe7e204485b5912e1b7d34cab031d5d" Jan 23 18:11:19 crc kubenswrapper[4606]: E0123 18:11:19.366992 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 18:11:33 crc kubenswrapper[4606]: I0123 18:11:33.365734 4606 scope.go:117] "RemoveContainer" containerID="7d5ed2dce3f047dc64cc6f330d1b9ddcafe7e204485b5912e1b7d34cab031d5d" Jan 23 18:11:33 crc kubenswrapper[4606]: E0123 18:11:33.366493 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 18:11:44 crc kubenswrapper[4606]: I0123 18:11:44.366466 4606 scope.go:117] "RemoveContainer" containerID="7d5ed2dce3f047dc64cc6f330d1b9ddcafe7e204485b5912e1b7d34cab031d5d" Jan 23 18:11:44 crc kubenswrapper[4606]: E0123 18:11:44.367255 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 18:11:58 crc kubenswrapper[4606]: I0123 18:11:58.371046 4606 scope.go:117] "RemoveContainer" containerID="7d5ed2dce3f047dc64cc6f330d1b9ddcafe7e204485b5912e1b7d34cab031d5d" Jan 23 18:11:58 crc kubenswrapper[4606]: E0123 18:11:58.374689 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 18:12:10 crc kubenswrapper[4606]: I0123 18:12:10.366452 4606 scope.go:117] "RemoveContainer" containerID="7d5ed2dce3f047dc64cc6f330d1b9ddcafe7e204485b5912e1b7d34cab031d5d" Jan 23 18:12:10 crc kubenswrapper[4606]: E0123 18:12:10.368003 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 18:12:24 crc kubenswrapper[4606]: I0123 18:12:24.365856 4606 scope.go:117] "RemoveContainer" containerID="7d5ed2dce3f047dc64cc6f330d1b9ddcafe7e204485b5912e1b7d34cab031d5d" Jan 23 18:12:24 crc kubenswrapper[4606]: E0123 18:12:24.366903 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 18:12:35 crc kubenswrapper[4606]: I0123 18:12:35.366126 4606 scope.go:117] "RemoveContainer" containerID="7d5ed2dce3f047dc64cc6f330d1b9ddcafe7e204485b5912e1b7d34cab031d5d" Jan 23 18:12:35 crc kubenswrapper[4606]: E0123 18:12:35.366960 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 18:12:46 crc kubenswrapper[4606]: I0123 18:12:46.374708 4606 scope.go:117] "RemoveContainer" containerID="7d5ed2dce3f047dc64cc6f330d1b9ddcafe7e204485b5912e1b7d34cab031d5d" Jan 23 18:12:46 crc kubenswrapper[4606]: E0123 18:12:46.375130 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 18:12:59 crc kubenswrapper[4606]: I0123 18:12:59.365916 4606 scope.go:117] "RemoveContainer" containerID="7d5ed2dce3f047dc64cc6f330d1b9ddcafe7e204485b5912e1b7d34cab031d5d" Jan 23 18:12:59 crc kubenswrapper[4606]: E0123 18:12:59.367325 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 18:13:05 crc kubenswrapper[4606]: I0123 18:13:05.734846 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z8sbw"] Jan 23 18:13:05 crc kubenswrapper[4606]: E0123 18:13:05.735916 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83ac26f4-7448-4395-8b03-7d1971b28975" containerName="extract-content" Jan 23 18:13:05 crc kubenswrapper[4606]: I0123 18:13:05.736014 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ac26f4-7448-4395-8b03-7d1971b28975" containerName="extract-content" Jan 23 18:13:05 crc kubenswrapper[4606]: E0123 18:13:05.736029 4606 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83ac26f4-7448-4395-8b03-7d1971b28975" containerName="extract-utilities" Jan 23 18:13:05 crc kubenswrapper[4606]: I0123 18:13:05.736038 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ac26f4-7448-4395-8b03-7d1971b28975" containerName="extract-utilities" Jan 23 18:13:05 crc kubenswrapper[4606]: E0123 18:13:05.736068 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83ac26f4-7448-4395-8b03-7d1971b28975" containerName="registry-server" Jan 23 18:13:05 crc kubenswrapper[4606]: I0123 18:13:05.736076 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ac26f4-7448-4395-8b03-7d1971b28975" containerName="registry-server" Jan 23 18:13:05 crc kubenswrapper[4606]: I0123 18:13:05.736312 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="83ac26f4-7448-4395-8b03-7d1971b28975" containerName="registry-server" Jan 23 18:13:05 crc kubenswrapper[4606]: I0123 18:13:05.738181 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z8sbw" Jan 23 18:13:05 crc kubenswrapper[4606]: I0123 18:13:05.758200 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z8sbw"] Jan 23 18:13:05 crc kubenswrapper[4606]: I0123 18:13:05.891902 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de7f0b2d-e14d-4020-8786-48ded9ca45be-catalog-content\") pod \"community-operators-z8sbw\" (UID: \"de7f0b2d-e14d-4020-8786-48ded9ca45be\") " pod="openshift-marketplace/community-operators-z8sbw" Jan 23 18:13:05 crc kubenswrapper[4606]: I0123 18:13:05.892328 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de7f0b2d-e14d-4020-8786-48ded9ca45be-utilities\") pod \"community-operators-z8sbw\" (UID: \"de7f0b2d-e14d-4020-8786-48ded9ca45be\") " pod="openshift-marketplace/community-operators-z8sbw" Jan 23 18:13:05 crc kubenswrapper[4606]: I0123 18:13:05.892429 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4fzj\" (UniqueName: \"kubernetes.io/projected/de7f0b2d-e14d-4020-8786-48ded9ca45be-kube-api-access-n4fzj\") pod \"community-operators-z8sbw\" (UID: \"de7f0b2d-e14d-4020-8786-48ded9ca45be\") " pod="openshift-marketplace/community-operators-z8sbw" Jan 23 18:13:05 crc kubenswrapper[4606]: I0123 18:13:05.994838 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de7f0b2d-e14d-4020-8786-48ded9ca45be-utilities\") pod \"community-operators-z8sbw\" (UID: \"de7f0b2d-e14d-4020-8786-48ded9ca45be\") " pod="openshift-marketplace/community-operators-z8sbw" Jan 23 18:13:05 crc kubenswrapper[4606]: I0123 18:13:05.994916 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4fzj\" (UniqueName: \"kubernetes.io/projected/de7f0b2d-e14d-4020-8786-48ded9ca45be-kube-api-access-n4fzj\") pod \"community-operators-z8sbw\" (UID: \"de7f0b2d-e14d-4020-8786-48ded9ca45be\") " pod="openshift-marketplace/community-operators-z8sbw" Jan 23 18:13:05 crc kubenswrapper[4606]: I0123 18:13:05.995010 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/de7f0b2d-e14d-4020-8786-48ded9ca45be-catalog-content\") pod \"community-operators-z8sbw\" (UID: \"de7f0b2d-e14d-4020-8786-48ded9ca45be\") " pod="openshift-marketplace/community-operators-z8sbw" Jan 23 18:13:05 crc kubenswrapper[4606]: I0123 18:13:05.995367 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de7f0b2d-e14d-4020-8786-48ded9ca45be-utilities\") pod \"community-operators-z8sbw\" (UID: \"de7f0b2d-e14d-4020-8786-48ded9ca45be\") " pod="openshift-marketplace/community-operators-z8sbw" Jan 23 18:13:05 crc kubenswrapper[4606]: I0123 18:13:05.995394 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de7f0b2d-e14d-4020-8786-48ded9ca45be-catalog-content\") pod \"community-operators-z8sbw\" (UID: \"de7f0b2d-e14d-4020-8786-48ded9ca45be\") " pod="openshift-marketplace/community-operators-z8sbw" Jan 23 18:13:06 crc kubenswrapper[4606]: I0123 18:13:06.015648 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4fzj\" (UniqueName: \"kubernetes.io/projected/de7f0b2d-e14d-4020-8786-48ded9ca45be-kube-api-access-n4fzj\") pod \"community-operators-z8sbw\" (UID: \"de7f0b2d-e14d-4020-8786-48ded9ca45be\") " pod="openshift-marketplace/community-operators-z8sbw" Jan 23 18:13:06 crc kubenswrapper[4606]: I0123 18:13:06.057094 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z8sbw" Jan 23 18:13:06 crc kubenswrapper[4606]: I0123 18:13:06.742242 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z8sbw"] Jan 23 18:13:06 crc kubenswrapper[4606]: W0123 18:13:06.760102 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde7f0b2d_e14d_4020_8786_48ded9ca45be.slice/crio-b63e91f5662ace859423fb2a0a7d519c095c96145a20fd68597d14a8a2ef17f3 WatchSource:0}: Error finding container b63e91f5662ace859423fb2a0a7d519c095c96145a20fd68597d14a8a2ef17f3: Status 404 returned error can't find the container with id b63e91f5662ace859423fb2a0a7d519c095c96145a20fd68597d14a8a2ef17f3 Jan 23 18:13:07 crc kubenswrapper[4606]: I0123 18:13:07.751775 4606 generic.go:334] "Generic (PLEG): container finished" podID="de7f0b2d-e14d-4020-8786-48ded9ca45be" containerID="d3bb11e893021534623e48922c1d46a8989bebdade307007b925797bbb87c928" exitCode=0 Jan 23 18:13:07 crc kubenswrapper[4606]: I0123 18:13:07.752048 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z8sbw" event={"ID":"de7f0b2d-e14d-4020-8786-48ded9ca45be","Type":"ContainerDied","Data":"d3bb11e893021534623e48922c1d46a8989bebdade307007b925797bbb87c928"} Jan 23 18:13:07 crc kubenswrapper[4606]: I0123 18:13:07.752078 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z8sbw" event={"ID":"de7f0b2d-e14d-4020-8786-48ded9ca45be","Type":"ContainerStarted","Data":"b63e91f5662ace859423fb2a0a7d519c095c96145a20fd68597d14a8a2ef17f3"} Jan 23 18:13:07 crc kubenswrapper[4606]: I0123 18:13:07.754206 4606 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 18:13:08 crc kubenswrapper[4606]: I0123 18:13:08.765803 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z8sbw" 
event={"ID":"de7f0b2d-e14d-4020-8786-48ded9ca45be","Type":"ContainerStarted","Data":"55f086e13fdc898fd618418b3c3ad3b8075aefe18fba16f3aa35314e678a413f"} Jan 23 18:13:09 crc kubenswrapper[4606]: I0123 18:13:09.776268 4606 generic.go:334] "Generic (PLEG): container finished" podID="de7f0b2d-e14d-4020-8786-48ded9ca45be" containerID="55f086e13fdc898fd618418b3c3ad3b8075aefe18fba16f3aa35314e678a413f" exitCode=0 Jan 23 18:13:09 crc kubenswrapper[4606]: I0123 18:13:09.776332 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z8sbw" event={"ID":"de7f0b2d-e14d-4020-8786-48ded9ca45be","Type":"ContainerDied","Data":"55f086e13fdc898fd618418b3c3ad3b8075aefe18fba16f3aa35314e678a413f"} Jan 23 18:13:14 crc kubenswrapper[4606]: I0123 18:13:14.365567 4606 scope.go:117] "RemoveContainer" containerID="7d5ed2dce3f047dc64cc6f330d1b9ddcafe7e204485b5912e1b7d34cab031d5d" Jan 23 18:13:14 crc kubenswrapper[4606]: E0123 18:13:14.366354 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 18:13:16 crc kubenswrapper[4606]: I0123 18:13:16.850512 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z8sbw" event={"ID":"de7f0b2d-e14d-4020-8786-48ded9ca45be","Type":"ContainerStarted","Data":"325894048ee9419c240a1986dbb0b4464ddc17e5a8db6ee6b281c05f8fcf3b2e"} Jan 23 18:13:16 crc kubenswrapper[4606]: I0123 18:13:16.879968 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z8sbw" podStartSLOduration=3.891158542 podStartE2EDuration="11.879942012s" podCreationTimestamp="2026-01-23 18:13:05 +0000 UTC" firstStartedPulling="2026-01-23 18:13:07.753959636 +0000 UTC m=+4821.754152042" lastFinishedPulling="2026-01-23 18:13:15.742743106 +0000 UTC m=+4829.742935512" observedRunningTime="2026-01-23 18:13:16.874014747 +0000 UTC m=+4830.874207173" watchObservedRunningTime="2026-01-23 18:13:16.879942012 +0000 UTC m=+4830.880134418" Jan 23 18:13:26 crc kubenswrapper[4606]: I0123 18:13:26.058198 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z8sbw" Jan 23 18:13:26 crc kubenswrapper[4606]: I0123 18:13:26.059922 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z8sbw" Jan 23 18:13:26 crc kubenswrapper[4606]: I0123 18:13:26.110899 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z8sbw" Jan 23 18:13:27 crc kubenswrapper[4606]: I0123 18:13:27.004727 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z8sbw" Jan 23 18:13:27 crc kubenswrapper[4606]: I0123 18:13:27.060564 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z8sbw"] Jan 23 18:13:27 crc kubenswrapper[4606]: I0123 18:13:27.367773 4606 scope.go:117] "RemoveContainer" containerID="7d5ed2dce3f047dc64cc6f330d1b9ddcafe7e204485b5912e1b7d34cab031d5d" Jan 23 18:13:27 crc kubenswrapper[4606]: E0123 18:13:27.368279 4606 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 18:13:28 crc kubenswrapper[4606]: I0123 18:13:28.971503 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z8sbw" podUID="de7f0b2d-e14d-4020-8786-48ded9ca45be" containerName="registry-server" containerID="cri-o://325894048ee9419c240a1986dbb0b4464ddc17e5a8db6ee6b281c05f8fcf3b2e" gracePeriod=2 Jan 23 18:13:29 crc kubenswrapper[4606]: I0123 18:13:29.489601 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z8sbw" Jan 23 18:13:29 crc kubenswrapper[4606]: I0123 18:13:29.554835 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de7f0b2d-e14d-4020-8786-48ded9ca45be-utilities\") pod \"de7f0b2d-e14d-4020-8786-48ded9ca45be\" (UID: \"de7f0b2d-e14d-4020-8786-48ded9ca45be\") " Jan 23 18:13:29 crc kubenswrapper[4606]: I0123 18:13:29.555059 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de7f0b2d-e14d-4020-8786-48ded9ca45be-catalog-content\") pod \"de7f0b2d-e14d-4020-8786-48ded9ca45be\" (UID: \"de7f0b2d-e14d-4020-8786-48ded9ca45be\") " Jan 23 18:13:29 crc kubenswrapper[4606]: I0123 18:13:29.555089 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4fzj\" (UniqueName: \"kubernetes.io/projected/de7f0b2d-e14d-4020-8786-48ded9ca45be-kube-api-access-n4fzj\") pod \"de7f0b2d-e14d-4020-8786-48ded9ca45be\" (UID: \"de7f0b2d-e14d-4020-8786-48ded9ca45be\") " Jan 23 18:13:29 crc kubenswrapper[4606]: I0123 18:13:29.555564 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de7f0b2d-e14d-4020-8786-48ded9ca45be-utilities" (OuterVolumeSpecName: "utilities") pod "de7f0b2d-e14d-4020-8786-48ded9ca45be" (UID: "de7f0b2d-e14d-4020-8786-48ded9ca45be"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:13:29 crc kubenswrapper[4606]: I0123 18:13:29.562767 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de7f0b2d-e14d-4020-8786-48ded9ca45be-kube-api-access-n4fzj" (OuterVolumeSpecName: "kube-api-access-n4fzj") pod "de7f0b2d-e14d-4020-8786-48ded9ca45be" (UID: "de7f0b2d-e14d-4020-8786-48ded9ca45be"). InnerVolumeSpecName "kube-api-access-n4fzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:13:29 crc kubenswrapper[4606]: I0123 18:13:29.615441 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de7f0b2d-e14d-4020-8786-48ded9ca45be-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de7f0b2d-e14d-4020-8786-48ded9ca45be" (UID: "de7f0b2d-e14d-4020-8786-48ded9ca45be"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:13:29 crc kubenswrapper[4606]: I0123 18:13:29.657393 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de7f0b2d-e14d-4020-8786-48ded9ca45be-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:13:29 crc kubenswrapper[4606]: I0123 18:13:29.657423 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de7f0b2d-e14d-4020-8786-48ded9ca45be-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:13:29 crc kubenswrapper[4606]: I0123 18:13:29.657435 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4fzj\" (UniqueName: \"kubernetes.io/projected/de7f0b2d-e14d-4020-8786-48ded9ca45be-kube-api-access-n4fzj\") on node \"crc\" DevicePath \"\"" Jan 23 18:13:29 crc kubenswrapper[4606]: I0123 18:13:29.983388 4606 generic.go:334] "Generic (PLEG): container finished" podID="de7f0b2d-e14d-4020-8786-48ded9ca45be" containerID="325894048ee9419c240a1986dbb0b4464ddc17e5a8db6ee6b281c05f8fcf3b2e" exitCode=0 Jan 23 18:13:29 crc kubenswrapper[4606]: I0123 18:13:29.983452 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z8sbw" Jan 23 18:13:29 crc kubenswrapper[4606]: I0123 18:13:29.983471 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z8sbw" event={"ID":"de7f0b2d-e14d-4020-8786-48ded9ca45be","Type":"ContainerDied","Data":"325894048ee9419c240a1986dbb0b4464ddc17e5a8db6ee6b281c05f8fcf3b2e"} Jan 23 18:13:29 crc kubenswrapper[4606]: I0123 18:13:29.984450 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z8sbw" event={"ID":"de7f0b2d-e14d-4020-8786-48ded9ca45be","Type":"ContainerDied","Data":"b63e91f5662ace859423fb2a0a7d519c095c96145a20fd68597d14a8a2ef17f3"} Jan 23 18:13:29 crc kubenswrapper[4606]: I0123 18:13:29.984490 4606 scope.go:117] "RemoveContainer" containerID="325894048ee9419c240a1986dbb0b4464ddc17e5a8db6ee6b281c05f8fcf3b2e" Jan 23 18:13:30 crc kubenswrapper[4606]: I0123 18:13:30.016307 4606 scope.go:117] "RemoveContainer" containerID="55f086e13fdc898fd618418b3c3ad3b8075aefe18fba16f3aa35314e678a413f" Jan 23 18:13:30 crc kubenswrapper[4606]: I0123 18:13:30.040920 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z8sbw"] Jan 23 18:13:30 crc kubenswrapper[4606]: I0123 18:13:30.047871 4606 scope.go:117] "RemoveContainer" containerID="d3bb11e893021534623e48922c1d46a8989bebdade307007b925797bbb87c928" Jan 23 18:13:30 crc kubenswrapper[4606]: I0123 18:13:30.050942 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z8sbw"] Jan 23 18:13:30 crc kubenswrapper[4606]: I0123 18:13:30.091070 4606 scope.go:117] "RemoveContainer" containerID="325894048ee9419c240a1986dbb0b4464ddc17e5a8db6ee6b281c05f8fcf3b2e" Jan 23 18:13:30 crc kubenswrapper[4606]: E0123 18:13:30.091466 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"325894048ee9419c240a1986dbb0b4464ddc17e5a8db6ee6b281c05f8fcf3b2e\": container with ID starting with 325894048ee9419c240a1986dbb0b4464ddc17e5a8db6ee6b281c05f8fcf3b2e not found: ID does not exist" containerID="325894048ee9419c240a1986dbb0b4464ddc17e5a8db6ee6b281c05f8fcf3b2e" Jan 23 18:13:30 crc kubenswrapper[4606]: I0123 18:13:30.091507 
4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"325894048ee9419c240a1986dbb0b4464ddc17e5a8db6ee6b281c05f8fcf3b2e"} err="failed to get container status \"325894048ee9419c240a1986dbb0b4464ddc17e5a8db6ee6b281c05f8fcf3b2e\": rpc error: code = NotFound desc = could not find container \"325894048ee9419c240a1986dbb0b4464ddc17e5a8db6ee6b281c05f8fcf3b2e\": container with ID starting with 325894048ee9419c240a1986dbb0b4464ddc17e5a8db6ee6b281c05f8fcf3b2e not found: ID does not exist" Jan 23 18:13:30 crc kubenswrapper[4606]: I0123 18:13:30.091537 4606 scope.go:117] "RemoveContainer" containerID="55f086e13fdc898fd618418b3c3ad3b8075aefe18fba16f3aa35314e678a413f" Jan 23 18:13:30 crc kubenswrapper[4606]: E0123 18:13:30.092056 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55f086e13fdc898fd618418b3c3ad3b8075aefe18fba16f3aa35314e678a413f\": container with ID starting with 55f086e13fdc898fd618418b3c3ad3b8075aefe18fba16f3aa35314e678a413f not found: ID does not exist" containerID="55f086e13fdc898fd618418b3c3ad3b8075aefe18fba16f3aa35314e678a413f" Jan 23 18:13:30 crc kubenswrapper[4606]: I0123 18:13:30.092081 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55f086e13fdc898fd618418b3c3ad3b8075aefe18fba16f3aa35314e678a413f"} err="failed to get container status \"55f086e13fdc898fd618418b3c3ad3b8075aefe18fba16f3aa35314e678a413f\": rpc error: code = NotFound desc = could not find container \"55f086e13fdc898fd618418b3c3ad3b8075aefe18fba16f3aa35314e678a413f\": container with ID starting with 55f086e13fdc898fd618418b3c3ad3b8075aefe18fba16f3aa35314e678a413f not found: ID does not exist" Jan 23 18:13:30 crc kubenswrapper[4606]: I0123 18:13:30.092119 4606 scope.go:117] "RemoveContainer" containerID="d3bb11e893021534623e48922c1d46a8989bebdade307007b925797bbb87c928" Jan 23 18:13:30 crc kubenswrapper[4606]: E0123 18:13:30.092533 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3bb11e893021534623e48922c1d46a8989bebdade307007b925797bbb87c928\": container with ID starting with d3bb11e893021534623e48922c1d46a8989bebdade307007b925797bbb87c928 not found: ID does not exist" containerID="d3bb11e893021534623e48922c1d46a8989bebdade307007b925797bbb87c928" Jan 23 18:13:30 crc kubenswrapper[4606]: I0123 18:13:30.092563 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3bb11e893021534623e48922c1d46a8989bebdade307007b925797bbb87c928"} err="failed to get container status \"d3bb11e893021534623e48922c1d46a8989bebdade307007b925797bbb87c928\": rpc error: code = NotFound desc = could not find container \"d3bb11e893021534623e48922c1d46a8989bebdade307007b925797bbb87c928\": container with ID starting with d3bb11e893021534623e48922c1d46a8989bebdade307007b925797bbb87c928 not found: ID does not exist" Jan 23 18:13:30 crc kubenswrapper[4606]: I0123 18:13:30.378263 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de7f0b2d-e14d-4020-8786-48ded9ca45be" path="/var/lib/kubelet/pods/de7f0b2d-e14d-4020-8786-48ded9ca45be/volumes" Jan 23 18:13:40 crc kubenswrapper[4606]: I0123 18:13:40.366011 4606 scope.go:117] "RemoveContainer" containerID="7d5ed2dce3f047dc64cc6f330d1b9ddcafe7e204485b5912e1b7d34cab031d5d" Jan 23 18:13:40 crc kubenswrapper[4606]: E0123 18:13:40.366777 4606 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 18:13:54 crc kubenswrapper[4606]: I0123 18:13:54.365942 4606 scope.go:117] "RemoveContainer" containerID="7d5ed2dce3f047dc64cc6f330d1b9ddcafe7e204485b5912e1b7d34cab031d5d" Jan 23 18:13:54 crc kubenswrapper[4606]: E0123 18:13:54.366945 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 18:14:05 crc kubenswrapper[4606]: I0123 18:14:05.366276 4606 scope.go:117] "RemoveContainer" containerID="7d5ed2dce3f047dc64cc6f330d1b9ddcafe7e204485b5912e1b7d34cab031d5d" Jan 23 18:14:06 crc kubenswrapper[4606]: I0123 18:14:06.296130 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerStarted","Data":"3e20276909d063d72910761ea3cff4cdf7109a1b4f91e48afefdf67b8397aeae"} Jan 23 18:14:27 crc kubenswrapper[4606]: I0123 18:14:27.730632 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wgh26"] Jan 23 18:14:27 crc kubenswrapper[4606]: E0123 18:14:27.732169 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de7f0b2d-e14d-4020-8786-48ded9ca45be" containerName="extract-content" Jan 23 18:14:27 crc kubenswrapper[4606]: I0123 18:14:27.732188 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="de7f0b2d-e14d-4020-8786-48ded9ca45be" containerName="extract-content" Jan 23 18:14:27 crc kubenswrapper[4606]: E0123 18:14:27.732209 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de7f0b2d-e14d-4020-8786-48ded9ca45be" containerName="extract-utilities" Jan 23 18:14:27 crc kubenswrapper[4606]: I0123 18:14:27.732217 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="de7f0b2d-e14d-4020-8786-48ded9ca45be" containerName="extract-utilities" Jan 23 18:14:27 crc kubenswrapper[4606]: E0123 18:14:27.732243 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de7f0b2d-e14d-4020-8786-48ded9ca45be" containerName="registry-server" Jan 23 18:14:27 crc kubenswrapper[4606]: I0123 18:14:27.732249 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="de7f0b2d-e14d-4020-8786-48ded9ca45be" containerName="registry-server" Jan 23 18:14:27 crc kubenswrapper[4606]: I0123 18:14:27.732468 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="de7f0b2d-e14d-4020-8786-48ded9ca45be" containerName="registry-server" Jan 23 18:14:27 crc kubenswrapper[4606]: I0123 18:14:27.734273 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wgh26" Jan 23 18:14:27 crc kubenswrapper[4606]: I0123 18:14:27.747715 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wgh26"] Jan 23 18:14:27 crc kubenswrapper[4606]: I0123 18:14:27.910203 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2nfl\" (UniqueName: \"kubernetes.io/projected/9faa3d60-4d31-4990-822b-0cb1b4b28f71-kube-api-access-z2nfl\") pod \"redhat-operators-wgh26\" (UID: \"9faa3d60-4d31-4990-822b-0cb1b4b28f71\") " pod="openshift-marketplace/redhat-operators-wgh26" Jan 23 18:14:27 crc kubenswrapper[4606]: I0123 18:14:27.910274 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9faa3d60-4d31-4990-822b-0cb1b4b28f71-catalog-content\") pod \"redhat-operators-wgh26\" (UID: \"9faa3d60-4d31-4990-822b-0cb1b4b28f71\") " pod="openshift-marketplace/redhat-operators-wgh26" Jan 23 18:14:27 crc kubenswrapper[4606]: I0123 18:14:27.910419 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9faa3d60-4d31-4990-822b-0cb1b4b28f71-utilities\") pod \"redhat-operators-wgh26\" (UID: \"9faa3d60-4d31-4990-822b-0cb1b4b28f71\") " pod="openshift-marketplace/redhat-operators-wgh26" Jan 23 18:14:28 crc kubenswrapper[4606]: I0123 18:14:28.012458 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9faa3d60-4d31-4990-822b-0cb1b4b28f71-utilities\") pod \"redhat-operators-wgh26\" (UID: \"9faa3d60-4d31-4990-822b-0cb1b4b28f71\") " pod="openshift-marketplace/redhat-operators-wgh26" Jan 23 18:14:28 crc kubenswrapper[4606]: I0123 18:14:28.012558 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2nfl\" (UniqueName: \"kubernetes.io/projected/9faa3d60-4d31-4990-822b-0cb1b4b28f71-kube-api-access-z2nfl\") pod \"redhat-operators-wgh26\" (UID: \"9faa3d60-4d31-4990-822b-0cb1b4b28f71\") " pod="openshift-marketplace/redhat-operators-wgh26" Jan 23 18:14:28 crc kubenswrapper[4606]: I0123 18:14:28.012598 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9faa3d60-4d31-4990-822b-0cb1b4b28f71-catalog-content\") pod \"redhat-operators-wgh26\" (UID: \"9faa3d60-4d31-4990-822b-0cb1b4b28f71\") " pod="openshift-marketplace/redhat-operators-wgh26" Jan 23 18:14:28 crc kubenswrapper[4606]: I0123 18:14:28.013086 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9faa3d60-4d31-4990-822b-0cb1b4b28f71-catalog-content\") pod \"redhat-operators-wgh26\" (UID: \"9faa3d60-4d31-4990-822b-0cb1b4b28f71\") " pod="openshift-marketplace/redhat-operators-wgh26" Jan 23 18:14:28 crc kubenswrapper[4606]: I0123 18:14:28.013193 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9faa3d60-4d31-4990-822b-0cb1b4b28f71-utilities\") pod \"redhat-operators-wgh26\" (UID: \"9faa3d60-4d31-4990-822b-0cb1b4b28f71\") " pod="openshift-marketplace/redhat-operators-wgh26" Jan 23 18:14:28 crc kubenswrapper[4606]: I0123 18:14:28.035409 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-z2nfl\" (UniqueName: \"kubernetes.io/projected/9faa3d60-4d31-4990-822b-0cb1b4b28f71-kube-api-access-z2nfl\") pod \"redhat-operators-wgh26\" (UID: \"9faa3d60-4d31-4990-822b-0cb1b4b28f71\") " pod="openshift-marketplace/redhat-operators-wgh26" Jan 23 18:14:28 crc kubenswrapper[4606]: I0123 18:14:28.061938 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wgh26" Jan 23 18:14:28 crc kubenswrapper[4606]: I0123 18:14:28.682119 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wgh26"] Jan 23 18:14:29 crc kubenswrapper[4606]: I0123 18:14:29.507992 4606 generic.go:334] "Generic (PLEG): container finished" podID="9faa3d60-4d31-4990-822b-0cb1b4b28f71" containerID="9a059efad60f1efab48f20365c085fe6349f987cb8da4c25cabc4147cad51e58" exitCode=0 Jan 23 18:14:29 crc kubenswrapper[4606]: I0123 18:14:29.508100 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wgh26" event={"ID":"9faa3d60-4d31-4990-822b-0cb1b4b28f71","Type":"ContainerDied","Data":"9a059efad60f1efab48f20365c085fe6349f987cb8da4c25cabc4147cad51e58"} Jan 23 18:14:29 crc kubenswrapper[4606]: I0123 18:14:29.508313 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wgh26" event={"ID":"9faa3d60-4d31-4990-822b-0cb1b4b28f71","Type":"ContainerStarted","Data":"38f37d8029455dcdfd3aadfeb4c281908cf5eea41b0da3a25b4ae42cc0471ee5"} Jan 23 18:14:31 crc kubenswrapper[4606]: I0123 18:14:31.528037 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wgh26" event={"ID":"9faa3d60-4d31-4990-822b-0cb1b4b28f71","Type":"ContainerStarted","Data":"4fa3580364fbdbf031809694a523e2593ff364c0d8a9a0089f3f6efa3a0de135"} Jan 23 18:14:35 crc kubenswrapper[4606]: I0123 18:14:35.564881 4606 generic.go:334] "Generic (PLEG): container finished" podID="9faa3d60-4d31-4990-822b-0cb1b4b28f71" containerID="4fa3580364fbdbf031809694a523e2593ff364c0d8a9a0089f3f6efa3a0de135" exitCode=0 Jan 23 18:14:35 crc kubenswrapper[4606]: I0123 18:14:35.564987 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wgh26" event={"ID":"9faa3d60-4d31-4990-822b-0cb1b4b28f71","Type":"ContainerDied","Data":"4fa3580364fbdbf031809694a523e2593ff364c0d8a9a0089f3f6efa3a0de135"} Jan 23 18:14:41 crc kubenswrapper[4606]: I0123 18:14:41.629898 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wgh26" event={"ID":"9faa3d60-4d31-4990-822b-0cb1b4b28f71","Type":"ContainerStarted","Data":"bcd3c405a2a8070cb49c770e8ca03209c0e54440104c64e664d0df9ae63ed5f8"} Jan 23 18:14:41 crc kubenswrapper[4606]: I0123 18:14:41.661020 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wgh26" podStartSLOduration=3.338535728 podStartE2EDuration="14.660999127s" podCreationTimestamp="2026-01-23 18:14:27 +0000 UTC" firstStartedPulling="2026-01-23 18:14:29.510221972 +0000 UTC m=+4903.510414378" lastFinishedPulling="2026-01-23 18:14:40.832685371 +0000 UTC m=+4914.832877777" observedRunningTime="2026-01-23 18:14:41.651072994 +0000 UTC m=+4915.651265420" watchObservedRunningTime="2026-01-23 18:14:41.660999127 +0000 UTC m=+4915.661191533" Jan 23 18:14:48 crc kubenswrapper[4606]: I0123 18:14:48.062669 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wgh26" Jan 23 
18:14:48 crc kubenswrapper[4606]: I0123 18:14:48.063209 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wgh26" Jan 23 18:14:48 crc kubenswrapper[4606]: I0123 18:14:48.123773 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wgh26" Jan 23 18:14:48 crc kubenswrapper[4606]: I0123 18:14:48.765358 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wgh26" Jan 23 18:14:48 crc kubenswrapper[4606]: I0123 18:14:48.862046 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wgh26"] Jan 23 18:14:50 crc kubenswrapper[4606]: I0123 18:14:50.709846 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wgh26" podUID="9faa3d60-4d31-4990-822b-0cb1b4b28f71" containerName="registry-server" containerID="cri-o://bcd3c405a2a8070cb49c770e8ca03209c0e54440104c64e664d0df9ae63ed5f8" gracePeriod=2 Jan 23 18:14:51 crc kubenswrapper[4606]: I0123 18:14:51.220213 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wgh26" Jan 23 18:14:51 crc kubenswrapper[4606]: I0123 18:14:51.320693 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2nfl\" (UniqueName: \"kubernetes.io/projected/9faa3d60-4d31-4990-822b-0cb1b4b28f71-kube-api-access-z2nfl\") pod \"9faa3d60-4d31-4990-822b-0cb1b4b28f71\" (UID: \"9faa3d60-4d31-4990-822b-0cb1b4b28f71\") " Jan 23 18:14:51 crc kubenswrapper[4606]: I0123 18:14:51.321643 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9faa3d60-4d31-4990-822b-0cb1b4b28f71-catalog-content\") pod \"9faa3d60-4d31-4990-822b-0cb1b4b28f71\" (UID: \"9faa3d60-4d31-4990-822b-0cb1b4b28f71\") " Jan 23 18:14:51 crc kubenswrapper[4606]: I0123 18:14:51.321683 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9faa3d60-4d31-4990-822b-0cb1b4b28f71-utilities\") pod \"9faa3d60-4d31-4990-822b-0cb1b4b28f71\" (UID: \"9faa3d60-4d31-4990-822b-0cb1b4b28f71\") " Jan 23 18:14:51 crc kubenswrapper[4606]: I0123 18:14:51.322379 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9faa3d60-4d31-4990-822b-0cb1b4b28f71-utilities" (OuterVolumeSpecName: "utilities") pod "9faa3d60-4d31-4990-822b-0cb1b4b28f71" (UID: "9faa3d60-4d31-4990-822b-0cb1b4b28f71"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:14:51 crc kubenswrapper[4606]: I0123 18:14:51.341003 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9faa3d60-4d31-4990-822b-0cb1b4b28f71-kube-api-access-z2nfl" (OuterVolumeSpecName: "kube-api-access-z2nfl") pod "9faa3d60-4d31-4990-822b-0cb1b4b28f71" (UID: "9faa3d60-4d31-4990-822b-0cb1b4b28f71"). InnerVolumeSpecName "kube-api-access-z2nfl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:14:51 crc kubenswrapper[4606]: I0123 18:14:51.424437 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2nfl\" (UniqueName: \"kubernetes.io/projected/9faa3d60-4d31-4990-822b-0cb1b4b28f71-kube-api-access-z2nfl\") on node \"crc\" DevicePath \"\"" Jan 23 18:14:51 crc kubenswrapper[4606]: I0123 18:14:51.424477 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9faa3d60-4d31-4990-822b-0cb1b4b28f71-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:14:51 crc kubenswrapper[4606]: I0123 18:14:51.441619 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9faa3d60-4d31-4990-822b-0cb1b4b28f71-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9faa3d60-4d31-4990-822b-0cb1b4b28f71" (UID: "9faa3d60-4d31-4990-822b-0cb1b4b28f71"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:14:51 crc kubenswrapper[4606]: I0123 18:14:51.525949 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9faa3d60-4d31-4990-822b-0cb1b4b28f71-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:14:51 crc kubenswrapper[4606]: I0123 18:14:51.721104 4606 generic.go:334] "Generic (PLEG): container finished" podID="9faa3d60-4d31-4990-822b-0cb1b4b28f71" containerID="bcd3c405a2a8070cb49c770e8ca03209c0e54440104c64e664d0df9ae63ed5f8" exitCode=0 Jan 23 18:14:51 crc kubenswrapper[4606]: I0123 18:14:51.721184 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wgh26" event={"ID":"9faa3d60-4d31-4990-822b-0cb1b4b28f71","Type":"ContainerDied","Data":"bcd3c405a2a8070cb49c770e8ca03209c0e54440104c64e664d0df9ae63ed5f8"} Jan 23 18:14:51 crc kubenswrapper[4606]: I0123 18:14:51.721210 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wgh26" event={"ID":"9faa3d60-4d31-4990-822b-0cb1b4b28f71","Type":"ContainerDied","Data":"38f37d8029455dcdfd3aadfeb4c281908cf5eea41b0da3a25b4ae42cc0471ee5"} Jan 23 18:14:51 crc kubenswrapper[4606]: I0123 18:14:51.721227 4606 scope.go:117] "RemoveContainer" containerID="bcd3c405a2a8070cb49c770e8ca03209c0e54440104c64e664d0df9ae63ed5f8" Jan 23 18:14:51 crc kubenswrapper[4606]: I0123 18:14:51.721228 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wgh26" Jan 23 18:14:51 crc kubenswrapper[4606]: I0123 18:14:51.742885 4606 scope.go:117] "RemoveContainer" containerID="4fa3580364fbdbf031809694a523e2593ff364c0d8a9a0089f3f6efa3a0de135" Jan 23 18:14:51 crc kubenswrapper[4606]: I0123 18:14:51.757644 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wgh26"] Jan 23 18:14:51 crc kubenswrapper[4606]: I0123 18:14:51.766891 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wgh26"] Jan 23 18:14:51 crc kubenswrapper[4606]: I0123 18:14:51.805761 4606 scope.go:117] "RemoveContainer" containerID="9a059efad60f1efab48f20365c085fe6349f987cb8da4c25cabc4147cad51e58" Jan 23 18:14:51 crc kubenswrapper[4606]: I0123 18:14:51.832278 4606 scope.go:117] "RemoveContainer" containerID="bcd3c405a2a8070cb49c770e8ca03209c0e54440104c64e664d0df9ae63ed5f8" Jan 23 18:14:51 crc kubenswrapper[4606]: E0123 18:14:51.834888 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcd3c405a2a8070cb49c770e8ca03209c0e54440104c64e664d0df9ae63ed5f8\": container with ID starting with bcd3c405a2a8070cb49c770e8ca03209c0e54440104c64e664d0df9ae63ed5f8 not found: ID does not exist" containerID="bcd3c405a2a8070cb49c770e8ca03209c0e54440104c64e664d0df9ae63ed5f8" Jan 23 18:14:51 crc kubenswrapper[4606]: I0123 18:14:51.834938 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcd3c405a2a8070cb49c770e8ca03209c0e54440104c64e664d0df9ae63ed5f8"} err="failed to get container status \"bcd3c405a2a8070cb49c770e8ca03209c0e54440104c64e664d0df9ae63ed5f8\": rpc error: code = NotFound desc = could not find container \"bcd3c405a2a8070cb49c770e8ca03209c0e54440104c64e664d0df9ae63ed5f8\": container with ID starting with bcd3c405a2a8070cb49c770e8ca03209c0e54440104c64e664d0df9ae63ed5f8 not found: ID does not exist" Jan 23 18:14:51 crc kubenswrapper[4606]: I0123 18:14:51.834960 4606 scope.go:117] "RemoveContainer" containerID="4fa3580364fbdbf031809694a523e2593ff364c0d8a9a0089f3f6efa3a0de135" Jan 23 18:14:51 crc kubenswrapper[4606]: E0123 18:14:51.835745 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fa3580364fbdbf031809694a523e2593ff364c0d8a9a0089f3f6efa3a0de135\": container with ID starting with 4fa3580364fbdbf031809694a523e2593ff364c0d8a9a0089f3f6efa3a0de135 not found: ID does not exist" containerID="4fa3580364fbdbf031809694a523e2593ff364c0d8a9a0089f3f6efa3a0de135" Jan 23 18:14:51 crc kubenswrapper[4606]: I0123 18:14:51.835790 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fa3580364fbdbf031809694a523e2593ff364c0d8a9a0089f3f6efa3a0de135"} err="failed to get container status \"4fa3580364fbdbf031809694a523e2593ff364c0d8a9a0089f3f6efa3a0de135\": rpc error: code = NotFound desc = could not find container \"4fa3580364fbdbf031809694a523e2593ff364c0d8a9a0089f3f6efa3a0de135\": container with ID starting with 4fa3580364fbdbf031809694a523e2593ff364c0d8a9a0089f3f6efa3a0de135 not found: ID does not exist" Jan 23 18:14:51 crc kubenswrapper[4606]: I0123 18:14:51.835817 4606 scope.go:117] "RemoveContainer" containerID="9a059efad60f1efab48f20365c085fe6349f987cb8da4c25cabc4147cad51e58" Jan 23 18:14:51 crc kubenswrapper[4606]: E0123 18:14:51.839812 4606 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"9a059efad60f1efab48f20365c085fe6349f987cb8da4c25cabc4147cad51e58\": container with ID starting with 9a059efad60f1efab48f20365c085fe6349f987cb8da4c25cabc4147cad51e58 not found: ID does not exist" containerID="9a059efad60f1efab48f20365c085fe6349f987cb8da4c25cabc4147cad51e58" Jan 23 18:14:51 crc kubenswrapper[4606]: I0123 18:14:51.839855 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a059efad60f1efab48f20365c085fe6349f987cb8da4c25cabc4147cad51e58"} err="failed to get container status \"9a059efad60f1efab48f20365c085fe6349f987cb8da4c25cabc4147cad51e58\": rpc error: code = NotFound desc = could not find container \"9a059efad60f1efab48f20365c085fe6349f987cb8da4c25cabc4147cad51e58\": container with ID starting with 9a059efad60f1efab48f20365c085fe6349f987cb8da4c25cabc4147cad51e58 not found: ID does not exist" Jan 23 18:14:52 crc kubenswrapper[4606]: I0123 18:14:52.377563 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9faa3d60-4d31-4990-822b-0cb1b4b28f71" path="/var/lib/kubelet/pods/9faa3d60-4d31-4990-822b-0cb1b4b28f71/volumes" Jan 23 18:15:00 crc kubenswrapper[4606]: I0123 18:15:00.150430 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486535-ck6f7"] Jan 23 18:15:00 crc kubenswrapper[4606]: E0123 18:15:00.151421 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9faa3d60-4d31-4990-822b-0cb1b4b28f71" containerName="extract-utilities" Jan 23 18:15:00 crc kubenswrapper[4606]: I0123 18:15:00.151435 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="9faa3d60-4d31-4990-822b-0cb1b4b28f71" containerName="extract-utilities" Jan 23 18:15:00 crc kubenswrapper[4606]: E0123 18:15:00.151454 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9faa3d60-4d31-4990-822b-0cb1b4b28f71" containerName="extract-content" Jan 23 18:15:00 crc kubenswrapper[4606]: I0123 18:15:00.151461 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="9faa3d60-4d31-4990-822b-0cb1b4b28f71" containerName="extract-content" Jan 23 18:15:00 crc kubenswrapper[4606]: E0123 18:15:00.151468 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9faa3d60-4d31-4990-822b-0cb1b4b28f71" containerName="registry-server" Jan 23 18:15:00 crc kubenswrapper[4606]: I0123 18:15:00.151475 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="9faa3d60-4d31-4990-822b-0cb1b4b28f71" containerName="registry-server" Jan 23 18:15:00 crc kubenswrapper[4606]: I0123 18:15:00.151676 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="9faa3d60-4d31-4990-822b-0cb1b4b28f71" containerName="registry-server" Jan 23 18:15:00 crc kubenswrapper[4606]: I0123 18:15:00.152310 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-ck6f7" Jan 23 18:15:00 crc kubenswrapper[4606]: I0123 18:15:00.154663 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 18:15:00 crc kubenswrapper[4606]: I0123 18:15:00.154829 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 18:15:00 crc kubenswrapper[4606]: I0123 18:15:00.170175 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486535-ck6f7"] Jan 23 18:15:00 crc kubenswrapper[4606]: I0123 18:15:00.235519 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06ab61c5-a8cb-4289-880e-6435637e22e3-config-volume\") pod \"collect-profiles-29486535-ck6f7\" (UID: \"06ab61c5-a8cb-4289-880e-6435637e22e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-ck6f7" Jan 23 18:15:00 crc kubenswrapper[4606]: I0123 18:15:00.235681 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/06ab61c5-a8cb-4289-880e-6435637e22e3-secret-volume\") pod \"collect-profiles-29486535-ck6f7\" (UID: \"06ab61c5-a8cb-4289-880e-6435637e22e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-ck6f7" Jan 23 18:15:00 crc kubenswrapper[4606]: I0123 18:15:00.235763 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx4lj\" (UniqueName: \"kubernetes.io/projected/06ab61c5-a8cb-4289-880e-6435637e22e3-kube-api-access-mx4lj\") pod \"collect-profiles-29486535-ck6f7\" (UID: \"06ab61c5-a8cb-4289-880e-6435637e22e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-ck6f7" Jan 23 18:15:00 crc kubenswrapper[4606]: I0123 18:15:00.337688 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06ab61c5-a8cb-4289-880e-6435637e22e3-config-volume\") pod \"collect-profiles-29486535-ck6f7\" (UID: \"06ab61c5-a8cb-4289-880e-6435637e22e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-ck6f7" Jan 23 18:15:00 crc kubenswrapper[4606]: I0123 18:15:00.337858 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/06ab61c5-a8cb-4289-880e-6435637e22e3-secret-volume\") pod \"collect-profiles-29486535-ck6f7\" (UID: \"06ab61c5-a8cb-4289-880e-6435637e22e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-ck6f7" Jan 23 18:15:00 crc kubenswrapper[4606]: I0123 18:15:00.337946 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx4lj\" (UniqueName: \"kubernetes.io/projected/06ab61c5-a8cb-4289-880e-6435637e22e3-kube-api-access-mx4lj\") pod \"collect-profiles-29486535-ck6f7\" (UID: \"06ab61c5-a8cb-4289-880e-6435637e22e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-ck6f7" Jan 23 18:15:00 crc kubenswrapper[4606]: I0123 18:15:00.339573 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06ab61c5-a8cb-4289-880e-6435637e22e3-config-volume\") pod 
\"collect-profiles-29486535-ck6f7\" (UID: \"06ab61c5-a8cb-4289-880e-6435637e22e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-ck6f7" Jan 23 18:15:00 crc kubenswrapper[4606]: I0123 18:15:00.350687 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/06ab61c5-a8cb-4289-880e-6435637e22e3-secret-volume\") pod \"collect-profiles-29486535-ck6f7\" (UID: \"06ab61c5-a8cb-4289-880e-6435637e22e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-ck6f7" Jan 23 18:15:00 crc kubenswrapper[4606]: I0123 18:15:00.358960 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx4lj\" (UniqueName: \"kubernetes.io/projected/06ab61c5-a8cb-4289-880e-6435637e22e3-kube-api-access-mx4lj\") pod \"collect-profiles-29486535-ck6f7\" (UID: \"06ab61c5-a8cb-4289-880e-6435637e22e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-ck6f7" Jan 23 18:15:00 crc kubenswrapper[4606]: I0123 18:15:00.472570 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-ck6f7" Jan 23 18:15:00 crc kubenswrapper[4606]: I0123 18:15:00.923809 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486535-ck6f7"] Jan 23 18:15:01 crc kubenswrapper[4606]: I0123 18:15:01.815812 4606 generic.go:334] "Generic (PLEG): container finished" podID="06ab61c5-a8cb-4289-880e-6435637e22e3" containerID="50fdd1dbd619b30dbf804999fb0cc558f5c397946b0aeddb81e5f0bcd8a13733" exitCode=0 Jan 23 18:15:01 crc kubenswrapper[4606]: I0123 18:15:01.815881 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-ck6f7" event={"ID":"06ab61c5-a8cb-4289-880e-6435637e22e3","Type":"ContainerDied","Data":"50fdd1dbd619b30dbf804999fb0cc558f5c397946b0aeddb81e5f0bcd8a13733"} Jan 23 18:15:01 crc kubenswrapper[4606]: I0123 18:15:01.816194 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-ck6f7" event={"ID":"06ab61c5-a8cb-4289-880e-6435637e22e3","Type":"ContainerStarted","Data":"2f106fa08f43daf4d8b9e302a3fa27d16c08c5bacffd72d1cc7934347cb60684"} Jan 23 18:15:03 crc kubenswrapper[4606]: I0123 18:15:03.247278 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-ck6f7" Jan 23 18:15:03 crc kubenswrapper[4606]: I0123 18:15:03.302183 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/06ab61c5-a8cb-4289-880e-6435637e22e3-secret-volume\") pod \"06ab61c5-a8cb-4289-880e-6435637e22e3\" (UID: \"06ab61c5-a8cb-4289-880e-6435637e22e3\") " Jan 23 18:15:03 crc kubenswrapper[4606]: I0123 18:15:03.302719 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mx4lj\" (UniqueName: \"kubernetes.io/projected/06ab61c5-a8cb-4289-880e-6435637e22e3-kube-api-access-mx4lj\") pod \"06ab61c5-a8cb-4289-880e-6435637e22e3\" (UID: \"06ab61c5-a8cb-4289-880e-6435637e22e3\") " Jan 23 18:15:03 crc kubenswrapper[4606]: I0123 18:15:03.302921 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06ab61c5-a8cb-4289-880e-6435637e22e3-config-volume\") pod \"06ab61c5-a8cb-4289-880e-6435637e22e3\" (UID: \"06ab61c5-a8cb-4289-880e-6435637e22e3\") " Jan 23 18:15:03 crc kubenswrapper[4606]: I0123 18:15:03.303559 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06ab61c5-a8cb-4289-880e-6435637e22e3-config-volume" (OuterVolumeSpecName: "config-volume") pod "06ab61c5-a8cb-4289-880e-6435637e22e3" (UID: "06ab61c5-a8cb-4289-880e-6435637e22e3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:15:03 crc kubenswrapper[4606]: I0123 18:15:03.308060 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06ab61c5-a8cb-4289-880e-6435637e22e3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "06ab61c5-a8cb-4289-880e-6435637e22e3" (UID: "06ab61c5-a8cb-4289-880e-6435637e22e3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:15:03 crc kubenswrapper[4606]: I0123 18:15:03.308426 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06ab61c5-a8cb-4289-880e-6435637e22e3-kube-api-access-mx4lj" (OuterVolumeSpecName: "kube-api-access-mx4lj") pod "06ab61c5-a8cb-4289-880e-6435637e22e3" (UID: "06ab61c5-a8cb-4289-880e-6435637e22e3"). InnerVolumeSpecName "kube-api-access-mx4lj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:15:03 crc kubenswrapper[4606]: I0123 18:15:03.405992 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mx4lj\" (UniqueName: \"kubernetes.io/projected/06ab61c5-a8cb-4289-880e-6435637e22e3-kube-api-access-mx4lj\") on node \"crc\" DevicePath \"\"" Jan 23 18:15:03 crc kubenswrapper[4606]: I0123 18:15:03.406260 4606 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06ab61c5-a8cb-4289-880e-6435637e22e3-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 18:15:03 crc kubenswrapper[4606]: I0123 18:15:03.406437 4606 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/06ab61c5-a8cb-4289-880e-6435637e22e3-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 18:15:03 crc kubenswrapper[4606]: I0123 18:15:03.835501 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-ck6f7" event={"ID":"06ab61c5-a8cb-4289-880e-6435637e22e3","Type":"ContainerDied","Data":"2f106fa08f43daf4d8b9e302a3fa27d16c08c5bacffd72d1cc7934347cb60684"} Jan 23 18:15:03 crc kubenswrapper[4606]: I0123 18:15:03.835538 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f106fa08f43daf4d8b9e302a3fa27d16c08c5bacffd72d1cc7934347cb60684" Jan 23 18:15:03 crc kubenswrapper[4606]: I0123 18:15:03.835576 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-ck6f7" Jan 23 18:15:04 crc kubenswrapper[4606]: I0123 18:15:04.328957 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486490-xmm4j"] Jan 23 18:15:04 crc kubenswrapper[4606]: I0123 18:15:04.338213 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486490-xmm4j"] Jan 23 18:15:04 crc kubenswrapper[4606]: I0123 18:15:04.377027 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16c44897-72c2-4d28-929c-1505fe503c88" path="/var/lib/kubelet/pods/16c44897-72c2-4d28-929c-1505fe503c88/volumes" Jan 23 18:15:05 crc kubenswrapper[4606]: I0123 18:15:05.857006 4606 generic.go:334] "Generic (PLEG): container finished" podID="9c7b5ed0-95db-4e9a-b824-a6dc9596dd71" containerID="2372aca5dabf9579ef99020b8497e3df6495f569c064ea17a893100abbb846e0" exitCode=1 Jan 23 18:15:05 crc kubenswrapper[4606]: I0123 18:15:05.857079 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71","Type":"ContainerDied","Data":"2372aca5dabf9579ef99020b8497e3df6495f569c064ea17a893100abbb846e0"} Jan 23 18:15:07 crc kubenswrapper[4606]: I0123 18:15:07.271463 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 23 18:15:07 crc kubenswrapper[4606]: I0123 18:15:07.282281 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-ssh-key\") pod \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\" (UID: \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\") " Jan 23 18:15:07 crc kubenswrapper[4606]: I0123 18:15:07.282372 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztc5d\" (UniqueName: \"kubernetes.io/projected/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-kube-api-access-ztc5d\") pod \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\" (UID: \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\") " Jan 23 18:15:07 crc kubenswrapper[4606]: I0123 18:15:07.282460 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-test-operator-ephemeral-workdir\") pod \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\" (UID: \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\") " Jan 23 18:15:07 crc kubenswrapper[4606]: I0123 18:15:07.282498 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-test-operator-ephemeral-temporary\") pod \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\" (UID: \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\") " Jan 23 18:15:07 crc kubenswrapper[4606]: I0123 18:15:07.282547 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\" (UID: \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\") " Jan 23 18:15:07 crc kubenswrapper[4606]: I0123 18:15:07.283109 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-config-data\") pod \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\" (UID: \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\") " Jan 23 18:15:07 crc kubenswrapper[4606]: I0123 18:15:07.283139 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-openstack-config\") pod \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\" (UID: \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\") " Jan 23 18:15:07 crc kubenswrapper[4606]: I0123 18:15:07.283161 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-openstack-config-secret\") pod \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\" (UID: \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\") " Jan 23 18:15:07 crc kubenswrapper[4606]: I0123 18:15:07.283180 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-ca-certs\") pod \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\" (UID: \"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71\") " Jan 23 18:15:07 crc kubenswrapper[4606]: I0123 18:15:07.284817 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-test-operator-ephemeral-temporary" (OuterVolumeSpecName: 
"test-operator-ephemeral-temporary") pod "9c7b5ed0-95db-4e9a-b824-a6dc9596dd71" (UID: "9c7b5ed0-95db-4e9a-b824-a6dc9596dd71"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:15:07 crc kubenswrapper[4606]: I0123 18:15:07.294355 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-config-data" (OuterVolumeSpecName: "config-data") pod "9c7b5ed0-95db-4e9a-b824-a6dc9596dd71" (UID: "9c7b5ed0-95db-4e9a-b824-a6dc9596dd71"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:15:07 crc kubenswrapper[4606]: I0123 18:15:07.300361 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-kube-api-access-ztc5d" (OuterVolumeSpecName: "kube-api-access-ztc5d") pod "9c7b5ed0-95db-4e9a-b824-a6dc9596dd71" (UID: "9c7b5ed0-95db-4e9a-b824-a6dc9596dd71"). InnerVolumeSpecName "kube-api-access-ztc5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:15:07 crc kubenswrapper[4606]: I0123 18:15:07.302919 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "test-operator-logs") pod "9c7b5ed0-95db-4e9a-b824-a6dc9596dd71" (UID: "9c7b5ed0-95db-4e9a-b824-a6dc9596dd71"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 18:15:07 crc kubenswrapper[4606]: I0123 18:15:07.305383 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "9c7b5ed0-95db-4e9a-b824-a6dc9596dd71" (UID: "9c7b5ed0-95db-4e9a-b824-a6dc9596dd71"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:15:07 crc kubenswrapper[4606]: I0123 18:15:07.336713 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "9c7b5ed0-95db-4e9a-b824-a6dc9596dd71" (UID: "9c7b5ed0-95db-4e9a-b824-a6dc9596dd71"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:15:07 crc kubenswrapper[4606]: I0123 18:15:07.337369 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "9c7b5ed0-95db-4e9a-b824-a6dc9596dd71" (UID: "9c7b5ed0-95db-4e9a-b824-a6dc9596dd71"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:15:07 crc kubenswrapper[4606]: I0123 18:15:07.345903 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9c7b5ed0-95db-4e9a-b824-a6dc9596dd71" (UID: "9c7b5ed0-95db-4e9a-b824-a6dc9596dd71"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:15:07 crc kubenswrapper[4606]: I0123 18:15:07.373737 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "9c7b5ed0-95db-4e9a-b824-a6dc9596dd71" (UID: "9c7b5ed0-95db-4e9a-b824-a6dc9596dd71"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:15:07 crc kubenswrapper[4606]: I0123 18:15:07.386312 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztc5d\" (UniqueName: \"kubernetes.io/projected/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-kube-api-access-ztc5d\") on node \"crc\" DevicePath \"\"" Jan 23 18:15:07 crc kubenswrapper[4606]: I0123 18:15:07.386360 4606 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 23 18:15:07 crc kubenswrapper[4606]: I0123 18:15:07.386371 4606 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 23 18:15:07 crc kubenswrapper[4606]: I0123 18:15:07.387106 4606 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 23 18:15:07 crc kubenswrapper[4606]: I0123 18:15:07.387140 4606 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:15:07 crc kubenswrapper[4606]: I0123 18:15:07.387153 4606 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:15:07 crc kubenswrapper[4606]: I0123 18:15:07.387220 4606 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 23 18:15:07 crc kubenswrapper[4606]: I0123 18:15:07.387235 4606 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 23 18:15:07 crc kubenswrapper[4606]: I0123 18:15:07.387246 4606 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9c7b5ed0-95db-4e9a-b824-a6dc9596dd71-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 23 18:15:07 crc kubenswrapper[4606]: I0123 18:15:07.422541 4606 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 23 18:15:07 crc kubenswrapper[4606]: I0123 18:15:07.488356 4606 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 23 18:15:07 crc kubenswrapper[4606]: I0123 18:15:07.880461 4606 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/tempest-tests-tempest" event={"ID":"9c7b5ed0-95db-4e9a-b824-a6dc9596dd71","Type":"ContainerDied","Data":"27f1e60ee3e1e290f7c82633527efdf508200795f22af5a97fd8df7246c0086f"} Jan 23 18:15:07 crc kubenswrapper[4606]: I0123 18:15:07.880502 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27f1e60ee3e1e290f7c82633527efdf508200795f22af5a97fd8df7246c0086f" Jan 23 18:15:07 crc kubenswrapper[4606]: I0123 18:15:07.880548 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 23 18:15:12 crc kubenswrapper[4606]: I0123 18:15:12.443914 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 23 18:15:12 crc kubenswrapper[4606]: E0123 18:15:12.444944 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c7b5ed0-95db-4e9a-b824-a6dc9596dd71" containerName="tempest-tests-tempest-tests-runner" Jan 23 18:15:12 crc kubenswrapper[4606]: I0123 18:15:12.444960 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c7b5ed0-95db-4e9a-b824-a6dc9596dd71" containerName="tempest-tests-tempest-tests-runner" Jan 23 18:15:12 crc kubenswrapper[4606]: E0123 18:15:12.444988 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06ab61c5-a8cb-4289-880e-6435637e22e3" containerName="collect-profiles" Jan 23 18:15:12 crc kubenswrapper[4606]: I0123 18:15:12.444995 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="06ab61c5-a8cb-4289-880e-6435637e22e3" containerName="collect-profiles" Jan 23 18:15:12 crc kubenswrapper[4606]: I0123 18:15:12.445278 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c7b5ed0-95db-4e9a-b824-a6dc9596dd71" containerName="tempest-tests-tempest-tests-runner" Jan 23 18:15:12 crc kubenswrapper[4606]: I0123 18:15:12.445296 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="06ab61c5-a8cb-4289-880e-6435637e22e3" containerName="collect-profiles" Jan 23 18:15:12 crc kubenswrapper[4606]: I0123 18:15:12.446149 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 18:15:12 crc kubenswrapper[4606]: I0123 18:15:12.448555 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-k4hqv" Jan 23 18:15:12 crc kubenswrapper[4606]: I0123 18:15:12.453273 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 23 18:15:12 crc kubenswrapper[4606]: I0123 18:15:12.580882 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d9d65d2d-d203-478c-b791-e2dd5397d0c4\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 18:15:12 crc kubenswrapper[4606]: I0123 18:15:12.580978 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb24s\" (UniqueName: \"kubernetes.io/projected/d9d65d2d-d203-478c-b791-e2dd5397d0c4-kube-api-access-xb24s\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d9d65d2d-d203-478c-b791-e2dd5397d0c4\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 18:15:12 crc kubenswrapper[4606]: I0123 18:15:12.682480 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d9d65d2d-d203-478c-b791-e2dd5397d0c4\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 18:15:12 crc kubenswrapper[4606]: I0123 18:15:12.682609 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb24s\" (UniqueName: \"kubernetes.io/projected/d9d65d2d-d203-478c-b791-e2dd5397d0c4-kube-api-access-xb24s\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d9d65d2d-d203-478c-b791-e2dd5397d0c4\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 18:15:12 crc kubenswrapper[4606]: I0123 18:15:12.683559 4606 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d9d65d2d-d203-478c-b791-e2dd5397d0c4\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 18:15:12 crc kubenswrapper[4606]: I0123 18:15:12.702230 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb24s\" (UniqueName: \"kubernetes.io/projected/d9d65d2d-d203-478c-b791-e2dd5397d0c4-kube-api-access-xb24s\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d9d65d2d-d203-478c-b791-e2dd5397d0c4\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 18:15:12 crc kubenswrapper[4606]: I0123 18:15:12.710414 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d9d65d2d-d203-478c-b791-e2dd5397d0c4\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 18:15:12 crc 
kubenswrapper[4606]: I0123 18:15:12.766245 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 18:15:13 crc kubenswrapper[4606]: I0123 18:15:13.288265 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 23 18:15:13 crc kubenswrapper[4606]: I0123 18:15:13.946393 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"d9d65d2d-d203-478c-b791-e2dd5397d0c4","Type":"ContainerStarted","Data":"d6316c4b917870c32ae61bf92720d3a9a4d1f9158f56bec317ef5a2ad337095d"} Jan 23 18:15:14 crc kubenswrapper[4606]: I0123 18:15:14.958143 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"d9d65d2d-d203-478c-b791-e2dd5397d0c4","Type":"ContainerStarted","Data":"a4433acdd3ec97578a7049ffce95d15ab8afa290b238a351c2472529b2fd245e"} Jan 23 18:15:14 crc kubenswrapper[4606]: I0123 18:15:14.977308 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.61909806 podStartE2EDuration="2.97728426s" podCreationTimestamp="2026-01-23 18:15:12 +0000 UTC" firstStartedPulling="2026-01-23 18:15:13.301050037 +0000 UTC m=+4947.301242443" lastFinishedPulling="2026-01-23 18:15:14.659236237 +0000 UTC m=+4948.659428643" observedRunningTime="2026-01-23 18:15:14.969560951 +0000 UTC m=+4948.969753347" watchObservedRunningTime="2026-01-23 18:15:14.97728426 +0000 UTC m=+4948.977476666" Jan 23 18:15:49 crc kubenswrapper[4606]: I0123 18:15:49.344528 4606 scope.go:117] "RemoveContainer" containerID="92bb27ba4501158f9e1576e1259319a843cee1252f32a1814c18f942b05140ba" Jan 23 18:15:55 crc kubenswrapper[4606]: I0123 18:15:55.385053 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-svxgb/must-gather-7tl24"] Jan 23 18:15:55 crc kubenswrapper[4606]: I0123 18:15:55.387970 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-svxgb/must-gather-7tl24" Jan 23 18:15:55 crc kubenswrapper[4606]: I0123 18:15:55.392160 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-svxgb"/"openshift-service-ca.crt" Jan 23 18:15:55 crc kubenswrapper[4606]: I0123 18:15:55.392779 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-svxgb"/"kube-root-ca.crt" Jan 23 18:15:55 crc kubenswrapper[4606]: I0123 18:15:55.392998 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-svxgb"/"default-dockercfg-9rlsg" Jan 23 18:15:55 crc kubenswrapper[4606]: I0123 18:15:55.393336 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-svxgb/must-gather-7tl24"] Jan 23 18:15:55 crc kubenswrapper[4606]: I0123 18:15:55.542308 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dngtx\" (UniqueName: \"kubernetes.io/projected/65f3a5e4-2fdf-47c8-ac22-08dcf1141171-kube-api-access-dngtx\") pod \"must-gather-7tl24\" (UID: \"65f3a5e4-2fdf-47c8-ac22-08dcf1141171\") " pod="openshift-must-gather-svxgb/must-gather-7tl24" Jan 23 18:15:55 crc kubenswrapper[4606]: I0123 18:15:55.542371 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/65f3a5e4-2fdf-47c8-ac22-08dcf1141171-must-gather-output\") pod \"must-gather-7tl24\" (UID: \"65f3a5e4-2fdf-47c8-ac22-08dcf1141171\") " pod="openshift-must-gather-svxgb/must-gather-7tl24" Jan 23 18:15:55 crc kubenswrapper[4606]: I0123 18:15:55.644232 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dngtx\" (UniqueName: \"kubernetes.io/projected/65f3a5e4-2fdf-47c8-ac22-08dcf1141171-kube-api-access-dngtx\") pod \"must-gather-7tl24\" (UID: \"65f3a5e4-2fdf-47c8-ac22-08dcf1141171\") " pod="openshift-must-gather-svxgb/must-gather-7tl24" Jan 23 18:15:55 crc kubenswrapper[4606]: I0123 18:15:55.644289 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/65f3a5e4-2fdf-47c8-ac22-08dcf1141171-must-gather-output\") pod \"must-gather-7tl24\" (UID: \"65f3a5e4-2fdf-47c8-ac22-08dcf1141171\") " pod="openshift-must-gather-svxgb/must-gather-7tl24" Jan 23 18:15:55 crc kubenswrapper[4606]: I0123 18:15:55.644816 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/65f3a5e4-2fdf-47c8-ac22-08dcf1141171-must-gather-output\") pod \"must-gather-7tl24\" (UID: \"65f3a5e4-2fdf-47c8-ac22-08dcf1141171\") " pod="openshift-must-gather-svxgb/must-gather-7tl24" Jan 23 18:15:55 crc kubenswrapper[4606]: I0123 18:15:55.665658 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dngtx\" (UniqueName: \"kubernetes.io/projected/65f3a5e4-2fdf-47c8-ac22-08dcf1141171-kube-api-access-dngtx\") pod \"must-gather-7tl24\" (UID: \"65f3a5e4-2fdf-47c8-ac22-08dcf1141171\") " pod="openshift-must-gather-svxgb/must-gather-7tl24" Jan 23 18:15:55 crc kubenswrapper[4606]: I0123 18:15:55.712101 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-svxgb/must-gather-7tl24" Jan 23 18:15:56 crc kubenswrapper[4606]: I0123 18:15:56.202122 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-svxgb/must-gather-7tl24"] Jan 23 18:15:56 crc kubenswrapper[4606]: W0123 18:15:56.207867 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65f3a5e4_2fdf_47c8_ac22_08dcf1141171.slice/crio-d2d4a31b7be1bda326ee04be026819f28dbf67c4a3625a34db30f9922cf97738 WatchSource:0}: Error finding container d2d4a31b7be1bda326ee04be026819f28dbf67c4a3625a34db30f9922cf97738: Status 404 returned error can't find the container with id d2d4a31b7be1bda326ee04be026819f28dbf67c4a3625a34db30f9922cf97738 Jan 23 18:15:56 crc kubenswrapper[4606]: I0123 18:15:56.357013 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-svxgb/must-gather-7tl24" event={"ID":"65f3a5e4-2fdf-47c8-ac22-08dcf1141171","Type":"ContainerStarted","Data":"d2d4a31b7be1bda326ee04be026819f28dbf67c4a3625a34db30f9922cf97738"} Jan 23 18:16:05 crc kubenswrapper[4606]: I0123 18:16:05.455659 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-svxgb/must-gather-7tl24" event={"ID":"65f3a5e4-2fdf-47c8-ac22-08dcf1141171","Type":"ContainerStarted","Data":"579e4fe566a2c65cfe1687017d12ac14bc35842c8774df7be6a8a7e26c86994f"} Jan 23 18:16:06 crc kubenswrapper[4606]: I0123 18:16:06.471075 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-svxgb/must-gather-7tl24" event={"ID":"65f3a5e4-2fdf-47c8-ac22-08dcf1141171","Type":"ContainerStarted","Data":"f831fead9a5dadc853a2db7d80ecbe2a74ba9b8855e5768ef8130b50c2082597"} Jan 23 18:16:06 crc kubenswrapper[4606]: I0123 18:16:06.505317 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-svxgb/must-gather-7tl24" podStartSLOduration=2.891812122 podStartE2EDuration="11.505292712s" podCreationTimestamp="2026-01-23 18:15:55 +0000 UTC" firstStartedPulling="2026-01-23 18:15:56.210311819 +0000 UTC m=+4990.210504225" lastFinishedPulling="2026-01-23 18:16:04.823792419 +0000 UTC m=+4998.823984815" observedRunningTime="2026-01-23 18:16:06.485029746 +0000 UTC m=+5000.485222162" watchObservedRunningTime="2026-01-23 18:16:06.505292712 +0000 UTC m=+5000.505485118" Jan 23 18:16:12 crc kubenswrapper[4606]: I0123 18:16:12.226396 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-svxgb/crc-debug-dp5tn"] Jan 23 18:16:12 crc kubenswrapper[4606]: I0123 18:16:12.228145 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-svxgb/crc-debug-dp5tn" Jan 23 18:16:12 crc kubenswrapper[4606]: I0123 18:16:12.389773 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nhjj\" (UniqueName: \"kubernetes.io/projected/e1b3d793-46a5-4ea8-997f-6dbcd2a1c703-kube-api-access-2nhjj\") pod \"crc-debug-dp5tn\" (UID: \"e1b3d793-46a5-4ea8-997f-6dbcd2a1c703\") " pod="openshift-must-gather-svxgb/crc-debug-dp5tn" Jan 23 18:16:12 crc kubenswrapper[4606]: I0123 18:16:12.389872 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e1b3d793-46a5-4ea8-997f-6dbcd2a1c703-host\") pod \"crc-debug-dp5tn\" (UID: \"e1b3d793-46a5-4ea8-997f-6dbcd2a1c703\") " pod="openshift-must-gather-svxgb/crc-debug-dp5tn" Jan 23 18:16:12 crc kubenswrapper[4606]: I0123 18:16:12.492179 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nhjj\" (UniqueName: \"kubernetes.io/projected/e1b3d793-46a5-4ea8-997f-6dbcd2a1c703-kube-api-access-2nhjj\") pod \"crc-debug-dp5tn\" (UID: \"e1b3d793-46a5-4ea8-997f-6dbcd2a1c703\") " pod="openshift-must-gather-svxgb/crc-debug-dp5tn" Jan 23 18:16:12 crc kubenswrapper[4606]: I0123 18:16:12.492242 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e1b3d793-46a5-4ea8-997f-6dbcd2a1c703-host\") pod \"crc-debug-dp5tn\" (UID: \"e1b3d793-46a5-4ea8-997f-6dbcd2a1c703\") " pod="openshift-must-gather-svxgb/crc-debug-dp5tn" Jan 23 18:16:12 crc kubenswrapper[4606]: I0123 18:16:12.492745 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e1b3d793-46a5-4ea8-997f-6dbcd2a1c703-host\") pod \"crc-debug-dp5tn\" (UID: \"e1b3d793-46a5-4ea8-997f-6dbcd2a1c703\") " pod="openshift-must-gather-svxgb/crc-debug-dp5tn" Jan 23 18:16:12 crc kubenswrapper[4606]: I0123 18:16:12.527810 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nhjj\" (UniqueName: \"kubernetes.io/projected/e1b3d793-46a5-4ea8-997f-6dbcd2a1c703-kube-api-access-2nhjj\") pod \"crc-debug-dp5tn\" (UID: \"e1b3d793-46a5-4ea8-997f-6dbcd2a1c703\") " pod="openshift-must-gather-svxgb/crc-debug-dp5tn" Jan 23 18:16:12 crc kubenswrapper[4606]: I0123 18:16:12.545399 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-svxgb/crc-debug-dp5tn" Jan 23 18:16:13 crc kubenswrapper[4606]: I0123 18:16:13.542803 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-svxgb/crc-debug-dp5tn" event={"ID":"e1b3d793-46a5-4ea8-997f-6dbcd2a1c703","Type":"ContainerStarted","Data":"8fdea66dc0f7b01e30a44771c1aa7df385f9514ae98dd934fbae51b6e076d8af"} Jan 23 18:16:24 crc kubenswrapper[4606]: I0123 18:16:24.657430 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-svxgb/crc-debug-dp5tn" event={"ID":"e1b3d793-46a5-4ea8-997f-6dbcd2a1c703","Type":"ContainerStarted","Data":"073fd2397714e1cfd0a896b60d82cc02c7e50761a3d756630b2aff2472dfbef6"} Jan 23 18:16:24 crc kubenswrapper[4606]: I0123 18:16:24.674802 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-svxgb/crc-debug-dp5tn" podStartSLOduration=1.1158611330000001 podStartE2EDuration="12.674785226s" podCreationTimestamp="2026-01-23 18:16:12 +0000 UTC" firstStartedPulling="2026-01-23 18:16:12.585212901 +0000 UTC m=+5006.585405307" lastFinishedPulling="2026-01-23 18:16:24.144136994 +0000 UTC m=+5018.144329400" observedRunningTime="2026-01-23 18:16:24.671265411 +0000 UTC m=+5018.671457817" watchObservedRunningTime="2026-01-23 18:16:24.674785226 +0000 UTC m=+5018.674977632" Jan 23 18:16:29 crc kubenswrapper[4606]: I0123 18:16:29.934041 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:16:29 crc kubenswrapper[4606]: I0123 18:16:29.935904 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:16:59 crc kubenswrapper[4606]: I0123 18:16:59.933948 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:16:59 crc kubenswrapper[4606]: I0123 18:16:59.934455 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:17:12 crc kubenswrapper[4606]: I0123 18:17:12.092061 4606 generic.go:334] "Generic (PLEG): container finished" podID="e1b3d793-46a5-4ea8-997f-6dbcd2a1c703" containerID="073fd2397714e1cfd0a896b60d82cc02c7e50761a3d756630b2aff2472dfbef6" exitCode=0 Jan 23 18:17:12 crc kubenswrapper[4606]: I0123 18:17:12.092118 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-svxgb/crc-debug-dp5tn" event={"ID":"e1b3d793-46a5-4ea8-997f-6dbcd2a1c703","Type":"ContainerDied","Data":"073fd2397714e1cfd0a896b60d82cc02c7e50761a3d756630b2aff2472dfbef6"} Jan 23 18:17:13 crc kubenswrapper[4606]: I0123 18:17:13.227866 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-svxgb/crc-debug-dp5tn" Jan 23 18:17:13 crc kubenswrapper[4606]: I0123 18:17:13.264982 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-svxgb/crc-debug-dp5tn"] Jan 23 18:17:13 crc kubenswrapper[4606]: I0123 18:17:13.278947 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-svxgb/crc-debug-dp5tn"] Jan 23 18:17:13 crc kubenswrapper[4606]: I0123 18:17:13.346768 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e1b3d793-46a5-4ea8-997f-6dbcd2a1c703-host\") pod \"e1b3d793-46a5-4ea8-997f-6dbcd2a1c703\" (UID: \"e1b3d793-46a5-4ea8-997f-6dbcd2a1c703\") " Jan 23 18:17:13 crc kubenswrapper[4606]: I0123 18:17:13.347066 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nhjj\" (UniqueName: \"kubernetes.io/projected/e1b3d793-46a5-4ea8-997f-6dbcd2a1c703-kube-api-access-2nhjj\") pod \"e1b3d793-46a5-4ea8-997f-6dbcd2a1c703\" (UID: \"e1b3d793-46a5-4ea8-997f-6dbcd2a1c703\") " Jan 23 18:17:13 crc kubenswrapper[4606]: I0123 18:17:13.346889 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1b3d793-46a5-4ea8-997f-6dbcd2a1c703-host" (OuterVolumeSpecName: "host") pod "e1b3d793-46a5-4ea8-997f-6dbcd2a1c703" (UID: "e1b3d793-46a5-4ea8-997f-6dbcd2a1c703"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:17:13 crc kubenswrapper[4606]: I0123 18:17:13.347917 4606 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e1b3d793-46a5-4ea8-997f-6dbcd2a1c703-host\") on node \"crc\" DevicePath \"\"" Jan 23 18:17:13 crc kubenswrapper[4606]: I0123 18:17:13.715694 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1b3d793-46a5-4ea8-997f-6dbcd2a1c703-kube-api-access-2nhjj" (OuterVolumeSpecName: "kube-api-access-2nhjj") pod "e1b3d793-46a5-4ea8-997f-6dbcd2a1c703" (UID: "e1b3d793-46a5-4ea8-997f-6dbcd2a1c703"). InnerVolumeSpecName "kube-api-access-2nhjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:17:13 crc kubenswrapper[4606]: I0123 18:17:13.756526 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nhjj\" (UniqueName: \"kubernetes.io/projected/e1b3d793-46a5-4ea8-997f-6dbcd2a1c703-kube-api-access-2nhjj\") on node \"crc\" DevicePath \"\"" Jan 23 18:17:14 crc kubenswrapper[4606]: I0123 18:17:14.110349 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fdea66dc0f7b01e30a44771c1aa7df385f9514ae98dd934fbae51b6e076d8af" Jan 23 18:17:14 crc kubenswrapper[4606]: I0123 18:17:14.110421 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-svxgb/crc-debug-dp5tn" Jan 23 18:17:14 crc kubenswrapper[4606]: I0123 18:17:14.377006 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1b3d793-46a5-4ea8-997f-6dbcd2a1c703" path="/var/lib/kubelet/pods/e1b3d793-46a5-4ea8-997f-6dbcd2a1c703/volumes" Jan 23 18:17:15 crc kubenswrapper[4606]: I0123 18:17:15.050313 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-svxgb/crc-debug-jpjzx"] Jan 23 18:17:15 crc kubenswrapper[4606]: E0123 18:17:15.051193 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1b3d793-46a5-4ea8-997f-6dbcd2a1c703" containerName="container-00" Jan 23 18:17:15 crc kubenswrapper[4606]: I0123 18:17:15.051295 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1b3d793-46a5-4ea8-997f-6dbcd2a1c703" containerName="container-00" Jan 23 18:17:15 crc kubenswrapper[4606]: I0123 18:17:15.053647 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1b3d793-46a5-4ea8-997f-6dbcd2a1c703" containerName="container-00" Jan 23 18:17:15 crc kubenswrapper[4606]: I0123 18:17:15.054575 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-svxgb/crc-debug-jpjzx" Jan 23 18:17:15 crc kubenswrapper[4606]: I0123 18:17:15.191974 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr8pc\" (UniqueName: \"kubernetes.io/projected/02b91c5b-5b9f-406e-be41-5207400d416c-kube-api-access-nr8pc\") pod \"crc-debug-jpjzx\" (UID: \"02b91c5b-5b9f-406e-be41-5207400d416c\") " pod="openshift-must-gather-svxgb/crc-debug-jpjzx" Jan 23 18:17:15 crc kubenswrapper[4606]: I0123 18:17:15.192701 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/02b91c5b-5b9f-406e-be41-5207400d416c-host\") pod \"crc-debug-jpjzx\" (UID: \"02b91c5b-5b9f-406e-be41-5207400d416c\") " pod="openshift-must-gather-svxgb/crc-debug-jpjzx" Jan 23 18:17:15 crc kubenswrapper[4606]: I0123 18:17:15.294354 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/02b91c5b-5b9f-406e-be41-5207400d416c-host\") pod \"crc-debug-jpjzx\" (UID: \"02b91c5b-5b9f-406e-be41-5207400d416c\") " pod="openshift-must-gather-svxgb/crc-debug-jpjzx" Jan 23 18:17:15 crc kubenswrapper[4606]: I0123 18:17:15.294558 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr8pc\" (UniqueName: \"kubernetes.io/projected/02b91c5b-5b9f-406e-be41-5207400d416c-kube-api-access-nr8pc\") pod \"crc-debug-jpjzx\" (UID: \"02b91c5b-5b9f-406e-be41-5207400d416c\") " pod="openshift-must-gather-svxgb/crc-debug-jpjzx" Jan 23 18:17:15 crc kubenswrapper[4606]: I0123 18:17:15.294563 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/02b91c5b-5b9f-406e-be41-5207400d416c-host\") pod \"crc-debug-jpjzx\" (UID: \"02b91c5b-5b9f-406e-be41-5207400d416c\") " pod="openshift-must-gather-svxgb/crc-debug-jpjzx" Jan 23 18:17:15 crc kubenswrapper[4606]: I0123 18:17:15.704576 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr8pc\" (UniqueName: \"kubernetes.io/projected/02b91c5b-5b9f-406e-be41-5207400d416c-kube-api-access-nr8pc\") pod \"crc-debug-jpjzx\" (UID: \"02b91c5b-5b9f-406e-be41-5207400d416c\") " 
pod="openshift-must-gather-svxgb/crc-debug-jpjzx" Jan 23 18:17:15 crc kubenswrapper[4606]: I0123 18:17:15.979367 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-svxgb/crc-debug-jpjzx" Jan 23 18:17:16 crc kubenswrapper[4606]: I0123 18:17:16.133285 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-svxgb/crc-debug-jpjzx" event={"ID":"02b91c5b-5b9f-406e-be41-5207400d416c","Type":"ContainerStarted","Data":"a46c84bfd0aff168fbdc68e6ff89aae8793f188ffdf7ce1e8137df717d7a99be"} Jan 23 18:17:17 crc kubenswrapper[4606]: I0123 18:17:17.145221 4606 generic.go:334] "Generic (PLEG): container finished" podID="02b91c5b-5b9f-406e-be41-5207400d416c" containerID="3c396706ce880b5d07ee835929ec5618f80f78eb936eabf51e8804b9d8303b7f" exitCode=0 Jan 23 18:17:17 crc kubenswrapper[4606]: I0123 18:17:17.145326 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-svxgb/crc-debug-jpjzx" event={"ID":"02b91c5b-5b9f-406e-be41-5207400d416c","Type":"ContainerDied","Data":"3c396706ce880b5d07ee835929ec5618f80f78eb936eabf51e8804b9d8303b7f"} Jan 23 18:17:18 crc kubenswrapper[4606]: I0123 18:17:18.283557 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-svxgb/crc-debug-jpjzx" Jan 23 18:17:18 crc kubenswrapper[4606]: I0123 18:17:18.373233 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/02b91c5b-5b9f-406e-be41-5207400d416c-host\") pod \"02b91c5b-5b9f-406e-be41-5207400d416c\" (UID: \"02b91c5b-5b9f-406e-be41-5207400d416c\") " Jan 23 18:17:18 crc kubenswrapper[4606]: I0123 18:17:18.373301 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nr8pc\" (UniqueName: \"kubernetes.io/projected/02b91c5b-5b9f-406e-be41-5207400d416c-kube-api-access-nr8pc\") pod \"02b91c5b-5b9f-406e-be41-5207400d416c\" (UID: \"02b91c5b-5b9f-406e-be41-5207400d416c\") " Jan 23 18:17:18 crc kubenswrapper[4606]: I0123 18:17:18.373576 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02b91c5b-5b9f-406e-be41-5207400d416c-host" (OuterVolumeSpecName: "host") pod "02b91c5b-5b9f-406e-be41-5207400d416c" (UID: "02b91c5b-5b9f-406e-be41-5207400d416c"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:17:18 crc kubenswrapper[4606]: I0123 18:17:18.374799 4606 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/02b91c5b-5b9f-406e-be41-5207400d416c-host\") on node \"crc\" DevicePath \"\"" Jan 23 18:17:18 crc kubenswrapper[4606]: I0123 18:17:18.404055 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02b91c5b-5b9f-406e-be41-5207400d416c-kube-api-access-nr8pc" (OuterVolumeSpecName: "kube-api-access-nr8pc") pod "02b91c5b-5b9f-406e-be41-5207400d416c" (UID: "02b91c5b-5b9f-406e-be41-5207400d416c"). InnerVolumeSpecName "kube-api-access-nr8pc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:17:18 crc kubenswrapper[4606]: I0123 18:17:18.476101 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nr8pc\" (UniqueName: \"kubernetes.io/projected/02b91c5b-5b9f-406e-be41-5207400d416c-kube-api-access-nr8pc\") on node \"crc\" DevicePath \"\"" Jan 23 18:17:19 crc kubenswrapper[4606]: I0123 18:17:19.164640 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-svxgb/crc-debug-jpjzx" event={"ID":"02b91c5b-5b9f-406e-be41-5207400d416c","Type":"ContainerDied","Data":"a46c84bfd0aff168fbdc68e6ff89aae8793f188ffdf7ce1e8137df717d7a99be"} Jan 23 18:17:19 crc kubenswrapper[4606]: I0123 18:17:19.164965 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a46c84bfd0aff168fbdc68e6ff89aae8793f188ffdf7ce1e8137df717d7a99be" Jan 23 18:17:19 crc kubenswrapper[4606]: I0123 18:17:19.164770 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-svxgb/crc-debug-jpjzx" Jan 23 18:17:19 crc kubenswrapper[4606]: I0123 18:17:19.411710 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-svxgb/crc-debug-jpjzx"] Jan 23 18:17:19 crc kubenswrapper[4606]: I0123 18:17:19.428244 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-svxgb/crc-debug-jpjzx"] Jan 23 18:17:20 crc kubenswrapper[4606]: I0123 18:17:20.375755 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02b91c5b-5b9f-406e-be41-5207400d416c" path="/var/lib/kubelet/pods/02b91c5b-5b9f-406e-be41-5207400d416c/volumes" Jan 23 18:17:20 crc kubenswrapper[4606]: I0123 18:17:20.656359 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-svxgb/crc-debug-lh2mf"] Jan 23 18:17:20 crc kubenswrapper[4606]: E0123 18:17:20.657668 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02b91c5b-5b9f-406e-be41-5207400d416c" containerName="container-00" Jan 23 18:17:20 crc kubenswrapper[4606]: I0123 18:17:20.657696 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="02b91c5b-5b9f-406e-be41-5207400d416c" containerName="container-00" Jan 23 18:17:20 crc kubenswrapper[4606]: I0123 18:17:20.658218 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="02b91c5b-5b9f-406e-be41-5207400d416c" containerName="container-00" Jan 23 18:17:20 crc kubenswrapper[4606]: I0123 18:17:20.659281 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-svxgb/crc-debug-lh2mf" Jan 23 18:17:20 crc kubenswrapper[4606]: I0123 18:17:20.815555 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3b6a459e-5855-454d-92bc-e8b322584446-host\") pod \"crc-debug-lh2mf\" (UID: \"3b6a459e-5855-454d-92bc-e8b322584446\") " pod="openshift-must-gather-svxgb/crc-debug-lh2mf" Jan 23 18:17:20 crc kubenswrapper[4606]: I0123 18:17:20.815649 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54r5t\" (UniqueName: \"kubernetes.io/projected/3b6a459e-5855-454d-92bc-e8b322584446-kube-api-access-54r5t\") pod \"crc-debug-lh2mf\" (UID: \"3b6a459e-5855-454d-92bc-e8b322584446\") " pod="openshift-must-gather-svxgb/crc-debug-lh2mf" Jan 23 18:17:20 crc kubenswrapper[4606]: I0123 18:17:20.918378 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3b6a459e-5855-454d-92bc-e8b322584446-host\") pod \"crc-debug-lh2mf\" (UID: \"3b6a459e-5855-454d-92bc-e8b322584446\") " pod="openshift-must-gather-svxgb/crc-debug-lh2mf" Jan 23 18:17:20 crc kubenswrapper[4606]: I0123 18:17:20.918427 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54r5t\" (UniqueName: \"kubernetes.io/projected/3b6a459e-5855-454d-92bc-e8b322584446-kube-api-access-54r5t\") pod \"crc-debug-lh2mf\" (UID: \"3b6a459e-5855-454d-92bc-e8b322584446\") " pod="openshift-must-gather-svxgb/crc-debug-lh2mf" Jan 23 18:17:20 crc kubenswrapper[4606]: I0123 18:17:20.918846 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3b6a459e-5855-454d-92bc-e8b322584446-host\") pod \"crc-debug-lh2mf\" (UID: \"3b6a459e-5855-454d-92bc-e8b322584446\") " pod="openshift-must-gather-svxgb/crc-debug-lh2mf" Jan 23 18:17:20 crc kubenswrapper[4606]: I0123 18:17:20.938480 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54r5t\" (UniqueName: \"kubernetes.io/projected/3b6a459e-5855-454d-92bc-e8b322584446-kube-api-access-54r5t\") pod \"crc-debug-lh2mf\" (UID: \"3b6a459e-5855-454d-92bc-e8b322584446\") " pod="openshift-must-gather-svxgb/crc-debug-lh2mf" Jan 23 18:17:20 crc kubenswrapper[4606]: I0123 18:17:20.983058 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-svxgb/crc-debug-lh2mf" Jan 23 18:17:21 crc kubenswrapper[4606]: I0123 18:17:21.187709 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-svxgb/crc-debug-lh2mf" event={"ID":"3b6a459e-5855-454d-92bc-e8b322584446","Type":"ContainerStarted","Data":"ee4052af3a0101d0114802b164f4df570c26e1d82f3e2393c3003c1dbbfd0d4a"} Jan 23 18:17:22 crc kubenswrapper[4606]: I0123 18:17:22.198320 4606 generic.go:334] "Generic (PLEG): container finished" podID="3b6a459e-5855-454d-92bc-e8b322584446" containerID="b124f9efa622d5c86fe119a93b75efd6b9ff032a5aa22836bbd4b9ca19be22ae" exitCode=0 Jan 23 18:17:22 crc kubenswrapper[4606]: I0123 18:17:22.198402 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-svxgb/crc-debug-lh2mf" event={"ID":"3b6a459e-5855-454d-92bc-e8b322584446","Type":"ContainerDied","Data":"b124f9efa622d5c86fe119a93b75efd6b9ff032a5aa22836bbd4b9ca19be22ae"} Jan 23 18:17:22 crc kubenswrapper[4606]: I0123 18:17:22.238507 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-svxgb/crc-debug-lh2mf"] Jan 23 18:17:22 crc kubenswrapper[4606]: I0123 18:17:22.247641 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-svxgb/crc-debug-lh2mf"] Jan 23 18:17:23 crc kubenswrapper[4606]: I0123 18:17:23.775233 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-svxgb/crc-debug-lh2mf" Jan 23 18:17:23 crc kubenswrapper[4606]: I0123 18:17:23.875358 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54r5t\" (UniqueName: \"kubernetes.io/projected/3b6a459e-5855-454d-92bc-e8b322584446-kube-api-access-54r5t\") pod \"3b6a459e-5855-454d-92bc-e8b322584446\" (UID: \"3b6a459e-5855-454d-92bc-e8b322584446\") " Jan 23 18:17:23 crc kubenswrapper[4606]: I0123 18:17:23.875867 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3b6a459e-5855-454d-92bc-e8b322584446-host\") pod \"3b6a459e-5855-454d-92bc-e8b322584446\" (UID: \"3b6a459e-5855-454d-92bc-e8b322584446\") " Jan 23 18:17:23 crc kubenswrapper[4606]: I0123 18:17:23.875965 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b6a459e-5855-454d-92bc-e8b322584446-host" (OuterVolumeSpecName: "host") pod "3b6a459e-5855-454d-92bc-e8b322584446" (UID: "3b6a459e-5855-454d-92bc-e8b322584446"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:17:23 crc kubenswrapper[4606]: I0123 18:17:23.876681 4606 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3b6a459e-5855-454d-92bc-e8b322584446-host\") on node \"crc\" DevicePath \"\"" Jan 23 18:17:23 crc kubenswrapper[4606]: I0123 18:17:23.882513 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b6a459e-5855-454d-92bc-e8b322584446-kube-api-access-54r5t" (OuterVolumeSpecName: "kube-api-access-54r5t") pod "3b6a459e-5855-454d-92bc-e8b322584446" (UID: "3b6a459e-5855-454d-92bc-e8b322584446"). InnerVolumeSpecName "kube-api-access-54r5t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:17:23 crc kubenswrapper[4606]: I0123 18:17:23.979038 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54r5t\" (UniqueName: \"kubernetes.io/projected/3b6a459e-5855-454d-92bc-e8b322584446-kube-api-access-54r5t\") on node \"crc\" DevicePath \"\"" Jan 23 18:17:24 crc kubenswrapper[4606]: I0123 18:17:24.219319 4606 scope.go:117] "RemoveContainer" containerID="b124f9efa622d5c86fe119a93b75efd6b9ff032a5aa22836bbd4b9ca19be22ae" Jan 23 18:17:24 crc kubenswrapper[4606]: I0123 18:17:24.219371 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-svxgb/crc-debug-lh2mf" Jan 23 18:17:24 crc kubenswrapper[4606]: I0123 18:17:24.376378 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b6a459e-5855-454d-92bc-e8b322584446" path="/var/lib/kubelet/pods/3b6a459e-5855-454d-92bc-e8b322584446/volumes" Jan 23 18:17:29 crc kubenswrapper[4606]: I0123 18:17:29.933857 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:17:29 crc kubenswrapper[4606]: I0123 18:17:29.934491 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:17:29 crc kubenswrapper[4606]: I0123 18:17:29.934545 4606 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" Jan 23 18:17:29 crc kubenswrapper[4606]: I0123 18:17:29.935449 4606 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3e20276909d063d72910761ea3cff4cdf7109a1b4f91e48afefdf67b8397aeae"} pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 18:17:29 crc kubenswrapper[4606]: I0123 18:17:29.935509 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" containerID="cri-o://3e20276909d063d72910761ea3cff4cdf7109a1b4f91e48afefdf67b8397aeae" gracePeriod=600 Jan 23 18:17:30 crc kubenswrapper[4606]: I0123 18:17:30.277022 4606 generic.go:334] "Generic (PLEG): container finished" podID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerID="3e20276909d063d72910761ea3cff4cdf7109a1b4f91e48afefdf67b8397aeae" exitCode=0 Jan 23 18:17:30 crc kubenswrapper[4606]: I0123 18:17:30.277338 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerDied","Data":"3e20276909d063d72910761ea3cff4cdf7109a1b4f91e48afefdf67b8397aeae"} Jan 23 18:17:30 crc kubenswrapper[4606]: I0123 18:17:30.277370 4606 scope.go:117] "RemoveContainer" containerID="7d5ed2dce3f047dc64cc6f330d1b9ddcafe7e204485b5912e1b7d34cab031d5d" Jan 23 18:17:31 crc kubenswrapper[4606]: 
I0123 18:17:31.287146 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerStarted","Data":"38860bb3d401e4b01ac5e7a7fc44cace1b9caec38e804cf212d320024f2fe97c"} Jan 23 18:18:02 crc kubenswrapper[4606]: I0123 18:18:02.286683 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-d4bdcdf98-bmdhd_f71cd544-6079-402d-a59f-0a3d4fb1d1f7/barbican-api/0.log" Jan 23 18:18:02 crc kubenswrapper[4606]: I0123 18:18:02.512950 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-d4bdcdf98-bmdhd_f71cd544-6079-402d-a59f-0a3d4fb1d1f7/barbican-api-log/0.log" Jan 23 18:18:02 crc kubenswrapper[4606]: I0123 18:18:02.659456 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5ccb8fb66b-52f57_efd7225f-f2b9-4e1c-81ae-eeb9617963a2/barbican-keystone-listener/0.log" Jan 23 18:18:02 crc kubenswrapper[4606]: I0123 18:18:02.832063 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6d46dd5979-gmzrf_40720f96-b314-4ba9-8f88-b8b4c33ccb5e/barbican-worker/0.log" Jan 23 18:18:02 crc kubenswrapper[4606]: I0123 18:18:02.844244 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5ccb8fb66b-52f57_efd7225f-f2b9-4e1c-81ae-eeb9617963a2/barbican-keystone-listener-log/0.log" Jan 23 18:18:03 crc kubenswrapper[4606]: I0123 18:18:03.052197 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6d46dd5979-gmzrf_40720f96-b314-4ba9-8f88-b8b4c33ccb5e/barbican-worker-log/0.log" Jan 23 18:18:03 crc kubenswrapper[4606]: I0123 18:18:03.218620 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-ftsjh_128354f4-3952-483b-b36b-0e21d1462038/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:18:03 crc kubenswrapper[4606]: I0123 18:18:03.367122 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7c203fb6-d209-47b4-814f-a98707421288/ceilometer-central-agent/0.log" Jan 23 18:18:03 crc kubenswrapper[4606]: I0123 18:18:03.482773 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7c203fb6-d209-47b4-814f-a98707421288/proxy-httpd/0.log" Jan 23 18:18:03 crc kubenswrapper[4606]: I0123 18:18:03.517466 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7c203fb6-d209-47b4-814f-a98707421288/ceilometer-notification-agent/0.log" Jan 23 18:18:03 crc kubenswrapper[4606]: I0123 18:18:03.535737 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7c203fb6-d209-47b4-814f-a98707421288/sg-core/0.log" Jan 23 18:18:03 crc kubenswrapper[4606]: I0123 18:18:03.732139 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_b26a0ac0-2f7e-48a0-aebd-20f7388d0634/cinder-api/0.log" Jan 23 18:18:03 crc kubenswrapper[4606]: I0123 18:18:03.778502 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_b26a0ac0-2f7e-48a0-aebd-20f7388d0634/cinder-api-log/0.log" Jan 23 18:18:03 crc kubenswrapper[4606]: I0123 18:18:03.908840 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_a43fb635-59de-4576-bdf4-9eacebc80a85/cinder-scheduler/1.log" Jan 23 18:18:03 crc kubenswrapper[4606]: I0123 18:18:03.954066 4606 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_a43fb635-59de-4576-bdf4-9eacebc80a85/cinder-scheduler/0.log" Jan 23 18:18:04 crc kubenswrapper[4606]: I0123 18:18:04.055577 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_a43fb635-59de-4576-bdf4-9eacebc80a85/probe/0.log" Jan 23 18:18:04 crc kubenswrapper[4606]: I0123 18:18:04.144843 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-nnzrn_f7c7e859-304f-4f42-b29a-968644679668/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:18:04 crc kubenswrapper[4606]: I0123 18:18:04.313844 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-mrds6_2a00bdda-f978-4909-8408-2a7e91fe2927/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:18:04 crc kubenswrapper[4606]: I0123 18:18:04.459002 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6d949ffffc-w6rtg_83c23281-64ab-4274-9278-bdebed7cba7d/init/0.log" Jan 23 18:18:04 crc kubenswrapper[4606]: I0123 18:18:04.583347 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6d949ffffc-w6rtg_83c23281-64ab-4274-9278-bdebed7cba7d/init/0.log" Jan 23 18:18:04 crc kubenswrapper[4606]: I0123 18:18:04.752830 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-wpbtf_c268f2de-72d2-457d-b7e4-192af4f43fb8/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:18:04 crc kubenswrapper[4606]: I0123 18:18:04.849555 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6d949ffffc-w6rtg_83c23281-64ab-4274-9278-bdebed7cba7d/dnsmasq-dns/0.log" Jan 23 18:18:05 crc kubenswrapper[4606]: I0123 18:18:05.007685 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f8318015-9694-4f24-a509-141912db7eb9/glance-httpd/0.log" Jan 23 18:18:05 crc kubenswrapper[4606]: I0123 18:18:05.015924 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f8318015-9694-4f24-a509-141912db7eb9/glance-log/0.log" Jan 23 18:18:05 crc kubenswrapper[4606]: I0123 18:18:05.162849 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_04e154c1-39df-4c97-b8f7-32591d612abd/glance-httpd/0.log" Jan 23 18:18:05 crc kubenswrapper[4606]: I0123 18:18:05.318165 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_04e154c1-39df-4c97-b8f7-32591d612abd/glance-log/0.log" Jan 23 18:18:05 crc kubenswrapper[4606]: I0123 18:18:05.359130 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-57977896f6-qrk2s_59fc19de-8c69-43e4-bf0d-1cb8ea9a9120/horizon/1.log" Jan 23 18:18:05 crc kubenswrapper[4606]: I0123 18:18:05.853555 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-lgmvv_3aa4047d-cfe6-4348-8a11-c3688c8596b4/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:18:05 crc kubenswrapper[4606]: I0123 18:18:05.930423 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-57977896f6-qrk2s_59fc19de-8c69-43e4-bf0d-1cb8ea9a9120/horizon/0.log" Jan 23 18:18:06 crc kubenswrapper[4606]: I0123 18:18:06.246012 4606 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-8l2kh_5a624f2d-7f1c-4a1d-b24c-280c35dc69d5/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:18:06 crc kubenswrapper[4606]: I0123 18:18:06.606837 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29486521-pdkkp_15d6fa54-20e5-469e-a9c9-cbd2352ca508/keystone-cron/0.log" Jan 23 18:18:06 crc kubenswrapper[4606]: I0123 18:18:06.708230 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-57977896f6-qrk2s_59fc19de-8c69-43e4-bf0d-1cb8ea9a9120/horizon-log/0.log" Jan 23 18:18:06 crc kubenswrapper[4606]: I0123 18:18:06.758456 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_2124b3c9-bd82-4e02-980c-b3de98193833/kube-state-metrics/0.log" Jan 23 18:18:07 crc kubenswrapper[4606]: I0123 18:18:07.494756 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-wq7jm_59e9c932-3b43-4d74-b776-c34cfe6b2a78/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:18:08 crc kubenswrapper[4606]: I0123 18:18:08.187776 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-r9wgf_0b22e008-ecea-4e70-81cf-a7165562fd7e/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:18:08 crc kubenswrapper[4606]: I0123 18:18:08.297105 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6ffcd54777-hkxkz_7a926781-d4e8-4d2a-b8fd-72d4d6273927/neutron-httpd/0.log" Jan 23 18:18:08 crc kubenswrapper[4606]: I0123 18:18:08.566360 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6ffcd54777-hkxkz_7a926781-d4e8-4d2a-b8fd-72d4d6273927/neutron-api/0.log" Jan 23 18:18:09 crc kubenswrapper[4606]: I0123 18:18:09.689386 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_17c593cf-ada1-4111-9b42-4e645428aeff/nova-cell0-conductor-conductor/0.log" Jan 23 18:18:10 crc kubenswrapper[4606]: I0123 18:18:10.613049 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_6200263b-ab5c-41a9-a8cf-d44d8e6db987/nova-cell1-conductor-conductor/0.log" Jan 23 18:18:11 crc kubenswrapper[4606]: I0123 18:18:11.023045 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e390ccaf-fe1a-4fd0-b19f-64d5d43b2005/nova-api-log/0.log" Jan 23 18:18:11 crc kubenswrapper[4606]: I0123 18:18:11.273981 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_8920b95e-6d88-4017-bb76-79677d2bc880/nova-cell1-novncproxy-novncproxy/0.log" Jan 23 18:18:11 crc kubenswrapper[4606]: I0123 18:18:11.594711 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e390ccaf-fe1a-4fd0-b19f-64d5d43b2005/nova-api-api/0.log" Jan 23 18:18:11 crc kubenswrapper[4606]: I0123 18:18:11.601119 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-67r74_56db46d4-844a-44eb-b09e-4512ce955b3f/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:18:11 crc kubenswrapper[4606]: I0123 18:18:11.861658 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-57c54b4584-qk24c_9127ed79-5478-464a-9e71-6026bc46e83a/keystone-api/0.log" Jan 23 18:18:11 crc kubenswrapper[4606]: I0123 18:18:11.870250 
4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-l4ls2_29ce98a4-4d13-42c9-84f4-b77dd0b80bf5/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:18:12 crc kubenswrapper[4606]: I0123 18:18:12.064826 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e591e35b-989c-4122-bd2b-7e732c294b72/nova-metadata-log/0.log" Jan 23 18:18:12 crc kubenswrapper[4606]: I0123 18:18:12.620405 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030/mysql-bootstrap/0.log" Jan 23 18:18:12 crc kubenswrapper[4606]: I0123 18:18:12.934969 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030/mysql-bootstrap/0.log" Jan 23 18:18:12 crc kubenswrapper[4606]: I0123 18:18:12.946992 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b8c963c6-bbae-4e9b-9c6e-7bd0fbf10030/galera/0.log" Jan 23 18:18:12 crc kubenswrapper[4606]: I0123 18:18:12.963958 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_6a99d9eb-47df-471a-80e7-ae2b9e5ef75d/nova-scheduler-scheduler/0.log" Jan 23 18:18:13 crc kubenswrapper[4606]: I0123 18:18:13.243653 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_684acded-8efb-47ea-8bb4-0212874a6826/mysql-bootstrap/0.log" Jan 23 18:18:13 crc kubenswrapper[4606]: I0123 18:18:13.537721 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_684acded-8efb-47ea-8bb4-0212874a6826/galera/0.log" Jan 23 18:18:13 crc kubenswrapper[4606]: I0123 18:18:13.542462 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_684acded-8efb-47ea-8bb4-0212874a6826/mysql-bootstrap/0.log" Jan 23 18:18:13 crc kubenswrapper[4606]: I0123 18:18:13.623802 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_684acded-8efb-47ea-8bb4-0212874a6826/galera/1.log" Jan 23 18:18:13 crc kubenswrapper[4606]: I0123 18:18:13.816035 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_83d4b73c-dbe9-4e0a-96b7-a5c7f5b16c6e/openstackclient/0.log" Jan 23 18:18:14 crc kubenswrapper[4606]: I0123 18:18:14.128425 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-5sjkh_7063b991-5662-4335-9e9e-73f057367708/ovn-controller/0.log" Jan 23 18:18:14 crc kubenswrapper[4606]: I0123 18:18:14.162749 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-grzgc_bc6dd1de-854d-4048-84de-c1a639ec37fe/openstack-network-exporter/0.log" Jan 23 18:18:14 crc kubenswrapper[4606]: I0123 18:18:14.393914 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e591e35b-989c-4122-bd2b-7e732c294b72/nova-metadata-metadata/0.log" Jan 23 18:18:14 crc kubenswrapper[4606]: I0123 18:18:14.405979 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tmz28_222efa1d-6b61-49a8-8b6e-ce7276718806/ovsdb-server-init/0.log" Jan 23 18:18:14 crc kubenswrapper[4606]: I0123 18:18:14.703124 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tmz28_222efa1d-6b61-49a8-8b6e-ce7276718806/ovs-vswitchd/0.log" Jan 23 18:18:14 crc kubenswrapper[4606]: I0123 18:18:14.769933 4606 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tmz28_222efa1d-6b61-49a8-8b6e-ce7276718806/ovsdb-server/0.log" Jan 23 18:18:14 crc kubenswrapper[4606]: I0123 18:18:14.806422 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tmz28_222efa1d-6b61-49a8-8b6e-ce7276718806/ovsdb-server-init/0.log" Jan 23 18:18:15 crc kubenswrapper[4606]: I0123 18:18:15.034399 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-c9969_638511df-28ff-4ac7-9e76-7b3ca9afb52c/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:18:15 crc kubenswrapper[4606]: I0123 18:18:15.113673 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_965dca6f-f73d-4ee0-a511-91793448cc67/openstack-network-exporter/0.log" Jan 23 18:18:15 crc kubenswrapper[4606]: I0123 18:18:15.133453 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_965dca6f-f73d-4ee0-a511-91793448cc67/ovn-northd/0.log" Jan 23 18:18:15 crc kubenswrapper[4606]: I0123 18:18:15.406674 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0221a06d-cf70-4013-8ba4-41d8e7515bb3/openstack-network-exporter/0.log" Jan 23 18:18:15 crc kubenswrapper[4606]: I0123 18:18:15.507478 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0221a06d-cf70-4013-8ba4-41d8e7515bb3/ovsdbserver-nb/0.log" Jan 23 18:18:15 crc kubenswrapper[4606]: I0123 18:18:15.653617 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_14568d10-e2cf-4f22-9c72-62924ba927e8/openstack-network-exporter/0.log" Jan 23 18:18:16 crc kubenswrapper[4606]: I0123 18:18:16.463126 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_14568d10-e2cf-4f22-9c72-62924ba927e8/ovsdbserver-sb/0.log" Jan 23 18:18:16 crc kubenswrapper[4606]: I0123 18:18:16.703064 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_008d9c33-291f-4081-b30f-eb32178efcd0/setup-container/0.log" Jan 23 18:18:16 crc kubenswrapper[4606]: I0123 18:18:16.750123 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-876fbb8bd-6zlfb_b498e8d0-f6bd-474c-8722-c0ccf11188bf/placement-api/0.log" Jan 23 18:18:16 crc kubenswrapper[4606]: I0123 18:18:16.894898 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-876fbb8bd-6zlfb_b498e8d0-f6bd-474c-8722-c0ccf11188bf/placement-log/0.log" Jan 23 18:18:16 crc kubenswrapper[4606]: I0123 18:18:16.991082 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_008d9c33-291f-4081-b30f-eb32178efcd0/setup-container/0.log" Jan 23 18:18:17 crc kubenswrapper[4606]: I0123 18:18:17.061253 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_008d9c33-291f-4081-b30f-eb32178efcd0/rabbitmq/0.log" Jan 23 18:18:17 crc kubenswrapper[4606]: I0123 18:18:17.273906 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_486d7661-6513-4b14-9367-9d4ddb8e6d2c/setup-container/0.log" Jan 23 18:18:17 crc kubenswrapper[4606]: I0123 18:18:17.447629 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_486d7661-6513-4b14-9367-9d4ddb8e6d2c/rabbitmq/0.log" Jan 23 18:18:17 crc kubenswrapper[4606]: I0123 18:18:17.463344 4606 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_486d7661-6513-4b14-9367-9d4ddb8e6d2c/setup-container/0.log" Jan 23 18:18:18 crc kubenswrapper[4606]: I0123 18:18:18.072877 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-twr6s_c1b5063e-ec21-4a81-b0d1-af09f9ac9019/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:18:18 crc kubenswrapper[4606]: I0123 18:18:18.123908 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-mv75m_7f98f049-04dc-4e56-9351-da168167af83/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:18:18 crc kubenswrapper[4606]: I0123 18:18:18.330660 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-6vn9f_d8804eac-7ae1-4c53-898e-858547a88346/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:18:18 crc kubenswrapper[4606]: I0123 18:18:18.505236 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-trdvn_2c1f814f-8269-4be1-9730-4df95c3eb31b/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:18:18 crc kubenswrapper[4606]: I0123 18:18:18.738299 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-cwhrv_d3a48b8b-1205-4128-a038-65263806b2be/ssh-known-hosts-edpm-deployment/0.log" Jan 23 18:18:18 crc kubenswrapper[4606]: I0123 18:18:18.892419 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6786d7bbc5-25lfl_10c3e688-60f4-483e-962b-9e46bd25cfb7/proxy-server/0.log" Jan 23 18:18:19 crc kubenswrapper[4606]: I0123 18:18:19.090548 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-5h5kn_f26de21c-1da5-42d4-9dcd-0e6415109f94/swift-ring-rebalance/0.log" Jan 23 18:18:19 crc kubenswrapper[4606]: I0123 18:18:19.092852 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6786d7bbc5-25lfl_10c3e688-60f4-483e-962b-9e46bd25cfb7/proxy-httpd/0.log" Jan 23 18:18:19 crc kubenswrapper[4606]: I0123 18:18:19.277530 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_06f54b44-852a-4700-ac26-e9d89a545f28/account-auditor/0.log" Jan 23 18:18:19 crc kubenswrapper[4606]: I0123 18:18:19.324198 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_06f54b44-852a-4700-ac26-e9d89a545f28/account-reaper/0.log" Jan 23 18:18:19 crc kubenswrapper[4606]: I0123 18:18:19.464502 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_06f54b44-852a-4700-ac26-e9d89a545f28/account-replicator/0.log" Jan 23 18:18:19 crc kubenswrapper[4606]: I0123 18:18:19.586470 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_06f54b44-852a-4700-ac26-e9d89a545f28/account-server/0.log" Jan 23 18:18:19 crc kubenswrapper[4606]: I0123 18:18:19.603022 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_06f54b44-852a-4700-ac26-e9d89a545f28/container-auditor/0.log" Jan 23 18:18:19 crc kubenswrapper[4606]: I0123 18:18:19.685378 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_06f54b44-852a-4700-ac26-e9d89a545f28/container-replicator/0.log" Jan 23 18:18:19 crc kubenswrapper[4606]: I0123 18:18:19.841269 4606 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_06f54b44-852a-4700-ac26-e9d89a545f28/container-updater/0.log" Jan 23 18:18:19 crc kubenswrapper[4606]: I0123 18:18:19.879099 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_06f54b44-852a-4700-ac26-e9d89a545f28/container-server/0.log" Jan 23 18:18:20 crc kubenswrapper[4606]: I0123 18:18:20.069175 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_06f54b44-852a-4700-ac26-e9d89a545f28/object-expirer/0.log" Jan 23 18:18:20 crc kubenswrapper[4606]: I0123 18:18:20.122532 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_06f54b44-852a-4700-ac26-e9d89a545f28/object-auditor/0.log" Jan 23 18:18:20 crc kubenswrapper[4606]: I0123 18:18:20.149809 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_06f54b44-852a-4700-ac26-e9d89a545f28/object-replicator/0.log" Jan 23 18:18:20 crc kubenswrapper[4606]: I0123 18:18:20.341640 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_06f54b44-852a-4700-ac26-e9d89a545f28/object-server/0.log" Jan 23 18:18:20 crc kubenswrapper[4606]: I0123 18:18:20.352364 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_06f54b44-852a-4700-ac26-e9d89a545f28/object-updater/0.log" Jan 23 18:18:20 crc kubenswrapper[4606]: I0123 18:18:20.550147 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_06f54b44-852a-4700-ac26-e9d89a545f28/rsync/0.log" Jan 23 18:18:20 crc kubenswrapper[4606]: I0123 18:18:20.576385 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_06f54b44-852a-4700-ac26-e9d89a545f28/swift-recon-cron/0.log" Jan 23 18:18:20 crc kubenswrapper[4606]: I0123 18:18:20.761008 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-xv982_c3f0429a-76a3-4da7-9b22-f04d6e4e1cc9/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:18:21 crc kubenswrapper[4606]: I0123 18:18:21.079784 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_d9d65d2d-d203-478c-b791-e2dd5397d0c4/test-operator-logs-container/0.log" Jan 23 18:18:21 crc kubenswrapper[4606]: I0123 18:18:21.271077 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-8x6z9_7b50ff61-a29f-498a-9a41-294c3ad27ba5/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 18:18:21 crc kubenswrapper[4606]: I0123 18:18:21.514198 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_9c7b5ed0-95db-4e9a-b824-a6dc9596dd71/tempest-tests-tempest-tests-runner/0.log" Jan 23 18:18:26 crc kubenswrapper[4606]: I0123 18:18:26.446428 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_c5ffebdb-76b6-40da-b0e0-b48d596fdea8/memcached/0.log" Jan 23 18:18:50 crc kubenswrapper[4606]: I0123 18:18:50.552725 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_286c74fe9c8d1acee3d9fc7ac874072f3ec7a351832f684c42db5d0a89qwhv4_a98a86a4-3b6c-4d97-813e-bedfb701042d/util/0.log" Jan 23 18:18:50 crc kubenswrapper[4606]: I0123 18:18:50.711175 4606 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_286c74fe9c8d1acee3d9fc7ac874072f3ec7a351832f684c42db5d0a89qwhv4_a98a86a4-3b6c-4d97-813e-bedfb701042d/util/0.log" Jan 23 18:18:50 crc kubenswrapper[4606]: I0123 18:18:50.757028 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_286c74fe9c8d1acee3d9fc7ac874072f3ec7a351832f684c42db5d0a89qwhv4_a98a86a4-3b6c-4d97-813e-bedfb701042d/pull/0.log" Jan 23 18:18:50 crc kubenswrapper[4606]: I0123 18:18:50.802424 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_286c74fe9c8d1acee3d9fc7ac874072f3ec7a351832f684c42db5d0a89qwhv4_a98a86a4-3b6c-4d97-813e-bedfb701042d/pull/0.log" Jan 23 18:18:51 crc kubenswrapper[4606]: I0123 18:18:51.034686 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_286c74fe9c8d1acee3d9fc7ac874072f3ec7a351832f684c42db5d0a89qwhv4_a98a86a4-3b6c-4d97-813e-bedfb701042d/pull/0.log" Jan 23 18:18:51 crc kubenswrapper[4606]: I0123 18:18:51.037627 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_286c74fe9c8d1acee3d9fc7ac874072f3ec7a351832f684c42db5d0a89qwhv4_a98a86a4-3b6c-4d97-813e-bedfb701042d/extract/0.log" Jan 23 18:18:51 crc kubenswrapper[4606]: I0123 18:18:51.064263 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_286c74fe9c8d1acee3d9fc7ac874072f3ec7a351832f684c42db5d0a89qwhv4_a98a86a4-3b6c-4d97-813e-bedfb701042d/util/0.log" Jan 23 18:18:51 crc kubenswrapper[4606]: I0123 18:18:51.624968 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-7c96f_db44bfcf-35b0-4cd1-baef-f9e1879e863d/manager/1.log" Jan 23 18:18:51 crc kubenswrapper[4606]: I0123 18:18:51.729109 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-7c96f_db44bfcf-35b0-4cd1-baef-f9e1879e863d/manager/0.log" Jan 23 18:18:51 crc kubenswrapper[4606]: I0123 18:18:51.802790 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-f6w2s_6ab09cf8-68a3-4fac-aa7b-824a2626b3db/manager/1.log" Jan 23 18:18:51 crc kubenswrapper[4606]: I0123 18:18:51.969156 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-f6w2s_6ab09cf8-68a3-4fac-aa7b-824a2626b3db/manager/0.log" Jan 23 18:18:52 crc kubenswrapper[4606]: I0123 18:18:52.084445 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-wbg8x_9ca388c9-7c19-4efb-93da-2d889c972fa7/manager/1.log" Jan 23 18:18:52 crc kubenswrapper[4606]: I0123 18:18:52.109096 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-wbg8x_9ca388c9-7c19-4efb-93da-2d889c972fa7/manager/0.log" Jan 23 18:18:52 crc kubenswrapper[4606]: I0123 18:18:52.242383 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-qxt8l_c58688d9-32d0-4a57-a216-1110af3dd7dc/manager/1.log" Jan 23 18:18:52 crc kubenswrapper[4606]: I0123 18:18:52.348380 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-qxt8l_c58688d9-32d0-4a57-a216-1110af3dd7dc/manager/0.log" Jan 23 18:18:52 crc kubenswrapper[4606]: I0123 18:18:52.403841 4606 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-gzdtf_9b69bbaa-8277-48b5-b0aa-f8d7b7fec647/manager/1.log" Jan 23 18:18:52 crc kubenswrapper[4606]: I0123 18:18:52.482978 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-gzdtf_9b69bbaa-8277-48b5-b0aa-f8d7b7fec647/manager/0.log" Jan 23 18:18:52 crc kubenswrapper[4606]: I0123 18:18:52.631689 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-6hgqs_7e33ab4c-7c48-478a-ae66-119b3ba39b19/manager/1.log" Jan 23 18:18:52 crc kubenswrapper[4606]: I0123 18:18:52.680781 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-6hgqs_7e33ab4c-7c48-478a-ae66-119b3ba39b19/manager/0.log" Jan 23 18:18:53 crc kubenswrapper[4606]: I0123 18:18:53.231416 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-58749ffdfb-jmspw_194942ca-b627-42a1-9491-9e00b44d51d7/manager/1.log" Jan 23 18:18:53 crc kubenswrapper[4606]: I0123 18:18:53.279958 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-6lkpf_7671c11a-cd18-4409-addc-caeb99b22cae/manager/1.log" Jan 23 18:18:53 crc kubenswrapper[4606]: I0123 18:18:53.547830 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-6lkpf_7671c11a-cd18-4409-addc-caeb99b22cae/manager/0.log" Jan 23 18:18:53 crc kubenswrapper[4606]: I0123 18:18:53.550045 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-58749ffdfb-jmspw_194942ca-b627-42a1-9491-9e00b44d51d7/manager/0.log" Jan 23 18:18:53 crc kubenswrapper[4606]: I0123 18:18:53.675370 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-lc2n5_5cfbb45f-ca05-4e78-8392-b905640374fd/manager/1.log" Jan 23 18:18:53 crc kubenswrapper[4606]: I0123 18:18:53.838424 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-lc2n5_5cfbb45f-ca05-4e78-8392-b905640374fd/manager/0.log" Jan 23 18:18:53 crc kubenswrapper[4606]: I0123 18:18:53.997521 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-qn8hz_7215e9f7-67ae-40ad-befc-e411c5acb586/manager/1.log" Jan 23 18:18:54 crc kubenswrapper[4606]: I0123 18:18:54.002447 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-qn8hz_7215e9f7-67ae-40ad-befc-e411c5acb586/manager/0.log" Jan 23 18:18:54 crc kubenswrapper[4606]: I0123 18:18:54.082580 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-lz99v_29bd6d8a-dcc4-46b9-b735-d38e9d7a75a8/manager/1.log" Jan 23 18:18:54 crc kubenswrapper[4606]: I0123 18:18:54.236527 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-lz99v_29bd6d8a-dcc4-46b9-b735-d38e9d7a75a8/manager/0.log" Jan 23 18:18:54 crc kubenswrapper[4606]: I0123 18:18:54.259381 4606 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-sf5dg_23ed80ea-dfab-4552-86d1-972563d8d4b6/manager/1.log" Jan 23 18:18:54 crc kubenswrapper[4606]: I0123 18:18:54.382347 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-sf5dg_23ed80ea-dfab-4552-86d1-972563d8d4b6/manager/0.log" Jan 23 18:18:54 crc kubenswrapper[4606]: I0123 18:18:54.569115 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-dv22m_9ae13310-86a4-4a3f-9967-cb88d1bfa536/manager/1.log" Jan 23 18:18:54 crc kubenswrapper[4606]: I0123 18:18:54.636654 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-dv22m_9ae13310-86a4-4a3f-9967-cb88d1bfa536/manager/0.log" Jan 23 18:18:54 crc kubenswrapper[4606]: I0123 18:18:54.748045 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-kngd9_3112e8c9-b0cb-4d5e-bba9-1caf429b45dc/manager/1.log" Jan 23 18:18:54 crc kubenswrapper[4606]: I0123 18:18:54.886436 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-kngd9_3112e8c9-b0cb-4d5e-bba9-1caf429b45dc/manager/0.log" Jan 23 18:18:54 crc kubenswrapper[4606]: I0123 18:18:54.912931 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx_62e61905-d16f-41cc-b972-f788ced5c022/manager/1.log" Jan 23 18:18:54 crc kubenswrapper[4606]: I0123 18:18:54.990714 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854k5qcx_62e61905-d16f-41cc-b972-f788ced5c022/manager/0.log" Jan 23 18:18:55 crc kubenswrapper[4606]: I0123 18:18:55.107900 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-8f4d99f8c-6x2sb_1f333c20-09e6-410a-88d1-b2e6bfc73768/operator/1.log" Jan 23 18:18:55 crc kubenswrapper[4606]: I0123 18:18:55.256894 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-8f4d99f8c-6x2sb_1f333c20-09e6-410a-88d1-b2e6bfc73768/operator/0.log" Jan 23 18:18:55 crc kubenswrapper[4606]: I0123 18:18:55.352554 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-86756cb559-lfhdj_4d18bd93-7560-4405-9738-38876237699e/manager/1.log" Jan 23 18:18:55 crc kubenswrapper[4606]: I0123 18:18:55.499719 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-tmgzc_9ebd0588-56a8-4c9b-a902-5e1cdbac74c1/registry-server/0.log" Jan 23 18:18:55 crc kubenswrapper[4606]: I0123 18:18:55.574984 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-ftzlw_0ba239ef-03fb-4744-95cb-5aea8d9b0663/manager/1.log" Jan 23 18:18:55 crc kubenswrapper[4606]: I0123 18:18:55.862387 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-5bmr6_566e35f9-5c83-4267-bcf6-898458e0a722/manager/1.log" Jan 23 18:18:55 crc kubenswrapper[4606]: I0123 18:18:55.895699 4606 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-ftzlw_0ba239ef-03fb-4744-95cb-5aea8d9b0663/manager/0.log" Jan 23 18:18:56 crc kubenswrapper[4606]: I0123 18:18:56.067112 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-5bmr6_566e35f9-5c83-4267-bcf6-898458e0a722/manager/0.log" Jan 23 18:18:56 crc kubenswrapper[4606]: I0123 18:18:56.217656 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-tbffj_f8394efb-8969-47c9-98ec-b9845c0c8583/operator/1.log" Jan 23 18:18:56 crc kubenswrapper[4606]: I0123 18:18:56.294654 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-tbffj_f8394efb-8969-47c9-98ec-b9845c0c8583/operator/0.log" Jan 23 18:18:56 crc kubenswrapper[4606]: I0123 18:18:56.497008 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-ttmg4_d040f5d8-e2a8-4171-9f70-797e87f8af96/manager/1.log" Jan 23 18:18:56 crc kubenswrapper[4606]: I0123 18:18:56.526417 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-86756cb559-lfhdj_4d18bd93-7560-4405-9738-38876237699e/manager/0.log" Jan 23 18:18:56 crc kubenswrapper[4606]: I0123 18:18:56.532811 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-ttmg4_d040f5d8-e2a8-4171-9f70-797e87f8af96/manager/0.log" Jan 23 18:18:56 crc kubenswrapper[4606]: I0123 18:18:56.719783 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-h97mh_cb8c2204-661e-4daf-b748-c329e81e04a8/manager/1.log" Jan 23 18:18:56 crc kubenswrapper[4606]: I0123 18:18:56.752771 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-h97mh_cb8c2204-661e-4daf-b748-c329e81e04a8/manager/0.log" Jan 23 18:18:56 crc kubenswrapper[4606]: I0123 18:18:56.866931 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-2z4jl_14094ea9-8be6-405b-a990-9bdbda2a50cb/manager/1.log" Jan 23 18:18:56 crc kubenswrapper[4606]: I0123 18:18:56.903454 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-2z4jl_14094ea9-8be6-405b-a990-9bdbda2a50cb/manager/0.log" Jan 23 18:18:56 crc kubenswrapper[4606]: I0123 18:18:56.983567 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6d9458688d-dnlq6_ae465942-87ad-4a76-8ead-5367413aa9a1/manager/1.log" Jan 23 18:18:57 crc kubenswrapper[4606]: I0123 18:18:57.092690 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6d9458688d-dnlq6_ae465942-87ad-4a76-8ead-5367413aa9a1/manager/0.log" Jan 23 18:19:18 crc kubenswrapper[4606]: I0123 18:19:18.204723 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-pdwg7_b5f2f224-7031-4b81-ba0c-33e9de7311f0/control-plane-machine-set-operator/0.log" Jan 23 18:19:18 crc kubenswrapper[4606]: I0123 18:19:18.360267 4606 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-2b7xg_eecab59c-60d2-4a32-955c-dbf919399422/kube-rbac-proxy/0.log" Jan 23 18:19:18 crc kubenswrapper[4606]: I0123 18:19:18.415259 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-2b7xg_eecab59c-60d2-4a32-955c-dbf919399422/machine-api-operator/0.log" Jan 23 18:19:32 crc kubenswrapper[4606]: I0123 18:19:32.094207 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-69xjr_746ecbf9-06c9-43d1-91fe-2269b27278bb/cert-manager-controller/1.log" Jan 23 18:19:32 crc kubenswrapper[4606]: I0123 18:19:32.209225 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-69xjr_746ecbf9-06c9-43d1-91fe-2269b27278bb/cert-manager-controller/0.log" Jan 23 18:19:32 crc kubenswrapper[4606]: I0123 18:19:32.381048 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-nxh4d_746ca208-2336-44fb-b3e3-be8fa94e33eb/cert-manager-cainjector/1.log" Jan 23 18:19:32 crc kubenswrapper[4606]: I0123 18:19:32.420746 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-nxh4d_746ca208-2336-44fb-b3e3-be8fa94e33eb/cert-manager-cainjector/0.log" Jan 23 18:19:32 crc kubenswrapper[4606]: I0123 18:19:32.528845 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-z99zq_0a2bd774-59e0-478d-b420-4b2017f3f6de/cert-manager-webhook/0.log" Jan 23 18:19:44 crc kubenswrapper[4606]: I0123 18:19:44.744993 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-zx7nw_3767343e-b558-43fb-a813-4ddac7e562a6/nmstate-console-plugin/0.log" Jan 23 18:19:44 crc kubenswrapper[4606]: I0123 18:19:44.891234 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-n8gtl_e53998dd-17ef-4f3f-989b-2304bc9357aa/nmstate-handler/0.log" Jan 23 18:19:44 crc kubenswrapper[4606]: I0123 18:19:44.969036 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-ksm95_ac289554-15c0-4b1d-a8f9-7e1e1ccb93b8/kube-rbac-proxy/0.log" Jan 23 18:19:45 crc kubenswrapper[4606]: I0123 18:19:45.016019 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-ksm95_ac289554-15c0-4b1d-a8f9-7e1e1ccb93b8/nmstate-metrics/0.log" Jan 23 18:19:45 crc kubenswrapper[4606]: I0123 18:19:45.193217 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-pljkp_880e0d16-fda7-42d7-8a25-423c4b200c8d/nmstate-operator/0.log" Jan 23 18:19:45 crc kubenswrapper[4606]: I0123 18:19:45.248752 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-lctvz_3cc45769-a4ca-4c0e-aafd-138134ecd07e/nmstate-webhook/0.log" Jan 23 18:19:51 crc kubenswrapper[4606]: E0123 18:19:51.387756 4606 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.022s" Jan 23 18:19:59 crc kubenswrapper[4606]: I0123 18:19:59.933705 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Jan 23 18:19:59 crc kubenswrapper[4606]: I0123 18:19:59.934208 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:20:17 crc kubenswrapper[4606]: I0123 18:20:17.062388 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-p2mvr_23f67545-2df3-4dd2-8a6f-6de84ddf3a20/kube-rbac-proxy/0.log" Jan 23 18:20:17 crc kubenswrapper[4606]: I0123 18:20:17.279168 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-p2mvr_23f67545-2df3-4dd2-8a6f-6de84ddf3a20/controller/0.log" Jan 23 18:20:17 crc kubenswrapper[4606]: I0123 18:20:17.428562 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v8pdx_5a200ea3-85d2-4098-b6b9-33def10aaee1/cp-frr-files/0.log" Jan 23 18:20:17 crc kubenswrapper[4606]: I0123 18:20:17.555629 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v8pdx_5a200ea3-85d2-4098-b6b9-33def10aaee1/cp-frr-files/0.log" Jan 23 18:20:17 crc kubenswrapper[4606]: I0123 18:20:17.560136 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v8pdx_5a200ea3-85d2-4098-b6b9-33def10aaee1/cp-reloader/0.log" Jan 23 18:20:17 crc kubenswrapper[4606]: I0123 18:20:17.725395 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v8pdx_5a200ea3-85d2-4098-b6b9-33def10aaee1/cp-metrics/0.log" Jan 23 18:20:17 crc kubenswrapper[4606]: I0123 18:20:17.741657 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v8pdx_5a200ea3-85d2-4098-b6b9-33def10aaee1/cp-reloader/0.log" Jan 23 18:20:17 crc kubenswrapper[4606]: I0123 18:20:17.923412 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v8pdx_5a200ea3-85d2-4098-b6b9-33def10aaee1/cp-frr-files/0.log" Jan 23 18:20:18 crc kubenswrapper[4606]: I0123 18:20:18.031492 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v8pdx_5a200ea3-85d2-4098-b6b9-33def10aaee1/cp-reloader/0.log" Jan 23 18:20:18 crc kubenswrapper[4606]: I0123 18:20:18.056844 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v8pdx_5a200ea3-85d2-4098-b6b9-33def10aaee1/cp-metrics/0.log" Jan 23 18:20:18 crc kubenswrapper[4606]: I0123 18:20:18.071948 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v8pdx_5a200ea3-85d2-4098-b6b9-33def10aaee1/cp-metrics/0.log" Jan 23 18:20:18 crc kubenswrapper[4606]: I0123 18:20:18.493700 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v8pdx_5a200ea3-85d2-4098-b6b9-33def10aaee1/cp-frr-files/0.log" Jan 23 18:20:18 crc kubenswrapper[4606]: I0123 18:20:18.539560 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v8pdx_5a200ea3-85d2-4098-b6b9-33def10aaee1/cp-metrics/0.log" Jan 23 18:20:18 crc kubenswrapper[4606]: I0123 18:20:18.550157 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v8pdx_5a200ea3-85d2-4098-b6b9-33def10aaee1/cp-reloader/0.log" Jan 23 18:20:18 crc kubenswrapper[4606]: I0123 18:20:18.586635 4606 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-v8pdx_5a200ea3-85d2-4098-b6b9-33def10aaee1/controller/0.log" Jan 23 18:20:18 crc kubenswrapper[4606]: I0123 18:20:18.929905 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v8pdx_5a200ea3-85d2-4098-b6b9-33def10aaee1/kube-rbac-proxy/0.log" Jan 23 18:20:18 crc kubenswrapper[4606]: I0123 18:20:18.938504 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v8pdx_5a200ea3-85d2-4098-b6b9-33def10aaee1/frr-metrics/0.log" Jan 23 18:20:18 crc kubenswrapper[4606]: I0123 18:20:18.957245 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v8pdx_5a200ea3-85d2-4098-b6b9-33def10aaee1/kube-rbac-proxy-frr/0.log" Jan 23 18:20:19 crc kubenswrapper[4606]: I0123 18:20:19.333766 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v8pdx_5a200ea3-85d2-4098-b6b9-33def10aaee1/reloader/0.log" Jan 23 18:20:19 crc kubenswrapper[4606]: I0123 18:20:19.355093 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-wd7jn_00160898-cf32-4d05-8690-3dc95b50ca0e/frr-k8s-webhook-server/0.log" Jan 23 18:20:19 crc kubenswrapper[4606]: I0123 18:20:19.720468 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-779944bfcf-5nq4l_96e5558f-180a-4f7a-aee6-213fb576e016/manager/1.log" Jan 23 18:20:19 crc kubenswrapper[4606]: I0123 18:20:19.760326 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-779944bfcf-5nq4l_96e5558f-180a-4f7a-aee6-213fb576e016/manager/0.log" Jan 23 18:20:20 crc kubenswrapper[4606]: I0123 18:20:20.061418 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5fd8f4cf46-r8plx_7e92972f-9a8b-45b1-9a65-fb36b7467a66/webhook-server/0.log" Jan 23 18:20:20 crc kubenswrapper[4606]: I0123 18:20:20.208773 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-v8pdx_5a200ea3-85d2-4098-b6b9-33def10aaee1/frr/0.log" Jan 23 18:20:20 crc kubenswrapper[4606]: I0123 18:20:20.361388 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-dw82b_568d0f37-d19e-472e-bc6c-9f6471b5edd6/kube-rbac-proxy/0.log" Jan 23 18:20:20 crc kubenswrapper[4606]: I0123 18:20:20.720209 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-dw82b_568d0f37-d19e-472e-bc6c-9f6471b5edd6/speaker/0.log" Jan 23 18:20:27 crc kubenswrapper[4606]: I0123 18:20:27.574798 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-96dv6"] Jan 23 18:20:27 crc kubenswrapper[4606]: E0123 18:20:27.578213 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b6a459e-5855-454d-92bc-e8b322584446" containerName="container-00" Jan 23 18:20:27 crc kubenswrapper[4606]: I0123 18:20:27.583271 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b6a459e-5855-454d-92bc-e8b322584446" containerName="container-00" Jan 23 18:20:27 crc kubenswrapper[4606]: I0123 18:20:27.583830 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b6a459e-5855-454d-92bc-e8b322584446" containerName="container-00" Jan 23 18:20:27 crc kubenswrapper[4606]: I0123 18:20:27.585691 4606 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-96dv6" Jan 23 18:20:27 crc kubenswrapper[4606]: I0123 18:20:27.595196 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-96dv6"] Jan 23 18:20:27 crc kubenswrapper[4606]: I0123 18:20:27.679069 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b560841-eda2-47a9-a6c5-85e76c078c3f-utilities\") pod \"redhat-marketplace-96dv6\" (UID: \"1b560841-eda2-47a9-a6c5-85e76c078c3f\") " pod="openshift-marketplace/redhat-marketplace-96dv6" Jan 23 18:20:27 crc kubenswrapper[4606]: I0123 18:20:27.679293 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b560841-eda2-47a9-a6c5-85e76c078c3f-catalog-content\") pod \"redhat-marketplace-96dv6\" (UID: \"1b560841-eda2-47a9-a6c5-85e76c078c3f\") " pod="openshift-marketplace/redhat-marketplace-96dv6" Jan 23 18:20:27 crc kubenswrapper[4606]: I0123 18:20:27.679433 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xw8m\" (UniqueName: \"kubernetes.io/projected/1b560841-eda2-47a9-a6c5-85e76c078c3f-kube-api-access-7xw8m\") pod \"redhat-marketplace-96dv6\" (UID: \"1b560841-eda2-47a9-a6c5-85e76c078c3f\") " pod="openshift-marketplace/redhat-marketplace-96dv6" Jan 23 18:20:27 crc kubenswrapper[4606]: I0123 18:20:27.780869 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xw8m\" (UniqueName: \"kubernetes.io/projected/1b560841-eda2-47a9-a6c5-85e76c078c3f-kube-api-access-7xw8m\") pod \"redhat-marketplace-96dv6\" (UID: \"1b560841-eda2-47a9-a6c5-85e76c078c3f\") " pod="openshift-marketplace/redhat-marketplace-96dv6" Jan 23 18:20:27 crc kubenswrapper[4606]: I0123 18:20:27.781262 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b560841-eda2-47a9-a6c5-85e76c078c3f-utilities\") pod \"redhat-marketplace-96dv6\" (UID: \"1b560841-eda2-47a9-a6c5-85e76c078c3f\") " pod="openshift-marketplace/redhat-marketplace-96dv6" Jan 23 18:20:27 crc kubenswrapper[4606]: I0123 18:20:27.781379 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b560841-eda2-47a9-a6c5-85e76c078c3f-catalog-content\") pod \"redhat-marketplace-96dv6\" (UID: \"1b560841-eda2-47a9-a6c5-85e76c078c3f\") " pod="openshift-marketplace/redhat-marketplace-96dv6" Jan 23 18:20:27 crc kubenswrapper[4606]: I0123 18:20:27.781842 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b560841-eda2-47a9-a6c5-85e76c078c3f-utilities\") pod \"redhat-marketplace-96dv6\" (UID: \"1b560841-eda2-47a9-a6c5-85e76c078c3f\") " pod="openshift-marketplace/redhat-marketplace-96dv6" Jan 23 18:20:27 crc kubenswrapper[4606]: I0123 18:20:27.781915 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b560841-eda2-47a9-a6c5-85e76c078c3f-catalog-content\") pod \"redhat-marketplace-96dv6\" (UID: \"1b560841-eda2-47a9-a6c5-85e76c078c3f\") " pod="openshift-marketplace/redhat-marketplace-96dv6" Jan 23 18:20:27 crc kubenswrapper[4606]: I0123 18:20:27.802051 4606 operation_generator.go:637] "MountVolume.SetUp 
Jan 23 18:20:27 crc kubenswrapper[4606]: I0123 18:20:27.802051 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xw8m\" (UniqueName: \"kubernetes.io/projected/1b560841-eda2-47a9-a6c5-85e76c078c3f-kube-api-access-7xw8m\") pod \"redhat-marketplace-96dv6\" (UID: \"1b560841-eda2-47a9-a6c5-85e76c078c3f\") " pod="openshift-marketplace/redhat-marketplace-96dv6"
Jan 23 18:20:27 crc kubenswrapper[4606]: I0123 18:20:27.921811 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-96dv6"
Jan 23 18:20:28 crc kubenswrapper[4606]: I0123 18:20:28.613974 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-96dv6"]
Jan 23 18:20:28 crc kubenswrapper[4606]: I0123 18:20:28.706546 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-96dv6" event={"ID":"1b560841-eda2-47a9-a6c5-85e76c078c3f","Type":"ContainerStarted","Data":"53ca1b1a6e58bfe024d336de95d3b495f22217b7dbb4c7d138b25dadc0c5e516"}
Jan 23 18:20:29 crc kubenswrapper[4606]: I0123 18:20:29.718350 4606 generic.go:334] "Generic (PLEG): container finished" podID="1b560841-eda2-47a9-a6c5-85e76c078c3f" containerID="8056cd98c7435803b99541507219ede1e1a95d3490b4bbcf60455dceb26d208c" exitCode=0
Jan 23 18:20:29 crc kubenswrapper[4606]: I0123 18:20:29.718410 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-96dv6" event={"ID":"1b560841-eda2-47a9-a6c5-85e76c078c3f","Type":"ContainerDied","Data":"8056cd98c7435803b99541507219ede1e1a95d3490b4bbcf60455dceb26d208c"}
Jan 23 18:20:29 crc kubenswrapper[4606]: I0123 18:20:29.721007 4606 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 23 18:20:29 crc kubenswrapper[4606]: I0123 18:20:29.934013 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 18:20:29 crc kubenswrapper[4606]: I0123 18:20:29.934241 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 18:20:30 crc kubenswrapper[4606]: I0123 18:20:30.741102 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-96dv6" event={"ID":"1b560841-eda2-47a9-a6c5-85e76c078c3f","Type":"ContainerStarted","Data":"5cfe1c88a29720e080ac0433ebe288049425a6811480ebd493eea7ad5e801b39"}
Jan 23 18:20:31 crc kubenswrapper[4606]: I0123 18:20:31.767493 4606 generic.go:334] "Generic (PLEG): container finished" podID="1b560841-eda2-47a9-a6c5-85e76c078c3f" containerID="5cfe1c88a29720e080ac0433ebe288049425a6811480ebd493eea7ad5e801b39" exitCode=0
Jan 23 18:20:31 crc kubenswrapper[4606]: I0123 18:20:31.767580 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-96dv6" event={"ID":"1b560841-eda2-47a9-a6c5-85e76c078c3f","Type":"ContainerDied","Data":"5cfe1c88a29720e080ac0433ebe288049425a6811480ebd493eea7ad5e801b39"}
Jan 23 18:20:33 crc kubenswrapper[4606]: I0123 18:20:33.789246 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-96dv6" event={"ID":"1b560841-eda2-47a9-a6c5-85e76c078c3f","Type":"ContainerStarted","Data":"697281539cf3c2182121772e92b7ac346259a2dbc6f6b7949479965921486760"}
Jan 23 18:20:33 crc kubenswrapper[4606]: I0123 18:20:33.827954 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-96dv6" podStartSLOduration=3.886337189 podStartE2EDuration="6.827932218s" podCreationTimestamp="2026-01-23 18:20:27 +0000 UTC" firstStartedPulling="2026-01-23 18:20:29.720776578 +0000 UTC m=+5263.720968984" lastFinishedPulling="2026-01-23 18:20:32.662371607 +0000 UTC m=+5266.662564013" observedRunningTime="2026-01-23 18:20:33.805866617 +0000 UTC m=+5267.806059023" watchObservedRunningTime="2026-01-23 18:20:33.827932218 +0000 UTC m=+5267.828124624"
Jan 23 18:20:35 crc kubenswrapper[4606]: I0123 18:20:35.362997 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4w9xm_41aa3578-313b-4e05-a97c-778b0ad81093/util/0.log"
Jan 23 18:20:35 crc kubenswrapper[4606]: I0123 18:20:35.561904 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4w9xm_41aa3578-313b-4e05-a97c-778b0ad81093/util/0.log"
Jan 23 18:20:35 crc kubenswrapper[4606]: I0123 18:20:35.813530 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4w9xm_41aa3578-313b-4e05-a97c-778b0ad81093/pull/0.log"
Jan 23 18:20:35 crc kubenswrapper[4606]: I0123 18:20:35.813579 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4w9xm_41aa3578-313b-4e05-a97c-778b0ad81093/pull/0.log"
Jan 23 18:20:35 crc kubenswrapper[4606]: I0123 18:20:35.813667 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4w9xm_41aa3578-313b-4e05-a97c-778b0ad81093/pull/0.log"
Jan 23 18:20:36 crc kubenswrapper[4606]: I0123 18:20:36.028673 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mtdn8_82b8bc6e-7be2-4e25-89f7-6cc353562560/util/0.log"
Jan 23 18:20:36 crc kubenswrapper[4606]: I0123 18:20:36.029156 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4w9xm_41aa3578-313b-4e05-a97c-778b0ad81093/extract/0.log"
Jan 23 18:20:36 crc kubenswrapper[4606]: I0123 18:20:36.064977 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4w9xm_41aa3578-313b-4e05-a97c-778b0ad81093/util/0.log"
Jan 23 18:20:36 crc kubenswrapper[4606]: I0123 18:20:36.385748 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mtdn8_82b8bc6e-7be2-4e25-89f7-6cc353562560/util/0.log"
Jan 23 18:20:36 crc kubenswrapper[4606]: I0123 18:20:36.387605 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mtdn8_82b8bc6e-7be2-4e25-89f7-6cc353562560/pull/0.log"
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mtdn8_82b8bc6e-7be2-4e25-89f7-6cc353562560/pull/0.log" Jan 23 18:20:36 crc kubenswrapper[4606]: I0123 18:20:36.597775 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mtdn8_82b8bc6e-7be2-4e25-89f7-6cc353562560/util/0.log" Jan 23 18:20:36 crc kubenswrapper[4606]: I0123 18:20:36.663370 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mtdn8_82b8bc6e-7be2-4e25-89f7-6cc353562560/pull/0.log" Jan 23 18:20:36 crc kubenswrapper[4606]: I0123 18:20:36.672288 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mtdn8_82b8bc6e-7be2-4e25-89f7-6cc353562560/extract/0.log" Jan 23 18:20:36 crc kubenswrapper[4606]: I0123 18:20:36.893782 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fkgdl_06d9e047-39b3-4c22-ac66-dfe3e5d215b3/extract-utilities/0.log" Jan 23 18:20:37 crc kubenswrapper[4606]: I0123 18:20:37.066608 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fkgdl_06d9e047-39b3-4c22-ac66-dfe3e5d215b3/extract-utilities/0.log" Jan 23 18:20:37 crc kubenswrapper[4606]: I0123 18:20:37.111640 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fkgdl_06d9e047-39b3-4c22-ac66-dfe3e5d215b3/extract-content/0.log" Jan 23 18:20:37 crc kubenswrapper[4606]: I0123 18:20:37.232620 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fkgdl_06d9e047-39b3-4c22-ac66-dfe3e5d215b3/extract-content/0.log" Jan 23 18:20:37 crc kubenswrapper[4606]: I0123 18:20:37.366578 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fkgdl_06d9e047-39b3-4c22-ac66-dfe3e5d215b3/extract-utilities/0.log" Jan 23 18:20:37 crc kubenswrapper[4606]: I0123 18:20:37.436504 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fkgdl_06d9e047-39b3-4c22-ac66-dfe3e5d215b3/extract-content/0.log" Jan 23 18:20:37 crc kubenswrapper[4606]: I0123 18:20:37.693577 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-65d6z_98f34be0-cadc-459c-ae73-eec312f1b1bd/extract-utilities/0.log" Jan 23 18:20:37 crc kubenswrapper[4606]: I0123 18:20:37.923049 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-96dv6" Jan 23 18:20:37 crc kubenswrapper[4606]: I0123 18:20:37.923340 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-96dv6" Jan 23 18:20:38 crc kubenswrapper[4606]: I0123 18:20:38.023149 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-96dv6" Jan 23 18:20:38 crc kubenswrapper[4606]: I0123 18:20:38.044923 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-65d6z_98f34be0-cadc-459c-ae73-eec312f1b1bd/extract-content/0.log" Jan 23 18:20:38 crc kubenswrapper[4606]: I0123 18:20:38.069064 4606 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-fkgdl_06d9e047-39b3-4c22-ac66-dfe3e5d215b3/registry-server/0.log" Jan 23 18:20:38 crc kubenswrapper[4606]: I0123 18:20:38.110532 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-65d6z_98f34be0-cadc-459c-ae73-eec312f1b1bd/extract-utilities/0.log" Jan 23 18:20:38 crc kubenswrapper[4606]: I0123 18:20:38.116425 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-65d6z_98f34be0-cadc-459c-ae73-eec312f1b1bd/extract-content/0.log" Jan 23 18:20:38 crc kubenswrapper[4606]: I0123 18:20:38.296859 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-65d6z_98f34be0-cadc-459c-ae73-eec312f1b1bd/extract-content/0.log" Jan 23 18:20:38 crc kubenswrapper[4606]: I0123 18:20:38.385754 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-65d6z_98f34be0-cadc-459c-ae73-eec312f1b1bd/extract-utilities/0.log" Jan 23 18:20:38 crc kubenswrapper[4606]: I0123 18:20:38.659410 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-cdqb6_9110ffb1-da7c-4a1b-95b4-28f757be8ace/marketplace-operator/1.log" Jan 23 18:20:38 crc kubenswrapper[4606]: I0123 18:20:38.890160 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-96dv6" Jan 23 18:20:38 crc kubenswrapper[4606]: I0123 18:20:38.903712 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96dv6_1b560841-eda2-47a9-a6c5-85e76c078c3f/extract-utilities/0.log" Jan 23 18:20:38 crc kubenswrapper[4606]: I0123 18:20:38.947043 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-cdqb6_9110ffb1-da7c-4a1b-95b4-28f757be8ace/marketplace-operator/0.log" Jan 23 18:20:38 crc kubenswrapper[4606]: I0123 18:20:38.977997 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-96dv6"] Jan 23 18:20:39 crc kubenswrapper[4606]: I0123 18:20:39.293125 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96dv6_1b560841-eda2-47a9-a6c5-85e76c078c3f/extract-content/0.log" Jan 23 18:20:39 crc kubenswrapper[4606]: I0123 18:20:39.318003 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-65d6z_98f34be0-cadc-459c-ae73-eec312f1b1bd/registry-server/0.log" Jan 23 18:20:39 crc kubenswrapper[4606]: I0123 18:20:39.370046 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96dv6_1b560841-eda2-47a9-a6c5-85e76c078c3f/extract-utilities/0.log" Jan 23 18:20:39 crc kubenswrapper[4606]: I0123 18:20:39.383368 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96dv6_1b560841-eda2-47a9-a6c5-85e76c078c3f/extract-content/0.log" Jan 23 18:20:39 crc kubenswrapper[4606]: I0123 18:20:39.634302 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96dv6_1b560841-eda2-47a9-a6c5-85e76c078c3f/extract-utilities/0.log" Jan 23 18:20:39 crc kubenswrapper[4606]: I0123 18:20:39.638912 4606 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-96dv6_1b560841-eda2-47a9-a6c5-85e76c078c3f/extract-content/0.log" Jan 23 18:20:39 crc kubenswrapper[4606]: I0123 18:20:39.672695 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-96dv6_1b560841-eda2-47a9-a6c5-85e76c078c3f/registry-server/0.log" Jan 23 18:20:39 crc kubenswrapper[4606]: I0123 18:20:39.874680 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nmtg9_bcf0df0a-2bd4-4927-b81a-056a26b821b2/extract-utilities/0.log" Jan 23 18:20:40 crc kubenswrapper[4606]: I0123 18:20:40.062993 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nmtg9_bcf0df0a-2bd4-4927-b81a-056a26b821b2/extract-utilities/0.log" Jan 23 18:20:40 crc kubenswrapper[4606]: I0123 18:20:40.093985 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nmtg9_bcf0df0a-2bd4-4927-b81a-056a26b821b2/extract-content/0.log" Jan 23 18:20:40 crc kubenswrapper[4606]: I0123 18:20:40.130663 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nmtg9_bcf0df0a-2bd4-4927-b81a-056a26b821b2/extract-content/0.log" Jan 23 18:20:40 crc kubenswrapper[4606]: I0123 18:20:40.239278 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nmtg9_bcf0df0a-2bd4-4927-b81a-056a26b821b2/extract-utilities/0.log" Jan 23 18:20:40 crc kubenswrapper[4606]: I0123 18:20:40.384963 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nmtg9_bcf0df0a-2bd4-4927-b81a-056a26b821b2/extract-content/0.log" Jan 23 18:20:40 crc kubenswrapper[4606]: I0123 18:20:40.424773 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w6dmk_0b0ff184-9aa5-4873-8c4c-eec703f8f541/extract-utilities/0.log" Jan 23 18:20:40 crc kubenswrapper[4606]: I0123 18:20:40.585676 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w6dmk_0b0ff184-9aa5-4873-8c4c-eec703f8f541/extract-content/0.log" Jan 23 18:20:40 crc kubenswrapper[4606]: I0123 18:20:40.618092 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w6dmk_0b0ff184-9aa5-4873-8c4c-eec703f8f541/extract-content/0.log" Jan 23 18:20:40 crc kubenswrapper[4606]: I0123 18:20:40.636518 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nmtg9_bcf0df0a-2bd4-4927-b81a-056a26b821b2/registry-server/0.log" Jan 23 18:20:40 crc kubenswrapper[4606]: I0123 18:20:40.662885 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w6dmk_0b0ff184-9aa5-4873-8c4c-eec703f8f541/extract-utilities/0.log" Jan 23 18:20:40 crc kubenswrapper[4606]: I0123 18:20:40.854753 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-96dv6" podUID="1b560841-eda2-47a9-a6c5-85e76c078c3f" containerName="registry-server" containerID="cri-o://697281539cf3c2182121772e92b7ac346259a2dbc6f6b7949479965921486760" gracePeriod=2 Jan 23 18:20:40 crc kubenswrapper[4606]: I0123 18:20:40.870899 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w6dmk_0b0ff184-9aa5-4873-8c4c-eec703f8f541/extract-content/0.log" Jan 23 18:20:40 
Jan 23 18:20:40 crc kubenswrapper[4606]: I0123 18:20:40.883745 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w6dmk_0b0ff184-9aa5-4873-8c4c-eec703f8f541/extract-utilities/0.log"
Jan 23 18:20:41 crc kubenswrapper[4606]: I0123 18:20:41.406305 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-96dv6"
Jan 23 18:20:41 crc kubenswrapper[4606]: I0123 18:20:41.578286 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b560841-eda2-47a9-a6c5-85e76c078c3f-utilities\") pod \"1b560841-eda2-47a9-a6c5-85e76c078c3f\" (UID: \"1b560841-eda2-47a9-a6c5-85e76c078c3f\") "
Jan 23 18:20:41 crc kubenswrapper[4606]: I0123 18:20:41.578690 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xw8m\" (UniqueName: \"kubernetes.io/projected/1b560841-eda2-47a9-a6c5-85e76c078c3f-kube-api-access-7xw8m\") pod \"1b560841-eda2-47a9-a6c5-85e76c078c3f\" (UID: \"1b560841-eda2-47a9-a6c5-85e76c078c3f\") "
Jan 23 18:20:41 crc kubenswrapper[4606]: I0123 18:20:41.578769 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b560841-eda2-47a9-a6c5-85e76c078c3f-catalog-content\") pod \"1b560841-eda2-47a9-a6c5-85e76c078c3f\" (UID: \"1b560841-eda2-47a9-a6c5-85e76c078c3f\") "
Jan 23 18:20:41 crc kubenswrapper[4606]: I0123 18:20:41.583671 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b560841-eda2-47a9-a6c5-85e76c078c3f-utilities" (OuterVolumeSpecName: "utilities") pod "1b560841-eda2-47a9-a6c5-85e76c078c3f" (UID: "1b560841-eda2-47a9-a6c5-85e76c078c3f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 18:20:41 crc kubenswrapper[4606]: I0123 18:20:41.625240 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b560841-eda2-47a9-a6c5-85e76c078c3f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b560841-eda2-47a9-a6c5-85e76c078c3f" (UID: "1b560841-eda2-47a9-a6c5-85e76c078c3f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 18:20:41 crc kubenswrapper[4606]: I0123 18:20:41.633792 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b560841-eda2-47a9-a6c5-85e76c078c3f-kube-api-access-7xw8m" (OuterVolumeSpecName: "kube-api-access-7xw8m") pod "1b560841-eda2-47a9-a6c5-85e76c078c3f" (UID: "1b560841-eda2-47a9-a6c5-85e76c078c3f"). InnerVolumeSpecName "kube-api-access-7xw8m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:20:41 crc kubenswrapper[4606]: I0123 18:20:41.680908 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b560841-eda2-47a9-a6c5-85e76c078c3f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 18:20:41 crc kubenswrapper[4606]: I0123 18:20:41.680933 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b560841-eda2-47a9-a6c5-85e76c078c3f-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 18:20:41 crc kubenswrapper[4606]: I0123 18:20:41.680945 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xw8m\" (UniqueName: \"kubernetes.io/projected/1b560841-eda2-47a9-a6c5-85e76c078c3f-kube-api-access-7xw8m\") on node \"crc\" DevicePath \"\""
Jan 23 18:20:41 crc kubenswrapper[4606]: I0123 18:20:41.685019 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w6dmk_0b0ff184-9aa5-4873-8c4c-eec703f8f541/registry-server/0.log"
Jan 23 18:20:41 crc kubenswrapper[4606]: I0123 18:20:41.865569 4606 generic.go:334] "Generic (PLEG): container finished" podID="1b560841-eda2-47a9-a6c5-85e76c078c3f" containerID="697281539cf3c2182121772e92b7ac346259a2dbc6f6b7949479965921486760" exitCode=0
Jan 23 18:20:41 crc kubenswrapper[4606]: I0123 18:20:41.865674 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-96dv6" event={"ID":"1b560841-eda2-47a9-a6c5-85e76c078c3f","Type":"ContainerDied","Data":"697281539cf3c2182121772e92b7ac346259a2dbc6f6b7949479965921486760"}
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-96dv6" Jan 23 18:20:41 crc kubenswrapper[4606]: I0123 18:20:41.865725 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-96dv6" event={"ID":"1b560841-eda2-47a9-a6c5-85e76c078c3f","Type":"ContainerDied","Data":"53ca1b1a6e58bfe024d336de95d3b495f22217b7dbb4c7d138b25dadc0c5e516"} Jan 23 18:20:41 crc kubenswrapper[4606]: I0123 18:20:41.865755 4606 scope.go:117] "RemoveContainer" containerID="697281539cf3c2182121772e92b7ac346259a2dbc6f6b7949479965921486760" Jan 23 18:20:41 crc kubenswrapper[4606]: I0123 18:20:41.902610 4606 scope.go:117] "RemoveContainer" containerID="5cfe1c88a29720e080ac0433ebe288049425a6811480ebd493eea7ad5e801b39" Jan 23 18:20:41 crc kubenswrapper[4606]: I0123 18:20:41.910381 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-96dv6"] Jan 23 18:20:41 crc kubenswrapper[4606]: I0123 18:20:41.923165 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-96dv6"] Jan 23 18:20:41 crc kubenswrapper[4606]: I0123 18:20:41.929076 4606 scope.go:117] "RemoveContainer" containerID="8056cd98c7435803b99541507219ede1e1a95d3490b4bbcf60455dceb26d208c" Jan 23 18:20:41 crc kubenswrapper[4606]: I0123 18:20:41.970763 4606 scope.go:117] "RemoveContainer" containerID="697281539cf3c2182121772e92b7ac346259a2dbc6f6b7949479965921486760" Jan 23 18:20:41 crc kubenswrapper[4606]: E0123 18:20:41.971146 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"697281539cf3c2182121772e92b7ac346259a2dbc6f6b7949479965921486760\": container with ID starting with 697281539cf3c2182121772e92b7ac346259a2dbc6f6b7949479965921486760 not found: ID does not exist" containerID="697281539cf3c2182121772e92b7ac346259a2dbc6f6b7949479965921486760" Jan 23 18:20:41 crc kubenswrapper[4606]: I0123 18:20:41.971181 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"697281539cf3c2182121772e92b7ac346259a2dbc6f6b7949479965921486760"} err="failed to get container status \"697281539cf3c2182121772e92b7ac346259a2dbc6f6b7949479965921486760\": rpc error: code = NotFound desc = could not find container \"697281539cf3c2182121772e92b7ac346259a2dbc6f6b7949479965921486760\": container with ID starting with 697281539cf3c2182121772e92b7ac346259a2dbc6f6b7949479965921486760 not found: ID does not exist" Jan 23 18:20:41 crc kubenswrapper[4606]: I0123 18:20:41.971210 4606 scope.go:117] "RemoveContainer" containerID="5cfe1c88a29720e080ac0433ebe288049425a6811480ebd493eea7ad5e801b39" Jan 23 18:20:41 crc kubenswrapper[4606]: E0123 18:20:41.971501 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cfe1c88a29720e080ac0433ebe288049425a6811480ebd493eea7ad5e801b39\": container with ID starting with 5cfe1c88a29720e080ac0433ebe288049425a6811480ebd493eea7ad5e801b39 not found: ID does not exist" containerID="5cfe1c88a29720e080ac0433ebe288049425a6811480ebd493eea7ad5e801b39" Jan 23 18:20:41 crc kubenswrapper[4606]: I0123 18:20:41.971533 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cfe1c88a29720e080ac0433ebe288049425a6811480ebd493eea7ad5e801b39"} err="failed to get container status \"5cfe1c88a29720e080ac0433ebe288049425a6811480ebd493eea7ad5e801b39\": rpc error: code = NotFound desc = could not find 
container \"5cfe1c88a29720e080ac0433ebe288049425a6811480ebd493eea7ad5e801b39\": container with ID starting with 5cfe1c88a29720e080ac0433ebe288049425a6811480ebd493eea7ad5e801b39 not found: ID does not exist" Jan 23 18:20:41 crc kubenswrapper[4606]: I0123 18:20:41.971551 4606 scope.go:117] "RemoveContainer" containerID="8056cd98c7435803b99541507219ede1e1a95d3490b4bbcf60455dceb26d208c" Jan 23 18:20:41 crc kubenswrapper[4606]: E0123 18:20:41.971968 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8056cd98c7435803b99541507219ede1e1a95d3490b4bbcf60455dceb26d208c\": container with ID starting with 8056cd98c7435803b99541507219ede1e1a95d3490b4bbcf60455dceb26d208c not found: ID does not exist" containerID="8056cd98c7435803b99541507219ede1e1a95d3490b4bbcf60455dceb26d208c" Jan 23 18:20:41 crc kubenswrapper[4606]: I0123 18:20:41.971991 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8056cd98c7435803b99541507219ede1e1a95d3490b4bbcf60455dceb26d208c"} err="failed to get container status \"8056cd98c7435803b99541507219ede1e1a95d3490b4bbcf60455dceb26d208c\": rpc error: code = NotFound desc = could not find container \"8056cd98c7435803b99541507219ede1e1a95d3490b4bbcf60455dceb26d208c\": container with ID starting with 8056cd98c7435803b99541507219ede1e1a95d3490b4bbcf60455dceb26d208c not found: ID does not exist" Jan 23 18:20:42 crc kubenswrapper[4606]: I0123 18:20:42.378212 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b560841-eda2-47a9-a6c5-85e76c078c3f" path="/var/lib/kubelet/pods/1b560841-eda2-47a9-a6c5-85e76c078c3f/volumes" Jan 23 18:20:59 crc kubenswrapper[4606]: I0123 18:20:59.933691 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:20:59 crc kubenswrapper[4606]: I0123 18:20:59.934216 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:20:59 crc kubenswrapper[4606]: I0123 18:20:59.934263 4606 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" Jan 23 18:20:59 crc kubenswrapper[4606]: I0123 18:20:59.934960 4606 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"38860bb3d401e4b01ac5e7a7fc44cace1b9caec38e804cf212d320024f2fe97c"} pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 18:20:59 crc kubenswrapper[4606]: I0123 18:20:59.935010 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" containerID="cri-o://38860bb3d401e4b01ac5e7a7fc44cace1b9caec38e804cf212d320024f2fe97c" gracePeriod=600 Jan 23 18:21:00 crc kubenswrapper[4606]: E0123 18:21:00.065065 4606 
Jan 23 18:20:59 crc kubenswrapper[4606]: I0123 18:20:59.933691 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 18:20:59 crc kubenswrapper[4606]: I0123 18:20:59.934216 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 18:20:59 crc kubenswrapper[4606]: I0123 18:20:59.934263 4606 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7"
Jan 23 18:20:59 crc kubenswrapper[4606]: I0123 18:20:59.934960 4606 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"38860bb3d401e4b01ac5e7a7fc44cace1b9caec38e804cf212d320024f2fe97c"} pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 18:20:59 crc kubenswrapper[4606]: I0123 18:20:59.935010 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" containerID="cri-o://38860bb3d401e4b01ac5e7a7fc44cace1b9caec38e804cf212d320024f2fe97c" gracePeriod=600
Jan 23 18:21:00 crc kubenswrapper[4606]: E0123 18:21:00.065065 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9"
Jan 23 18:21:01 crc kubenswrapper[4606]: I0123 18:21:01.054027 4606 generic.go:334] "Generic (PLEG): container finished" podID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerID="38860bb3d401e4b01ac5e7a7fc44cace1b9caec38e804cf212d320024f2fe97c" exitCode=0
Jan 23 18:21:01 crc kubenswrapper[4606]: I0123 18:21:01.054072 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerDied","Data":"38860bb3d401e4b01ac5e7a7fc44cace1b9caec38e804cf212d320024f2fe97c"}
Jan 23 18:21:01 crc kubenswrapper[4606]: I0123 18:21:01.054356 4606 scope.go:117] "RemoveContainer" containerID="3e20276909d063d72910761ea3cff4cdf7109a1b4f91e48afefdf67b8397aeae"
Jan 23 18:21:01 crc kubenswrapper[4606]: I0123 18:21:01.055742 4606 scope.go:117] "RemoveContainer" containerID="38860bb3d401e4b01ac5e7a7fc44cace1b9caec38e804cf212d320024f2fe97c"
Jan 23 18:21:01 crc kubenswrapper[4606]: E0123 18:21:01.056335 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9"
Jan 23 18:21:12 crc kubenswrapper[4606]: I0123 18:21:12.366872 4606 scope.go:117] "RemoveContainer" containerID="38860bb3d401e4b01ac5e7a7fc44cace1b9caec38e804cf212d320024f2fe97c"
Jan 23 18:21:12 crc kubenswrapper[4606]: E0123 18:21:12.367727 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9"
Jan 23 18:21:26 crc kubenswrapper[4606]: I0123 18:21:26.371403 4606 scope.go:117] "RemoveContainer" containerID="38860bb3d401e4b01ac5e7a7fc44cace1b9caec38e804cf212d320024f2fe97c"
Jan 23 18:21:26 crc kubenswrapper[4606]: E0123 18:21:26.372336 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9"
Jan 23 18:21:41 crc kubenswrapper[4606]: I0123 18:21:41.366634 4606 scope.go:117] "RemoveContainer" containerID="38860bb3d401e4b01ac5e7a7fc44cace1b9caec38e804cf212d320024f2fe97c"
Jan 23 18:21:41 crc kubenswrapper[4606]: E0123 18:21:41.367509 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9"
Jan 23 18:21:54 crc kubenswrapper[4606]: I0123 18:21:54.369422 4606 scope.go:117] "RemoveContainer" containerID="38860bb3d401e4b01ac5e7a7fc44cace1b9caec38e804cf212d320024f2fe97c"
Jan 23 18:21:54 crc kubenswrapper[4606]: E0123 18:21:54.370260 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9"
Jan 23 18:22:08 crc kubenswrapper[4606]: I0123 18:22:08.367013 4606 scope.go:117] "RemoveContainer" containerID="38860bb3d401e4b01ac5e7a7fc44cace1b9caec38e804cf212d320024f2fe97c"
Jan 23 18:22:08 crc kubenswrapper[4606]: E0123 18:22:08.367922 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9"
Jan 23 18:22:23 crc kubenswrapper[4606]: I0123 18:22:23.366047 4606 scope.go:117] "RemoveContainer" containerID="38860bb3d401e4b01ac5e7a7fc44cace1b9caec38e804cf212d320024f2fe97c"
Jan 23 18:22:23 crc kubenswrapper[4606]: E0123 18:22:23.366705 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9"
Jan 23 18:22:35 crc kubenswrapper[4606]: I0123 18:22:35.366271 4606 scope.go:117] "RemoveContainer" containerID="38860bb3d401e4b01ac5e7a7fc44cace1b9caec38e804cf212d320024f2fe97c"
Jan 23 18:22:35 crc kubenswrapper[4606]: E0123 18:22:35.366958 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9"
podUID="1b560841-eda2-47a9-a6c5-85e76c078c3f" containerName="extract-content" Jan 23 18:22:46 crc kubenswrapper[4606]: E0123 18:22:46.069087 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b560841-eda2-47a9-a6c5-85e76c078c3f" containerName="registry-server" Jan 23 18:22:46 crc kubenswrapper[4606]: I0123 18:22:46.069094 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b560841-eda2-47a9-a6c5-85e76c078c3f" containerName="registry-server" Jan 23 18:22:46 crc kubenswrapper[4606]: E0123 18:22:46.069116 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b560841-eda2-47a9-a6c5-85e76c078c3f" containerName="extract-utilities" Jan 23 18:22:46 crc kubenswrapper[4606]: I0123 18:22:46.069125 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b560841-eda2-47a9-a6c5-85e76c078c3f" containerName="extract-utilities" Jan 23 18:22:46 crc kubenswrapper[4606]: I0123 18:22:46.069377 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b560841-eda2-47a9-a6c5-85e76c078c3f" containerName="registry-server" Jan 23 18:22:46 crc kubenswrapper[4606]: I0123 18:22:46.070981 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-drdjg" Jan 23 18:22:46 crc kubenswrapper[4606]: I0123 18:22:46.092458 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-drdjg"] Jan 23 18:22:46 crc kubenswrapper[4606]: I0123 18:22:46.172637 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca1e362-ea26-49f9-b0e9-8710294f3914-catalog-content\") pod \"certified-operators-drdjg\" (UID: \"2ca1e362-ea26-49f9-b0e9-8710294f3914\") " pod="openshift-marketplace/certified-operators-drdjg" Jan 23 18:22:46 crc kubenswrapper[4606]: I0123 18:22:46.172714 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca1e362-ea26-49f9-b0e9-8710294f3914-utilities\") pod \"certified-operators-drdjg\" (UID: \"2ca1e362-ea26-49f9-b0e9-8710294f3914\") " pod="openshift-marketplace/certified-operators-drdjg" Jan 23 18:22:46 crc kubenswrapper[4606]: I0123 18:22:46.172824 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsr7d\" (UniqueName: \"kubernetes.io/projected/2ca1e362-ea26-49f9-b0e9-8710294f3914-kube-api-access-fsr7d\") pod \"certified-operators-drdjg\" (UID: \"2ca1e362-ea26-49f9-b0e9-8710294f3914\") " pod="openshift-marketplace/certified-operators-drdjg" Jan 23 18:22:46 crc kubenswrapper[4606]: I0123 18:22:46.274614 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsr7d\" (UniqueName: \"kubernetes.io/projected/2ca1e362-ea26-49f9-b0e9-8710294f3914-kube-api-access-fsr7d\") pod \"certified-operators-drdjg\" (UID: \"2ca1e362-ea26-49f9-b0e9-8710294f3914\") " pod="openshift-marketplace/certified-operators-drdjg" Jan 23 18:22:46 crc kubenswrapper[4606]: I0123 18:22:46.274987 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca1e362-ea26-49f9-b0e9-8710294f3914-catalog-content\") pod \"certified-operators-drdjg\" (UID: \"2ca1e362-ea26-49f9-b0e9-8710294f3914\") " pod="openshift-marketplace/certified-operators-drdjg" Jan 23 18:22:46 crc kubenswrapper[4606]: I0123 
18:22:46.275131 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca1e362-ea26-49f9-b0e9-8710294f3914-utilities\") pod \"certified-operators-drdjg\" (UID: \"2ca1e362-ea26-49f9-b0e9-8710294f3914\") " pod="openshift-marketplace/certified-operators-drdjg" Jan 23 18:22:46 crc kubenswrapper[4606]: I0123 18:22:46.275690 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca1e362-ea26-49f9-b0e9-8710294f3914-catalog-content\") pod \"certified-operators-drdjg\" (UID: \"2ca1e362-ea26-49f9-b0e9-8710294f3914\") " pod="openshift-marketplace/certified-operators-drdjg" Jan 23 18:22:46 crc kubenswrapper[4606]: I0123 18:22:46.275752 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca1e362-ea26-49f9-b0e9-8710294f3914-utilities\") pod \"certified-operators-drdjg\" (UID: \"2ca1e362-ea26-49f9-b0e9-8710294f3914\") " pod="openshift-marketplace/certified-operators-drdjg" Jan 23 18:22:46 crc kubenswrapper[4606]: I0123 18:22:46.310373 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsr7d\" (UniqueName: \"kubernetes.io/projected/2ca1e362-ea26-49f9-b0e9-8710294f3914-kube-api-access-fsr7d\") pod \"certified-operators-drdjg\" (UID: \"2ca1e362-ea26-49f9-b0e9-8710294f3914\") " pod="openshift-marketplace/certified-operators-drdjg" Jan 23 18:22:46 crc kubenswrapper[4606]: I0123 18:22:46.434916 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-drdjg" Jan 23 18:22:47 crc kubenswrapper[4606]: I0123 18:22:47.034053 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-drdjg"] Jan 23 18:22:47 crc kubenswrapper[4606]: I0123 18:22:47.365617 4606 scope.go:117] "RemoveContainer" containerID="38860bb3d401e4b01ac5e7a7fc44cace1b9caec38e804cf212d320024f2fe97c" Jan 23 18:22:47 crc kubenswrapper[4606]: E0123 18:22:47.365936 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 18:22:48 crc kubenswrapper[4606]: I0123 18:22:48.049222 4606 generic.go:334] "Generic (PLEG): container finished" podID="2ca1e362-ea26-49f9-b0e9-8710294f3914" containerID="39fc5dcf583c66f9888deb47dfe8c833b43d93cc3512ffaea3fac5e74e148ee1" exitCode=0 Jan 23 18:22:48 crc kubenswrapper[4606]: I0123 18:22:48.049471 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drdjg" event={"ID":"2ca1e362-ea26-49f9-b0e9-8710294f3914","Type":"ContainerDied","Data":"39fc5dcf583c66f9888deb47dfe8c833b43d93cc3512ffaea3fac5e74e148ee1"} Jan 23 18:22:48 crc kubenswrapper[4606]: I0123 18:22:48.049550 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drdjg" event={"ID":"2ca1e362-ea26-49f9-b0e9-8710294f3914","Type":"ContainerStarted","Data":"a98a91e802db9715913372a7fedacc7be9b71433caf47e2d3b2ca5ff846b2042"} Jan 23 18:22:49 crc kubenswrapper[4606]: I0123 18:22:49.059800 4606 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-drdjg" event={"ID":"2ca1e362-ea26-49f9-b0e9-8710294f3914","Type":"ContainerStarted","Data":"b5f043132ccf626f9af5c9578d397b453284ce0581baa67793b32c284c25a9f6"} Jan 23 18:22:50 crc kubenswrapper[4606]: I0123 18:22:50.071375 4606 generic.go:334] "Generic (PLEG): container finished" podID="2ca1e362-ea26-49f9-b0e9-8710294f3914" containerID="b5f043132ccf626f9af5c9578d397b453284ce0581baa67793b32c284c25a9f6" exitCode=0 Jan 23 18:22:50 crc kubenswrapper[4606]: I0123 18:22:50.071752 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drdjg" event={"ID":"2ca1e362-ea26-49f9-b0e9-8710294f3914","Type":"ContainerDied","Data":"b5f043132ccf626f9af5c9578d397b453284ce0581baa67793b32c284c25a9f6"} Jan 23 18:22:50 crc kubenswrapper[4606]: I0123 18:22:50.127123 4606 scope.go:117] "RemoveContainer" containerID="073fd2397714e1cfd0a896b60d82cc02c7e50761a3d756630b2aff2472dfbef6" Jan 23 18:22:51 crc kubenswrapper[4606]: I0123 18:22:51.085560 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drdjg" event={"ID":"2ca1e362-ea26-49f9-b0e9-8710294f3914","Type":"ContainerStarted","Data":"50b5538edc0cb4b7c4bb07b549fb9f3335b376682899f85d9637cc11779d8a4f"} Jan 23 18:22:51 crc kubenswrapper[4606]: I0123 18:22:51.113895 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-drdjg" podStartSLOduration=2.598138665 podStartE2EDuration="5.113870739s" podCreationTimestamp="2026-01-23 18:22:46 +0000 UTC" firstStartedPulling="2026-01-23 18:22:48.051925651 +0000 UTC m=+5402.052118067" lastFinishedPulling="2026-01-23 18:22:50.567657725 +0000 UTC m=+5404.567850141" observedRunningTime="2026-01-23 18:22:51.101373383 +0000 UTC m=+5405.101565809" watchObservedRunningTime="2026-01-23 18:22:51.113870739 +0000 UTC m=+5405.114063155" Jan 23 18:22:56 crc kubenswrapper[4606]: I0123 18:22:56.436540 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-drdjg" Jan 23 18:22:56 crc kubenswrapper[4606]: I0123 18:22:56.437079 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-drdjg" Jan 23 18:22:56 crc kubenswrapper[4606]: I0123 18:22:56.480333 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-drdjg" Jan 23 18:22:57 crc kubenswrapper[4606]: I0123 18:22:57.192231 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-drdjg" Jan 23 18:22:58 crc kubenswrapper[4606]: I0123 18:22:58.206546 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-drdjg"] Jan 23 18:22:59 crc kubenswrapper[4606]: I0123 18:22:59.156651 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-drdjg" podUID="2ca1e362-ea26-49f9-b0e9-8710294f3914" containerName="registry-server" containerID="cri-o://50b5538edc0cb4b7c4bb07b549fb9f3335b376682899f85d9637cc11779d8a4f" gracePeriod=2 Jan 23 18:22:59 crc kubenswrapper[4606]: I0123 18:22:59.656186 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-drdjg" Jan 23 18:22:59 crc kubenswrapper[4606]: I0123 18:22:59.784024 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca1e362-ea26-49f9-b0e9-8710294f3914-utilities\") pod \"2ca1e362-ea26-49f9-b0e9-8710294f3914\" (UID: \"2ca1e362-ea26-49f9-b0e9-8710294f3914\") " Jan 23 18:22:59 crc kubenswrapper[4606]: I0123 18:22:59.784390 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsr7d\" (UniqueName: \"kubernetes.io/projected/2ca1e362-ea26-49f9-b0e9-8710294f3914-kube-api-access-fsr7d\") pod \"2ca1e362-ea26-49f9-b0e9-8710294f3914\" (UID: \"2ca1e362-ea26-49f9-b0e9-8710294f3914\") " Jan 23 18:22:59 crc kubenswrapper[4606]: I0123 18:22:59.784505 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca1e362-ea26-49f9-b0e9-8710294f3914-catalog-content\") pod \"2ca1e362-ea26-49f9-b0e9-8710294f3914\" (UID: \"2ca1e362-ea26-49f9-b0e9-8710294f3914\") " Jan 23 18:22:59 crc kubenswrapper[4606]: I0123 18:22:59.785380 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ca1e362-ea26-49f9-b0e9-8710294f3914-utilities" (OuterVolumeSpecName: "utilities") pod "2ca1e362-ea26-49f9-b0e9-8710294f3914" (UID: "2ca1e362-ea26-49f9-b0e9-8710294f3914"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:22:59 crc kubenswrapper[4606]: I0123 18:22:59.790956 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ca1e362-ea26-49f9-b0e9-8710294f3914-kube-api-access-fsr7d" (OuterVolumeSpecName: "kube-api-access-fsr7d") pod "2ca1e362-ea26-49f9-b0e9-8710294f3914" (UID: "2ca1e362-ea26-49f9-b0e9-8710294f3914"). InnerVolumeSpecName "kube-api-access-fsr7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:22:59 crc kubenswrapper[4606]: I0123 18:22:59.833983 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ca1e362-ea26-49f9-b0e9-8710294f3914-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ca1e362-ea26-49f9-b0e9-8710294f3914" (UID: "2ca1e362-ea26-49f9-b0e9-8710294f3914"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:22:59 crc kubenswrapper[4606]: I0123 18:22:59.887813 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca1e362-ea26-49f9-b0e9-8710294f3914-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:59 crc kubenswrapper[4606]: I0123 18:22:59.888474 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsr7d\" (UniqueName: \"kubernetes.io/projected/2ca1e362-ea26-49f9-b0e9-8710294f3914-kube-api-access-fsr7d\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:59 crc kubenswrapper[4606]: I0123 18:22:59.888573 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca1e362-ea26-49f9-b0e9-8710294f3914-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:00 crc kubenswrapper[4606]: I0123 18:23:00.168674 4606 generic.go:334] "Generic (PLEG): container finished" podID="2ca1e362-ea26-49f9-b0e9-8710294f3914" containerID="50b5538edc0cb4b7c4bb07b549fb9f3335b376682899f85d9637cc11779d8a4f" exitCode=0 Jan 23 18:23:00 crc kubenswrapper[4606]: I0123 18:23:00.168966 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drdjg" event={"ID":"2ca1e362-ea26-49f9-b0e9-8710294f3914","Type":"ContainerDied","Data":"50b5538edc0cb4b7c4bb07b549fb9f3335b376682899f85d9637cc11779d8a4f"} Jan 23 18:23:00 crc kubenswrapper[4606]: I0123 18:23:00.169061 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drdjg" event={"ID":"2ca1e362-ea26-49f9-b0e9-8710294f3914","Type":"ContainerDied","Data":"a98a91e802db9715913372a7fedacc7be9b71433caf47e2d3b2ca5ff846b2042"} Jan 23 18:23:00 crc kubenswrapper[4606]: I0123 18:23:00.169142 4606 scope.go:117] "RemoveContainer" containerID="50b5538edc0cb4b7c4bb07b549fb9f3335b376682899f85d9637cc11779d8a4f" Jan 23 18:23:00 crc kubenswrapper[4606]: I0123 18:23:00.169411 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-drdjg" Jan 23 18:23:00 crc kubenswrapper[4606]: I0123 18:23:00.192544 4606 scope.go:117] "RemoveContainer" containerID="b5f043132ccf626f9af5c9578d397b453284ce0581baa67793b32c284c25a9f6" Jan 23 18:23:00 crc kubenswrapper[4606]: I0123 18:23:00.227694 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-drdjg"] Jan 23 18:23:00 crc kubenswrapper[4606]: I0123 18:23:00.237919 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-drdjg"] Jan 23 18:23:00 crc kubenswrapper[4606]: I0123 18:23:00.249400 4606 scope.go:117] "RemoveContainer" containerID="39fc5dcf583c66f9888deb47dfe8c833b43d93cc3512ffaea3fac5e74e148ee1" Jan 23 18:23:00 crc kubenswrapper[4606]: I0123 18:23:00.285133 4606 scope.go:117] "RemoveContainer" containerID="50b5538edc0cb4b7c4bb07b549fb9f3335b376682899f85d9637cc11779d8a4f" Jan 23 18:23:00 crc kubenswrapper[4606]: E0123 18:23:00.285725 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50b5538edc0cb4b7c4bb07b549fb9f3335b376682899f85d9637cc11779d8a4f\": container with ID starting with 50b5538edc0cb4b7c4bb07b549fb9f3335b376682899f85d9637cc11779d8a4f not found: ID does not exist" containerID="50b5538edc0cb4b7c4bb07b549fb9f3335b376682899f85d9637cc11779d8a4f" Jan 23 18:23:00 crc kubenswrapper[4606]: I0123 18:23:00.285755 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50b5538edc0cb4b7c4bb07b549fb9f3335b376682899f85d9637cc11779d8a4f"} err="failed to get container status \"50b5538edc0cb4b7c4bb07b549fb9f3335b376682899f85d9637cc11779d8a4f\": rpc error: code = NotFound desc = could not find container \"50b5538edc0cb4b7c4bb07b549fb9f3335b376682899f85d9637cc11779d8a4f\": container with ID starting with 50b5538edc0cb4b7c4bb07b549fb9f3335b376682899f85d9637cc11779d8a4f not found: ID does not exist" Jan 23 18:23:00 crc kubenswrapper[4606]: I0123 18:23:00.285773 4606 scope.go:117] "RemoveContainer" containerID="b5f043132ccf626f9af5c9578d397b453284ce0581baa67793b32c284c25a9f6" Jan 23 18:23:00 crc kubenswrapper[4606]: E0123 18:23:00.286088 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5f043132ccf626f9af5c9578d397b453284ce0581baa67793b32c284c25a9f6\": container with ID starting with b5f043132ccf626f9af5c9578d397b453284ce0581baa67793b32c284c25a9f6 not found: ID does not exist" containerID="b5f043132ccf626f9af5c9578d397b453284ce0581baa67793b32c284c25a9f6" Jan 23 18:23:00 crc kubenswrapper[4606]: I0123 18:23:00.286110 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5f043132ccf626f9af5c9578d397b453284ce0581baa67793b32c284c25a9f6"} err="failed to get container status \"b5f043132ccf626f9af5c9578d397b453284ce0581baa67793b32c284c25a9f6\": rpc error: code = NotFound desc = could not find container \"b5f043132ccf626f9af5c9578d397b453284ce0581baa67793b32c284c25a9f6\": container with ID starting with b5f043132ccf626f9af5c9578d397b453284ce0581baa67793b32c284c25a9f6 not found: ID does not exist" Jan 23 18:23:00 crc kubenswrapper[4606]: I0123 18:23:00.286148 4606 scope.go:117] "RemoveContainer" containerID="39fc5dcf583c66f9888deb47dfe8c833b43d93cc3512ffaea3fac5e74e148ee1" Jan 23 18:23:00 crc kubenswrapper[4606]: E0123 18:23:00.286509 4606 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"39fc5dcf583c66f9888deb47dfe8c833b43d93cc3512ffaea3fac5e74e148ee1\": container with ID starting with 39fc5dcf583c66f9888deb47dfe8c833b43d93cc3512ffaea3fac5e74e148ee1 not found: ID does not exist" containerID="39fc5dcf583c66f9888deb47dfe8c833b43d93cc3512ffaea3fac5e74e148ee1" Jan 23 18:23:00 crc kubenswrapper[4606]: I0123 18:23:00.286574 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39fc5dcf583c66f9888deb47dfe8c833b43d93cc3512ffaea3fac5e74e148ee1"} err="failed to get container status \"39fc5dcf583c66f9888deb47dfe8c833b43d93cc3512ffaea3fac5e74e148ee1\": rpc error: code = NotFound desc = could not find container \"39fc5dcf583c66f9888deb47dfe8c833b43d93cc3512ffaea3fac5e74e148ee1\": container with ID starting with 39fc5dcf583c66f9888deb47dfe8c833b43d93cc3512ffaea3fac5e74e148ee1 not found: ID does not exist" Jan 23 18:23:00 crc kubenswrapper[4606]: I0123 18:23:00.366026 4606 scope.go:117] "RemoveContainer" containerID="38860bb3d401e4b01ac5e7a7fc44cace1b9caec38e804cf212d320024f2fe97c" Jan 23 18:23:00 crc kubenswrapper[4606]: E0123 18:23:00.366564 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 18:23:00 crc kubenswrapper[4606]: I0123 18:23:00.404090 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ca1e362-ea26-49f9-b0e9-8710294f3914" path="/var/lib/kubelet/pods/2ca1e362-ea26-49f9-b0e9-8710294f3914/volumes" Jan 23 18:23:12 crc kubenswrapper[4606]: I0123 18:23:12.365891 4606 scope.go:117] "RemoveContainer" containerID="38860bb3d401e4b01ac5e7a7fc44cace1b9caec38e804cf212d320024f2fe97c" Jan 23 18:23:12 crc kubenswrapper[4606]: E0123 18:23:12.366934 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 18:23:27 crc kubenswrapper[4606]: I0123 18:23:27.365693 4606 scope.go:117] "RemoveContainer" containerID="38860bb3d401e4b01ac5e7a7fc44cace1b9caec38e804cf212d320024f2fe97c" Jan 23 18:23:27 crc kubenswrapper[4606]: E0123 18:23:27.366497 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 18:23:28 crc kubenswrapper[4606]: I0123 18:23:28.425633 4606 generic.go:334] "Generic (PLEG): container finished" podID="65f3a5e4-2fdf-47c8-ac22-08dcf1141171" containerID="579e4fe566a2c65cfe1687017d12ac14bc35842c8774df7be6a8a7e26c86994f" exitCode=0 Jan 23 18:23:28 crc kubenswrapper[4606]: I0123 18:23:28.425874 4606 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-must-gather-svxgb/must-gather-7tl24" event={"ID":"65f3a5e4-2fdf-47c8-ac22-08dcf1141171","Type":"ContainerDied","Data":"579e4fe566a2c65cfe1687017d12ac14bc35842c8774df7be6a8a7e26c86994f"} Jan 23 18:23:28 crc kubenswrapper[4606]: I0123 18:23:28.426705 4606 scope.go:117] "RemoveContainer" containerID="579e4fe566a2c65cfe1687017d12ac14bc35842c8774df7be6a8a7e26c86994f" Jan 23 18:23:28 crc kubenswrapper[4606]: I0123 18:23:28.996540 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-svxgb_must-gather-7tl24_65f3a5e4-2fdf-47c8-ac22-08dcf1141171/gather/0.log" Jan 23 18:23:38 crc kubenswrapper[4606]: I0123 18:23:38.540761 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-svxgb/must-gather-7tl24"] Jan 23 18:23:38 crc kubenswrapper[4606]: I0123 18:23:38.542291 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-svxgb/must-gather-7tl24" podUID="65f3a5e4-2fdf-47c8-ac22-08dcf1141171" containerName="copy" containerID="cri-o://f831fead9a5dadc853a2db7d80ecbe2a74ba9b8855e5768ef8130b50c2082597" gracePeriod=2 Jan 23 18:23:38 crc kubenswrapper[4606]: I0123 18:23:38.562969 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-svxgb/must-gather-7tl24"] Jan 23 18:23:38 crc kubenswrapper[4606]: I0123 18:23:38.576964 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lz7jh"] Jan 23 18:23:38 crc kubenswrapper[4606]: E0123 18:23:38.577343 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ca1e362-ea26-49f9-b0e9-8710294f3914" containerName="registry-server" Jan 23 18:23:38 crc kubenswrapper[4606]: I0123 18:23:38.577359 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ca1e362-ea26-49f9-b0e9-8710294f3914" containerName="registry-server" Jan 23 18:23:38 crc kubenswrapper[4606]: E0123 18:23:38.577378 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65f3a5e4-2fdf-47c8-ac22-08dcf1141171" containerName="copy" Jan 23 18:23:38 crc kubenswrapper[4606]: I0123 18:23:38.577385 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="65f3a5e4-2fdf-47c8-ac22-08dcf1141171" containerName="copy" Jan 23 18:23:38 crc kubenswrapper[4606]: E0123 18:23:38.577407 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ca1e362-ea26-49f9-b0e9-8710294f3914" containerName="extract-utilities" Jan 23 18:23:38 crc kubenswrapper[4606]: I0123 18:23:38.577415 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ca1e362-ea26-49f9-b0e9-8710294f3914" containerName="extract-utilities" Jan 23 18:23:38 crc kubenswrapper[4606]: E0123 18:23:38.577430 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ca1e362-ea26-49f9-b0e9-8710294f3914" containerName="extract-content" Jan 23 18:23:38 crc kubenswrapper[4606]: I0123 18:23:38.577436 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ca1e362-ea26-49f9-b0e9-8710294f3914" containerName="extract-content" Jan 23 18:23:38 crc kubenswrapper[4606]: E0123 18:23:38.577442 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65f3a5e4-2fdf-47c8-ac22-08dcf1141171" containerName="gather" Jan 23 18:23:38 crc kubenswrapper[4606]: I0123 18:23:38.577448 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="65f3a5e4-2fdf-47c8-ac22-08dcf1141171" containerName="gather" Jan 23 18:23:38 crc kubenswrapper[4606]: I0123 18:23:38.578826 4606 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="65f3a5e4-2fdf-47c8-ac22-08dcf1141171" containerName="gather" Jan 23 18:23:38 crc kubenswrapper[4606]: I0123 18:23:38.578849 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="65f3a5e4-2fdf-47c8-ac22-08dcf1141171" containerName="copy" Jan 23 18:23:38 crc kubenswrapper[4606]: I0123 18:23:38.578875 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ca1e362-ea26-49f9-b0e9-8710294f3914" containerName="registry-server" Jan 23 18:23:38 crc kubenswrapper[4606]: I0123 18:23:38.580390 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lz7jh" Jan 23 18:23:38 crc kubenswrapper[4606]: I0123 18:23:38.610915 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lz7jh"] Jan 23 18:23:38 crc kubenswrapper[4606]: I0123 18:23:38.671871 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qdgj\" (UniqueName: \"kubernetes.io/projected/8114ca2e-618b-4e69-b51c-a3025769adf3-kube-api-access-9qdgj\") pod \"community-operators-lz7jh\" (UID: \"8114ca2e-618b-4e69-b51c-a3025769adf3\") " pod="openshift-marketplace/community-operators-lz7jh" Jan 23 18:23:38 crc kubenswrapper[4606]: I0123 18:23:38.672300 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8114ca2e-618b-4e69-b51c-a3025769adf3-utilities\") pod \"community-operators-lz7jh\" (UID: \"8114ca2e-618b-4e69-b51c-a3025769adf3\") " pod="openshift-marketplace/community-operators-lz7jh" Jan 23 18:23:38 crc kubenswrapper[4606]: I0123 18:23:38.672393 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8114ca2e-618b-4e69-b51c-a3025769adf3-catalog-content\") pod \"community-operators-lz7jh\" (UID: \"8114ca2e-618b-4e69-b51c-a3025769adf3\") " pod="openshift-marketplace/community-operators-lz7jh" Jan 23 18:23:38 crc kubenswrapper[4606]: I0123 18:23:38.774188 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8114ca2e-618b-4e69-b51c-a3025769adf3-catalog-content\") pod \"community-operators-lz7jh\" (UID: \"8114ca2e-618b-4e69-b51c-a3025769adf3\") " pod="openshift-marketplace/community-operators-lz7jh" Jan 23 18:23:38 crc kubenswrapper[4606]: I0123 18:23:38.774314 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qdgj\" (UniqueName: \"kubernetes.io/projected/8114ca2e-618b-4e69-b51c-a3025769adf3-kube-api-access-9qdgj\") pod \"community-operators-lz7jh\" (UID: \"8114ca2e-618b-4e69-b51c-a3025769adf3\") " pod="openshift-marketplace/community-operators-lz7jh" Jan 23 18:23:38 crc kubenswrapper[4606]: I0123 18:23:38.774357 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8114ca2e-618b-4e69-b51c-a3025769adf3-utilities\") pod \"community-operators-lz7jh\" (UID: \"8114ca2e-618b-4e69-b51c-a3025769adf3\") " pod="openshift-marketplace/community-operators-lz7jh" Jan 23 18:23:38 crc kubenswrapper[4606]: I0123 18:23:38.774922 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8114ca2e-618b-4e69-b51c-a3025769adf3-utilities\") pod 
\"community-operators-lz7jh\" (UID: \"8114ca2e-618b-4e69-b51c-a3025769adf3\") " pod="openshift-marketplace/community-operators-lz7jh" Jan 23 18:23:38 crc kubenswrapper[4606]: I0123 18:23:38.775138 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8114ca2e-618b-4e69-b51c-a3025769adf3-catalog-content\") pod \"community-operators-lz7jh\" (UID: \"8114ca2e-618b-4e69-b51c-a3025769adf3\") " pod="openshift-marketplace/community-operators-lz7jh" Jan 23 18:23:38 crc kubenswrapper[4606]: I0123 18:23:38.864876 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qdgj\" (UniqueName: \"kubernetes.io/projected/8114ca2e-618b-4e69-b51c-a3025769adf3-kube-api-access-9qdgj\") pod \"community-operators-lz7jh\" (UID: \"8114ca2e-618b-4e69-b51c-a3025769adf3\") " pod="openshift-marketplace/community-operators-lz7jh" Jan 23 18:23:38 crc kubenswrapper[4606]: I0123 18:23:38.950122 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lz7jh" Jan 23 18:23:39 crc kubenswrapper[4606]: I0123 18:23:39.108397 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-svxgb_must-gather-7tl24_65f3a5e4-2fdf-47c8-ac22-08dcf1141171/copy/0.log" Jan 23 18:23:39 crc kubenswrapper[4606]: I0123 18:23:39.112282 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-svxgb/must-gather-7tl24" Jan 23 18:23:39 crc kubenswrapper[4606]: I0123 18:23:39.182294 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/65f3a5e4-2fdf-47c8-ac22-08dcf1141171-must-gather-output\") pod \"65f3a5e4-2fdf-47c8-ac22-08dcf1141171\" (UID: \"65f3a5e4-2fdf-47c8-ac22-08dcf1141171\") " Jan 23 18:23:39 crc kubenswrapper[4606]: I0123 18:23:39.182476 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dngtx\" (UniqueName: \"kubernetes.io/projected/65f3a5e4-2fdf-47c8-ac22-08dcf1141171-kube-api-access-dngtx\") pod \"65f3a5e4-2fdf-47c8-ac22-08dcf1141171\" (UID: \"65f3a5e4-2fdf-47c8-ac22-08dcf1141171\") " Jan 23 18:23:39 crc kubenswrapper[4606]: I0123 18:23:39.190338 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65f3a5e4-2fdf-47c8-ac22-08dcf1141171-kube-api-access-dngtx" (OuterVolumeSpecName: "kube-api-access-dngtx") pod "65f3a5e4-2fdf-47c8-ac22-08dcf1141171" (UID: "65f3a5e4-2fdf-47c8-ac22-08dcf1141171"). InnerVolumeSpecName "kube-api-access-dngtx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:23:39 crc kubenswrapper[4606]: I0123 18:23:39.287072 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dngtx\" (UniqueName: \"kubernetes.io/projected/65f3a5e4-2fdf-47c8-ac22-08dcf1141171-kube-api-access-dngtx\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:39 crc kubenswrapper[4606]: I0123 18:23:39.483882 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65f3a5e4-2fdf-47c8-ac22-08dcf1141171-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "65f3a5e4-2fdf-47c8-ac22-08dcf1141171" (UID: "65f3a5e4-2fdf-47c8-ac22-08dcf1141171"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:23:39 crc kubenswrapper[4606]: I0123 18:23:39.491892 4606 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/65f3a5e4-2fdf-47c8-ac22-08dcf1141171-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:39 crc kubenswrapper[4606]: I0123 18:23:39.523624 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lz7jh"] Jan 23 18:23:39 crc kubenswrapper[4606]: W0123 18:23:39.526858 4606 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8114ca2e_618b_4e69_b51c_a3025769adf3.slice/crio-573a2eaacdb33b41b4cabec220998db6667b06cf3152185317ca3c68ea4de82b WatchSource:0}: Error finding container 573a2eaacdb33b41b4cabec220998db6667b06cf3152185317ca3c68ea4de82b: Status 404 returned error can't find the container with id 573a2eaacdb33b41b4cabec220998db6667b06cf3152185317ca3c68ea4de82b Jan 23 18:23:39 crc kubenswrapper[4606]: I0123 18:23:39.557198 4606 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-svxgb_must-gather-7tl24_65f3a5e4-2fdf-47c8-ac22-08dcf1141171/copy/0.log" Jan 23 18:23:39 crc kubenswrapper[4606]: I0123 18:23:39.557561 4606 generic.go:334] "Generic (PLEG): container finished" podID="65f3a5e4-2fdf-47c8-ac22-08dcf1141171" containerID="f831fead9a5dadc853a2db7d80ecbe2a74ba9b8855e5768ef8130b50c2082597" exitCode=143 Jan 23 18:23:39 crc kubenswrapper[4606]: I0123 18:23:39.557642 4606 scope.go:117] "RemoveContainer" containerID="f831fead9a5dadc853a2db7d80ecbe2a74ba9b8855e5768ef8130b50c2082597" Jan 23 18:23:39 crc kubenswrapper[4606]: I0123 18:23:39.557637 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-svxgb/must-gather-7tl24" Jan 23 18:23:39 crc kubenswrapper[4606]: I0123 18:23:39.561346 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lz7jh" event={"ID":"8114ca2e-618b-4e69-b51c-a3025769adf3","Type":"ContainerStarted","Data":"573a2eaacdb33b41b4cabec220998db6667b06cf3152185317ca3c68ea4de82b"} Jan 23 18:23:39 crc kubenswrapper[4606]: I0123 18:23:39.619456 4606 scope.go:117] "RemoveContainer" containerID="579e4fe566a2c65cfe1687017d12ac14bc35842c8774df7be6a8a7e26c86994f" Jan 23 18:23:39 crc kubenswrapper[4606]: I0123 18:23:39.664721 4606 scope.go:117] "RemoveContainer" containerID="f831fead9a5dadc853a2db7d80ecbe2a74ba9b8855e5768ef8130b50c2082597" Jan 23 18:23:39 crc kubenswrapper[4606]: E0123 18:23:39.666435 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f831fead9a5dadc853a2db7d80ecbe2a74ba9b8855e5768ef8130b50c2082597\": container with ID starting with f831fead9a5dadc853a2db7d80ecbe2a74ba9b8855e5768ef8130b50c2082597 not found: ID does not exist" containerID="f831fead9a5dadc853a2db7d80ecbe2a74ba9b8855e5768ef8130b50c2082597" Jan 23 18:23:39 crc kubenswrapper[4606]: I0123 18:23:39.666492 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f831fead9a5dadc853a2db7d80ecbe2a74ba9b8855e5768ef8130b50c2082597"} err="failed to get container status \"f831fead9a5dadc853a2db7d80ecbe2a74ba9b8855e5768ef8130b50c2082597\": rpc error: code = NotFound desc = could not find container \"f831fead9a5dadc853a2db7d80ecbe2a74ba9b8855e5768ef8130b50c2082597\": container with ID starting with f831fead9a5dadc853a2db7d80ecbe2a74ba9b8855e5768ef8130b50c2082597 not found: ID does not exist" Jan 23 18:23:39 crc kubenswrapper[4606]: I0123 18:23:39.666531 4606 scope.go:117] "RemoveContainer" containerID="579e4fe566a2c65cfe1687017d12ac14bc35842c8774df7be6a8a7e26c86994f" Jan 23 18:23:39 crc kubenswrapper[4606]: E0123 18:23:39.682113 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"579e4fe566a2c65cfe1687017d12ac14bc35842c8774df7be6a8a7e26c86994f\": container with ID starting with 579e4fe566a2c65cfe1687017d12ac14bc35842c8774df7be6a8a7e26c86994f not found: ID does not exist" containerID="579e4fe566a2c65cfe1687017d12ac14bc35842c8774df7be6a8a7e26c86994f" Jan 23 18:23:39 crc kubenswrapper[4606]: I0123 18:23:39.682160 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"579e4fe566a2c65cfe1687017d12ac14bc35842c8774df7be6a8a7e26c86994f"} err="failed to get container status \"579e4fe566a2c65cfe1687017d12ac14bc35842c8774df7be6a8a7e26c86994f\": rpc error: code = NotFound desc = could not find container \"579e4fe566a2c65cfe1687017d12ac14bc35842c8774df7be6a8a7e26c86994f\": container with ID starting with 579e4fe566a2c65cfe1687017d12ac14bc35842c8774df7be6a8a7e26c86994f not found: ID does not exist" Jan 23 18:23:40 crc kubenswrapper[4606]: I0123 18:23:40.368133 4606 scope.go:117] "RemoveContainer" containerID="38860bb3d401e4b01ac5e7a7fc44cace1b9caec38e804cf212d320024f2fe97c" Jan 23 18:23:40 crc kubenswrapper[4606]: E0123 18:23:40.368729 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 18:23:40 crc kubenswrapper[4606]: I0123 18:23:40.376492 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65f3a5e4-2fdf-47c8-ac22-08dcf1141171" path="/var/lib/kubelet/pods/65f3a5e4-2fdf-47c8-ac22-08dcf1141171/volumes" Jan 23 18:23:40 crc kubenswrapper[4606]: I0123 18:23:40.574818 4606 generic.go:334] "Generic (PLEG): container finished" podID="8114ca2e-618b-4e69-b51c-a3025769adf3" containerID="5b986ae2e896ccfb0e8772e942726adef19bf95f649ebc9e88c26dbaf9b6de7b" exitCode=0 Jan 23 18:23:40 crc kubenswrapper[4606]: I0123 18:23:40.574870 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lz7jh" event={"ID":"8114ca2e-618b-4e69-b51c-a3025769adf3","Type":"ContainerDied","Data":"5b986ae2e896ccfb0e8772e942726adef19bf95f649ebc9e88c26dbaf9b6de7b"} Jan 23 18:23:41 crc kubenswrapper[4606]: I0123 18:23:41.585995 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lz7jh" event={"ID":"8114ca2e-618b-4e69-b51c-a3025769adf3","Type":"ContainerStarted","Data":"f7533b38140c79814f61151213fa2e00914b3ee5396682635bcd2259aa08330d"} Jan 23 18:23:42 crc kubenswrapper[4606]: I0123 18:23:42.594574 4606 generic.go:334] "Generic (PLEG): container finished" podID="8114ca2e-618b-4e69-b51c-a3025769adf3" containerID="f7533b38140c79814f61151213fa2e00914b3ee5396682635bcd2259aa08330d" exitCode=0 Jan 23 18:23:42 crc kubenswrapper[4606]: I0123 18:23:42.594633 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lz7jh" event={"ID":"8114ca2e-618b-4e69-b51c-a3025769adf3","Type":"ContainerDied","Data":"f7533b38140c79814f61151213fa2e00914b3ee5396682635bcd2259aa08330d"} Jan 23 18:23:43 crc kubenswrapper[4606]: I0123 18:23:43.604455 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lz7jh" event={"ID":"8114ca2e-618b-4e69-b51c-a3025769adf3","Type":"ContainerStarted","Data":"0d0b63c5b11bb50aaba508b0f9daa853cfa74838ea17bc3089946747c1974e90"} Jan 23 18:23:43 crc kubenswrapper[4606]: I0123 18:23:43.628796 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lz7jh" podStartSLOduration=3.1323829930000002 podStartE2EDuration="5.628777323s" podCreationTimestamp="2026-01-23 18:23:38 +0000 UTC" firstStartedPulling="2026-01-23 18:23:40.576894472 +0000 UTC m=+5454.577086878" lastFinishedPulling="2026-01-23 18:23:43.073288792 +0000 UTC m=+5457.073481208" observedRunningTime="2026-01-23 18:23:43.622885509 +0000 UTC m=+5457.623077915" watchObservedRunningTime="2026-01-23 18:23:43.628777323 +0000 UTC m=+5457.628969719" Jan 23 18:23:48 crc kubenswrapper[4606]: I0123 18:23:48.954755 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lz7jh" Jan 23 18:23:48 crc kubenswrapper[4606]: I0123 18:23:48.955366 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lz7jh" Jan 23 18:23:49 crc kubenswrapper[4606]: I0123 18:23:49.051655 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lz7jh" Jan 23 18:23:49 crc kubenswrapper[4606]: I0123 18:23:49.719631 4606 
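The probe transitions just above show the ordering: the startup probe reports "unhealthy" then "started", and only then does the readiness result ("" then "ready") take effect. A toy Go model of that gating, assuming (as the Kubernetes probe documentation describes) that other probes are not evaluated until the startup probe succeeds; this is not kubelet's prober code.

package main

import "fmt"

type podProbes struct {
	startupPassed bool // set once the startup probe reports "started"
}

// readiness mirrors the log's behavior: while the startup probe is still
// failing, readiness is withheld (logged as status=""); afterwards the
// actual probe result decides.
func (p *podProbes) readiness(probeOK bool) string {
	if !p.startupPassed {
		return ""
	}
	if probeOK {
		return "ready"
	}
	return "not ready"
}

func main() {
	p := &podProbes{}
	fmt.Printf("readiness=%q\n", p.readiness(true)) // "" — startup not yet passed
	p.startupPassed = true                          // startup probe flips to "started"
	fmt.Printf("readiness=%q\n", p.readiness(true)) // "ready"
}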
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lz7jh" Jan 23 18:23:49 crc kubenswrapper[4606]: I0123 18:23:49.784046 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lz7jh"] Jan 23 18:23:50 crc kubenswrapper[4606]: I0123 18:23:50.199095 4606 scope.go:117] "RemoveContainer" containerID="3c396706ce880b5d07ee835929ec5618f80f78eb936eabf51e8804b9d8303b7f" Jan 23 18:23:51 crc kubenswrapper[4606]: I0123 18:23:51.699564 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lz7jh" podUID="8114ca2e-618b-4e69-b51c-a3025769adf3" containerName="registry-server" containerID="cri-o://0d0b63c5b11bb50aaba508b0f9daa853cfa74838ea17bc3089946747c1974e90" gracePeriod=2 Jan 23 18:23:52 crc kubenswrapper[4606]: I0123 18:23:52.713604 4606 generic.go:334] "Generic (PLEG): container finished" podID="8114ca2e-618b-4e69-b51c-a3025769adf3" containerID="0d0b63c5b11bb50aaba508b0f9daa853cfa74838ea17bc3089946747c1974e90" exitCode=0 Jan 23 18:23:52 crc kubenswrapper[4606]: I0123 18:23:52.713816 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lz7jh" event={"ID":"8114ca2e-618b-4e69-b51c-a3025769adf3","Type":"ContainerDied","Data":"0d0b63c5b11bb50aaba508b0f9daa853cfa74838ea17bc3089946747c1974e90"} Jan 23 18:23:53 crc kubenswrapper[4606]: I0123 18:23:53.345104 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lz7jh" Jan 23 18:23:53 crc kubenswrapper[4606]: I0123 18:23:53.366523 4606 scope.go:117] "RemoveContainer" containerID="38860bb3d401e4b01ac5e7a7fc44cace1b9caec38e804cf212d320024f2fe97c" Jan 23 18:23:53 crc kubenswrapper[4606]: E0123 18:23:53.367142 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 18:23:53 crc kubenswrapper[4606]: I0123 18:23:53.382947 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8114ca2e-618b-4e69-b51c-a3025769adf3-utilities\") pod \"8114ca2e-618b-4e69-b51c-a3025769adf3\" (UID: \"8114ca2e-618b-4e69-b51c-a3025769adf3\") " Jan 23 18:23:53 crc kubenswrapper[4606]: I0123 18:23:53.383207 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8114ca2e-618b-4e69-b51c-a3025769adf3-catalog-content\") pod \"8114ca2e-618b-4e69-b51c-a3025769adf3\" (UID: \"8114ca2e-618b-4e69-b51c-a3025769adf3\") " Jan 23 18:23:53 crc kubenswrapper[4606]: I0123 18:23:53.383269 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qdgj\" (UniqueName: \"kubernetes.io/projected/8114ca2e-618b-4e69-b51c-a3025769adf3-kube-api-access-9qdgj\") pod \"8114ca2e-618b-4e69-b51c-a3025769adf3\" (UID: \"8114ca2e-618b-4e69-b51c-a3025769adf3\") " Jan 23 18:23:53 crc kubenswrapper[4606]: I0123 18:23:53.383767 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/8114ca2e-618b-4e69-b51c-a3025769adf3-utilities" (OuterVolumeSpecName: "utilities") pod "8114ca2e-618b-4e69-b51c-a3025769adf3" (UID: "8114ca2e-618b-4e69-b51c-a3025769adf3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:23:53 crc kubenswrapper[4606]: I0123 18:23:53.391177 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8114ca2e-618b-4e69-b51c-a3025769adf3-kube-api-access-9qdgj" (OuterVolumeSpecName: "kube-api-access-9qdgj") pod "8114ca2e-618b-4e69-b51c-a3025769adf3" (UID: "8114ca2e-618b-4e69-b51c-a3025769adf3"). InnerVolumeSpecName "kube-api-access-9qdgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:23:53 crc kubenswrapper[4606]: I0123 18:23:53.460026 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8114ca2e-618b-4e69-b51c-a3025769adf3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8114ca2e-618b-4e69-b51c-a3025769adf3" (UID: "8114ca2e-618b-4e69-b51c-a3025769adf3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:23:53 crc kubenswrapper[4606]: I0123 18:23:53.485569 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8114ca2e-618b-4e69-b51c-a3025769adf3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:53 crc kubenswrapper[4606]: I0123 18:23:53.485631 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qdgj\" (UniqueName: \"kubernetes.io/projected/8114ca2e-618b-4e69-b51c-a3025769adf3-kube-api-access-9qdgj\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:53 crc kubenswrapper[4606]: I0123 18:23:53.485642 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8114ca2e-618b-4e69-b51c-a3025769adf3-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:23:53 crc kubenswrapper[4606]: I0123 18:23:53.726684 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lz7jh" event={"ID":"8114ca2e-618b-4e69-b51c-a3025769adf3","Type":"ContainerDied","Data":"573a2eaacdb33b41b4cabec220998db6667b06cf3152185317ca3c68ea4de82b"} Jan 23 18:23:53 crc kubenswrapper[4606]: I0123 18:23:53.728707 4606 scope.go:117] "RemoveContainer" containerID="0d0b63c5b11bb50aaba508b0f9daa853cfa74838ea17bc3089946747c1974e90" Jan 23 18:23:53 crc kubenswrapper[4606]: I0123 18:23:53.728621 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lz7jh" Jan 23 18:23:53 crc kubenswrapper[4606]: I0123 18:23:53.757791 4606 scope.go:117] "RemoveContainer" containerID="f7533b38140c79814f61151213fa2e00914b3ee5396682635bcd2259aa08330d" Jan 23 18:23:53 crc kubenswrapper[4606]: I0123 18:23:53.780674 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lz7jh"] Jan 23 18:23:53 crc kubenswrapper[4606]: I0123 18:23:53.788235 4606 scope.go:117] "RemoveContainer" containerID="5b986ae2e896ccfb0e8772e942726adef19bf95f649ebc9e88c26dbaf9b6de7b" Jan 23 18:23:53 crc kubenswrapper[4606]: I0123 18:23:53.789079 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lz7jh"] Jan 23 18:23:54 crc kubenswrapper[4606]: I0123 18:23:54.382438 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8114ca2e-618b-4e69-b51c-a3025769adf3" path="/var/lib/kubelet/pods/8114ca2e-618b-4e69-b51c-a3025769adf3/volumes" Jan 23 18:24:05 crc kubenswrapper[4606]: I0123 18:24:05.365822 4606 scope.go:117] "RemoveContainer" containerID="38860bb3d401e4b01ac5e7a7fc44cace1b9caec38e804cf212d320024f2fe97c" Jan 23 18:24:05 crc kubenswrapper[4606]: E0123 18:24:05.367725 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 18:24:20 crc kubenswrapper[4606]: I0123 18:24:20.365790 4606 scope.go:117] "RemoveContainer" containerID="38860bb3d401e4b01ac5e7a7fc44cace1b9caec38e804cf212d320024f2fe97c" Jan 23 18:24:20 crc kubenswrapper[4606]: E0123 18:24:20.366643 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 18:24:32 crc kubenswrapper[4606]: I0123 18:24:32.366257 4606 scope.go:117] "RemoveContainer" containerID="38860bb3d401e4b01ac5e7a7fc44cace1b9caec38e804cf212d320024f2fe97c" Jan 23 18:24:32 crc kubenswrapper[4606]: E0123 18:24:32.367172 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" Jan 23 18:24:46 crc kubenswrapper[4606]: I0123 18:24:46.372401 4606 scope.go:117] "RemoveContainer" containerID="38860bb3d401e4b01ac5e7a7fc44cace1b9caec38e804cf212d320024f2fe97c" Jan 23 18:24:46 crc kubenswrapper[4606]: E0123 18:24:46.373510 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 23 18:24:58 crc kubenswrapper[4606]: I0123 18:24:58.367412 4606 scope.go:117] "RemoveContainer" containerID="38860bb3d401e4b01ac5e7a7fc44cace1b9caec38e804cf212d320024f2fe97c"
Jan 23 18:24:58 crc kubenswrapper[4606]: E0123 18:24:58.368432 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9"
Jan 23 18:25:00 crc kubenswrapper[4606]: I0123 18:25:00.021167 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t76m2"]
Jan 23 18:25:00 crc kubenswrapper[4606]: E0123 18:25:00.021562 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8114ca2e-618b-4e69-b51c-a3025769adf3" containerName="registry-server"
Jan 23 18:25:00 crc kubenswrapper[4606]: I0123 18:25:00.021574 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="8114ca2e-618b-4e69-b51c-a3025769adf3" containerName="registry-server"
Jan 23 18:25:00 crc kubenswrapper[4606]: E0123 18:25:00.021623 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8114ca2e-618b-4e69-b51c-a3025769adf3" containerName="extract-content"
Jan 23 18:25:00 crc kubenswrapper[4606]: I0123 18:25:00.021629 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="8114ca2e-618b-4e69-b51c-a3025769adf3" containerName="extract-content"
Jan 23 18:25:00 crc kubenswrapper[4606]: E0123 18:25:00.021637 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8114ca2e-618b-4e69-b51c-a3025769adf3" containerName="extract-utilities"
Jan 23 18:25:00 crc kubenswrapper[4606]: I0123 18:25:00.021645 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="8114ca2e-618b-4e69-b51c-a3025769adf3" containerName="extract-utilities"
Jan 23 18:25:00 crc kubenswrapper[4606]: I0123 18:25:00.021817 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="8114ca2e-618b-4e69-b51c-a3025769adf3" containerName="registry-server"
Jan 23 18:25:00 crc kubenswrapper[4606]: I0123 18:25:00.023195 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t76m2"
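The repeated "back-off 5m0s" rejections above are the kubelet's container restart back-off at its ceiling: each sync attempt is refused until the delay expires. A sketch of that policy as a capped doubling delay; the 10s initial value and 5m cap match the kubelet's documented defaults, but the function itself is illustrative, not kubelet source.

    // backoff_sketch.go - capped exponential restart back-off behind the
    // "back-off 5m0s restarting failed container" messages.
    package main

    import (
    	"fmt"
    	"time"
    )

    // restartDelay returns the wait before restart attempt n (1-based),
    // doubling from 10s up to a 5m ceiling.
    func restartDelay(n int) time.Duration {
    	const (
    		initial  = 10 * time.Second
    		maxDelay = 5 * time.Minute
    	)
    	d := initial
    	for i := 1; i < n; i++ {
    		d *= 2
    		if d >= maxDelay {
    			return maxDelay
    		}
    	}
    	return d
    }

    func main() {
    	for n := 1; n <= 7; n++ {
    		fmt.Printf("crash %d -> back-off %v\n", n, restartDelay(n))
    	}
    	// crash 6 and later print "back-off 5m0s", the value in the log
    }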
Need to start a new one" pod="openshift-marketplace/redhat-operators-t76m2" Jan 23 18:25:00 crc kubenswrapper[4606]: I0123 18:25:00.038513 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t76m2"] Jan 23 18:25:00 crc kubenswrapper[4606]: I0123 18:25:00.130513 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9778600-0f91-4d21-a6e8-7266b2d33e26-utilities\") pod \"redhat-operators-t76m2\" (UID: \"a9778600-0f91-4d21-a6e8-7266b2d33e26\") " pod="openshift-marketplace/redhat-operators-t76m2" Jan 23 18:25:00 crc kubenswrapper[4606]: I0123 18:25:00.130930 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9778600-0f91-4d21-a6e8-7266b2d33e26-catalog-content\") pod \"redhat-operators-t76m2\" (UID: \"a9778600-0f91-4d21-a6e8-7266b2d33e26\") " pod="openshift-marketplace/redhat-operators-t76m2" Jan 23 18:25:00 crc kubenswrapper[4606]: I0123 18:25:00.131197 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tfzx\" (UniqueName: \"kubernetes.io/projected/a9778600-0f91-4d21-a6e8-7266b2d33e26-kube-api-access-6tfzx\") pod \"redhat-operators-t76m2\" (UID: \"a9778600-0f91-4d21-a6e8-7266b2d33e26\") " pod="openshift-marketplace/redhat-operators-t76m2" Jan 23 18:25:00 crc kubenswrapper[4606]: I0123 18:25:00.233206 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9778600-0f91-4d21-a6e8-7266b2d33e26-catalog-content\") pod \"redhat-operators-t76m2\" (UID: \"a9778600-0f91-4d21-a6e8-7266b2d33e26\") " pod="openshift-marketplace/redhat-operators-t76m2" Jan 23 18:25:00 crc kubenswrapper[4606]: I0123 18:25:00.233312 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tfzx\" (UniqueName: \"kubernetes.io/projected/a9778600-0f91-4d21-a6e8-7266b2d33e26-kube-api-access-6tfzx\") pod \"redhat-operators-t76m2\" (UID: \"a9778600-0f91-4d21-a6e8-7266b2d33e26\") " pod="openshift-marketplace/redhat-operators-t76m2" Jan 23 18:25:00 crc kubenswrapper[4606]: I0123 18:25:00.233397 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9778600-0f91-4d21-a6e8-7266b2d33e26-utilities\") pod \"redhat-operators-t76m2\" (UID: \"a9778600-0f91-4d21-a6e8-7266b2d33e26\") " pod="openshift-marketplace/redhat-operators-t76m2" Jan 23 18:25:00 crc kubenswrapper[4606]: I0123 18:25:00.233835 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9778600-0f91-4d21-a6e8-7266b2d33e26-catalog-content\") pod \"redhat-operators-t76m2\" (UID: \"a9778600-0f91-4d21-a6e8-7266b2d33e26\") " pod="openshift-marketplace/redhat-operators-t76m2" Jan 23 18:25:00 crc kubenswrapper[4606]: I0123 18:25:00.233846 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9778600-0f91-4d21-a6e8-7266b2d33e26-utilities\") pod \"redhat-operators-t76m2\" (UID: \"a9778600-0f91-4d21-a6e8-7266b2d33e26\") " pod="openshift-marketplace/redhat-operators-t76m2" Jan 23 18:25:00 crc kubenswrapper[4606]: I0123 18:25:00.253532 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6tfzx\" (UniqueName: \"kubernetes.io/projected/a9778600-0f91-4d21-a6e8-7266b2d33e26-kube-api-access-6tfzx\") pod \"redhat-operators-t76m2\" (UID: \"a9778600-0f91-4d21-a6e8-7266b2d33e26\") " pod="openshift-marketplace/redhat-operators-t76m2" Jan 23 18:25:00 crc kubenswrapper[4606]: I0123 18:25:00.358383 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t76m2" Jan 23 18:25:00 crc kubenswrapper[4606]: I0123 18:25:00.851646 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t76m2"] Jan 23 18:25:01 crc kubenswrapper[4606]: I0123 18:25:01.659041 4606 generic.go:334] "Generic (PLEG): container finished" podID="a9778600-0f91-4d21-a6e8-7266b2d33e26" containerID="66b345f2c6d63b2802746d0e738ad73215eabe91189ed159b8a91f4d05fa60ec" exitCode=0 Jan 23 18:25:01 crc kubenswrapper[4606]: I0123 18:25:01.659296 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t76m2" event={"ID":"a9778600-0f91-4d21-a6e8-7266b2d33e26","Type":"ContainerDied","Data":"66b345f2c6d63b2802746d0e738ad73215eabe91189ed159b8a91f4d05fa60ec"} Jan 23 18:25:01 crc kubenswrapper[4606]: I0123 18:25:01.659332 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t76m2" event={"ID":"a9778600-0f91-4d21-a6e8-7266b2d33e26","Type":"ContainerStarted","Data":"5b86f000001ae5e13e6390caf266157e4ba49d797f0bd6dd166aa88baa9a0b87"} Jan 23 18:25:02 crc kubenswrapper[4606]: I0123 18:25:02.672568 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t76m2" event={"ID":"a9778600-0f91-4d21-a6e8-7266b2d33e26","Type":"ContainerStarted","Data":"63f6d1d951548ea9e3c2682963d3ab22e96133a854e4c24632d9e9283fa8ae75"} Jan 23 18:25:06 crc kubenswrapper[4606]: I0123 18:25:06.708096 4606 generic.go:334] "Generic (PLEG): container finished" podID="a9778600-0f91-4d21-a6e8-7266b2d33e26" containerID="63f6d1d951548ea9e3c2682963d3ab22e96133a854e4c24632d9e9283fa8ae75" exitCode=0 Jan 23 18:25:06 crc kubenswrapper[4606]: I0123 18:25:06.708274 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t76m2" event={"ID":"a9778600-0f91-4d21-a6e8-7266b2d33e26","Type":"ContainerDied","Data":"63f6d1d951548ea9e3c2682963d3ab22e96133a854e4c24632d9e9283fa8ae75"} Jan 23 18:25:07 crc kubenswrapper[4606]: I0123 18:25:07.731196 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t76m2" event={"ID":"a9778600-0f91-4d21-a6e8-7266b2d33e26","Type":"ContainerStarted","Data":"eb6fd6fafab770b58c15504091e64c6cbdf11af5e3630a0ab78784219b59da3f"} Jan 23 18:25:07 crc kubenswrapper[4606]: I0123 18:25:07.758328 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t76m2" podStartSLOduration=3.240152033 podStartE2EDuration="8.75830582s" podCreationTimestamp="2026-01-23 18:24:59 +0000 UTC" firstStartedPulling="2026-01-23 18:25:01.661382248 +0000 UTC m=+5535.661574654" lastFinishedPulling="2026-01-23 18:25:07.179536035 +0000 UTC m=+5541.179728441" observedRunningTime="2026-01-23 18:25:07.746748296 +0000 UTC m=+5541.746940712" watchObservedRunningTime="2026-01-23 18:25:07.75830582 +0000 UTC m=+5541.758498226" Jan 23 18:25:10 crc kubenswrapper[4606]: I0123 18:25:10.358813 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t76m2" Jan 
Jan 23 18:25:10 crc kubenswrapper[4606]: I0123 18:25:10.359081 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t76m2"
Jan 23 18:25:11 crc kubenswrapper[4606]: I0123 18:25:11.416951 4606 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t76m2" podUID="a9778600-0f91-4d21-a6e8-7266b2d33e26" containerName="registry-server" probeResult="failure" output=<
Jan 23 18:25:11 crc kubenswrapper[4606]: timeout: failed to connect service ":50051" within 1s
Jan 23 18:25:11 crc kubenswrapper[4606]: >
Jan 23 18:25:12 crc kubenswrapper[4606]: I0123 18:25:12.366567 4606 scope.go:117] "RemoveContainer" containerID="38860bb3d401e4b01ac5e7a7fc44cace1b9caec38e804cf212d320024f2fe97c"
Jan 23 18:25:12 crc kubenswrapper[4606]: E0123 18:25:12.367035 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9"
Jan 23 18:25:20 crc kubenswrapper[4606]: I0123 18:25:20.405792 4606 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t76m2"
Jan 23 18:25:20 crc kubenswrapper[4606]: I0123 18:25:20.461830 4606 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t76m2"
Jan 23 18:25:20 crc kubenswrapper[4606]: I0123 18:25:20.646649 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t76m2"]
Jan 23 18:25:21 crc kubenswrapper[4606]: I0123 18:25:21.857753 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t76m2" podUID="a9778600-0f91-4d21-a6e8-7266b2d33e26" containerName="registry-server" containerID="cri-o://eb6fd6fafab770b58c15504091e64c6cbdf11af5e3630a0ab78784219b59da3f" gracePeriod=2
Jan 23 18:25:22 crc kubenswrapper[4606]: I0123 18:25:22.342889 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t76m2"
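The startup probe output above, `timeout: failed to connect service ":50051" within 1s`, comes from the gRPC health check the marketplace registry pods run against port 50051. A sketch of the connect-with-deadline step that timed out; the address and 1s deadline are taken from the log, everything else is an illustrative stand-in for the real probe binary.

    // probe_sketch.go - the TCP connect-with-deadline step of the probe.
    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:50051", 1*time.Second)
    	if err != nil {
    		fmt.Printf("timeout: failed to connect service %q within 1s: %v\n", ":50051", err)
    		os.Exit(1) // non-zero exit is what marks the probe attempt failed
    	}
    	conn.Close()
    	fmt.Println("ok")
    }

Note the sequence in the log: the probe fails at 18:25:11 while the registry-server is still loading its catalog, then the startup probe reports "started" at 18:25:20 once the port is serving.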
Need to start a new one" pod="openshift-marketplace/redhat-operators-t76m2" Jan 23 18:25:22 crc kubenswrapper[4606]: I0123 18:25:22.402615 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tfzx\" (UniqueName: \"kubernetes.io/projected/a9778600-0f91-4d21-a6e8-7266b2d33e26-kube-api-access-6tfzx\") pod \"a9778600-0f91-4d21-a6e8-7266b2d33e26\" (UID: \"a9778600-0f91-4d21-a6e8-7266b2d33e26\") " Jan 23 18:25:22 crc kubenswrapper[4606]: I0123 18:25:22.402746 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9778600-0f91-4d21-a6e8-7266b2d33e26-catalog-content\") pod \"a9778600-0f91-4d21-a6e8-7266b2d33e26\" (UID: \"a9778600-0f91-4d21-a6e8-7266b2d33e26\") " Jan 23 18:25:22 crc kubenswrapper[4606]: I0123 18:25:22.402805 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9778600-0f91-4d21-a6e8-7266b2d33e26-utilities\") pod \"a9778600-0f91-4d21-a6e8-7266b2d33e26\" (UID: \"a9778600-0f91-4d21-a6e8-7266b2d33e26\") " Jan 23 18:25:22 crc kubenswrapper[4606]: I0123 18:25:22.404477 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9778600-0f91-4d21-a6e8-7266b2d33e26-utilities" (OuterVolumeSpecName: "utilities") pod "a9778600-0f91-4d21-a6e8-7266b2d33e26" (UID: "a9778600-0f91-4d21-a6e8-7266b2d33e26"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:25:22 crc kubenswrapper[4606]: I0123 18:25:22.420806 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9778600-0f91-4d21-a6e8-7266b2d33e26-kube-api-access-6tfzx" (OuterVolumeSpecName: "kube-api-access-6tfzx") pod "a9778600-0f91-4d21-a6e8-7266b2d33e26" (UID: "a9778600-0f91-4d21-a6e8-7266b2d33e26"). InnerVolumeSpecName "kube-api-access-6tfzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:25:22 crc kubenswrapper[4606]: I0123 18:25:22.505972 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tfzx\" (UniqueName: \"kubernetes.io/projected/a9778600-0f91-4d21-a6e8-7266b2d33e26-kube-api-access-6tfzx\") on node \"crc\" DevicePath \"\"" Jan 23 18:25:22 crc kubenswrapper[4606]: I0123 18:25:22.506009 4606 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9778600-0f91-4d21-a6e8-7266b2d33e26-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:25:22 crc kubenswrapper[4606]: I0123 18:25:22.531533 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9778600-0f91-4d21-a6e8-7266b2d33e26-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a9778600-0f91-4d21-a6e8-7266b2d33e26" (UID: "a9778600-0f91-4d21-a6e8-7266b2d33e26"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:25:22 crc kubenswrapper[4606]: I0123 18:25:22.608072 4606 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9778600-0f91-4d21-a6e8-7266b2d33e26-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:25:22 crc kubenswrapper[4606]: I0123 18:25:22.868519 4606 generic.go:334] "Generic (PLEG): container finished" podID="a9778600-0f91-4d21-a6e8-7266b2d33e26" containerID="eb6fd6fafab770b58c15504091e64c6cbdf11af5e3630a0ab78784219b59da3f" exitCode=0 Jan 23 18:25:22 crc kubenswrapper[4606]: I0123 18:25:22.868553 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t76m2" event={"ID":"a9778600-0f91-4d21-a6e8-7266b2d33e26","Type":"ContainerDied","Data":"eb6fd6fafab770b58c15504091e64c6cbdf11af5e3630a0ab78784219b59da3f"} Jan 23 18:25:22 crc kubenswrapper[4606]: I0123 18:25:22.868657 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t76m2" event={"ID":"a9778600-0f91-4d21-a6e8-7266b2d33e26","Type":"ContainerDied","Data":"5b86f000001ae5e13e6390caf266157e4ba49d797f0bd6dd166aa88baa9a0b87"} Jan 23 18:25:22 crc kubenswrapper[4606]: I0123 18:25:22.868678 4606 scope.go:117] "RemoveContainer" containerID="eb6fd6fafab770b58c15504091e64c6cbdf11af5e3630a0ab78784219b59da3f" Jan 23 18:25:22 crc kubenswrapper[4606]: I0123 18:25:22.868694 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t76m2" Jan 23 18:25:22 crc kubenswrapper[4606]: I0123 18:25:22.918235 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t76m2"] Jan 23 18:25:22 crc kubenswrapper[4606]: I0123 18:25:22.927574 4606 scope.go:117] "RemoveContainer" containerID="63f6d1d951548ea9e3c2682963d3ab22e96133a854e4c24632d9e9283fa8ae75" Jan 23 18:25:22 crc kubenswrapper[4606]: I0123 18:25:22.929281 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t76m2"] Jan 23 18:25:22 crc kubenswrapper[4606]: I0123 18:25:22.953792 4606 scope.go:117] "RemoveContainer" containerID="66b345f2c6d63b2802746d0e738ad73215eabe91189ed159b8a91f4d05fa60ec" Jan 23 18:25:23 crc kubenswrapper[4606]: I0123 18:25:23.020237 4606 scope.go:117] "RemoveContainer" containerID="eb6fd6fafab770b58c15504091e64c6cbdf11af5e3630a0ab78784219b59da3f" Jan 23 18:25:23 crc kubenswrapper[4606]: E0123 18:25:23.020636 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb6fd6fafab770b58c15504091e64c6cbdf11af5e3630a0ab78784219b59da3f\": container with ID starting with eb6fd6fafab770b58c15504091e64c6cbdf11af5e3630a0ab78784219b59da3f not found: ID does not exist" containerID="eb6fd6fafab770b58c15504091e64c6cbdf11af5e3630a0ab78784219b59da3f" Jan 23 18:25:23 crc kubenswrapper[4606]: I0123 18:25:23.020666 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb6fd6fafab770b58c15504091e64c6cbdf11af5e3630a0ab78784219b59da3f"} err="failed to get container status \"eb6fd6fafab770b58c15504091e64c6cbdf11af5e3630a0ab78784219b59da3f\": rpc error: code = NotFound desc = could not find container \"eb6fd6fafab770b58c15504091e64c6cbdf11af5e3630a0ab78784219b59da3f\": container with ID starting with eb6fd6fafab770b58c15504091e64c6cbdf11af5e3630a0ab78784219b59da3f not found: ID does not exist" Jan 23 18:25:23 crc 
Jan 23 18:25:23 crc kubenswrapper[4606]: I0123 18:25:23.020684 4606 scope.go:117] "RemoveContainer" containerID="63f6d1d951548ea9e3c2682963d3ab22e96133a854e4c24632d9e9283fa8ae75"
Jan 23 18:25:23 crc kubenswrapper[4606]: E0123 18:25:23.021037 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63f6d1d951548ea9e3c2682963d3ab22e96133a854e4c24632d9e9283fa8ae75\": container with ID starting with 63f6d1d951548ea9e3c2682963d3ab22e96133a854e4c24632d9e9283fa8ae75 not found: ID does not exist" containerID="63f6d1d951548ea9e3c2682963d3ab22e96133a854e4c24632d9e9283fa8ae75"
Jan 23 18:25:23 crc kubenswrapper[4606]: I0123 18:25:23.021058 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63f6d1d951548ea9e3c2682963d3ab22e96133a854e4c24632d9e9283fa8ae75"} err="failed to get container status \"63f6d1d951548ea9e3c2682963d3ab22e96133a854e4c24632d9e9283fa8ae75\": rpc error: code = NotFound desc = could not find container \"63f6d1d951548ea9e3c2682963d3ab22e96133a854e4c24632d9e9283fa8ae75\": container with ID starting with 63f6d1d951548ea9e3c2682963d3ab22e96133a854e4c24632d9e9283fa8ae75 not found: ID does not exist"
Jan 23 18:25:23 crc kubenswrapper[4606]: I0123 18:25:23.021070 4606 scope.go:117] "RemoveContainer" containerID="66b345f2c6d63b2802746d0e738ad73215eabe91189ed159b8a91f4d05fa60ec"
Jan 23 18:25:23 crc kubenswrapper[4606]: E0123 18:25:23.021313 4606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66b345f2c6d63b2802746d0e738ad73215eabe91189ed159b8a91f4d05fa60ec\": container with ID starting with 66b345f2c6d63b2802746d0e738ad73215eabe91189ed159b8a91f4d05fa60ec not found: ID does not exist" containerID="66b345f2c6d63b2802746d0e738ad73215eabe91189ed159b8a91f4d05fa60ec"
Jan 23 18:25:23 crc kubenswrapper[4606]: I0123 18:25:23.021332 4606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66b345f2c6d63b2802746d0e738ad73215eabe91189ed159b8a91f4d05fa60ec"} err="failed to get container status \"66b345f2c6d63b2802746d0e738ad73215eabe91189ed159b8a91f4d05fa60ec\": rpc error: code = NotFound desc = could not find container \"66b345f2c6d63b2802746d0e738ad73215eabe91189ed159b8a91f4d05fa60ec\": container with ID starting with 66b345f2c6d63b2802746d0e738ad73215eabe91189ed159b8a91f4d05fa60ec not found: ID does not exist"
Jan 23 18:25:24 crc kubenswrapper[4606]: I0123 18:25:24.376542 4606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9778600-0f91-4d21-a6e8-7266b2d33e26" path="/var/lib/kubelet/pods/a9778600-0f91-4d21-a6e8-7266b2d33e26/volumes"
Jan 23 18:25:26 crc kubenswrapper[4606]: I0123 18:25:26.375085 4606 scope.go:117] "RemoveContainer" containerID="38860bb3d401e4b01ac5e7a7fc44cace1b9caec38e804cf212d320024f2fe97c"
Jan 23 18:25:26 crc kubenswrapper[4606]: E0123 18:25:26.375733 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9"
Jan 23 18:25:40 crc kubenswrapper[4606]: I0123 18:25:40.365807 4606 scope.go:117] "RemoveContainer" containerID="38860bb3d401e4b01ac5e7a7fc44cace1b9caec38e804cf212d320024f2fe97c"
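The NotFound churn above is harmless: RemoveContainer is retried for IDs whose containers were already deleted, and a missing container is treated as a successful deletion. A sketch of that idempotent-delete pattern over a gRPC status code; removeContainer here is a hypothetical stand-in for the CRI RPC, not the kubelet's runtime client.

    // cleanup_sketch.go - treating gRPC NotFound as success on delete.
    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // removeContainer is a placeholder that always reports NotFound, the
    // way the runtime answered for the already-removed containers above.
    func removeContainer(id string) error {
    	return status.Errorf(codes.NotFound, "could not find container %q", id)
    }

    func ensureGone(id string) error {
    	err := removeContainer(id)
    	if err == nil || status.Code(err) == codes.NotFound {
    		return nil // already gone: deletion is idempotent
    	}
    	return fmt.Errorf("remove %s: %w", id, err)
    }

    func main() {
    	if err := ensureGone("eb6fd6fafab7"); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("container absent")
    }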
Jan 23 18:25:40 crc kubenswrapper[4606]: E0123 18:25:40.367133 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9"
Jan 23 18:25:55 crc kubenswrapper[4606]: I0123 18:25:55.366750 4606 scope.go:117] "RemoveContainer" containerID="38860bb3d401e4b01ac5e7a7fc44cace1b9caec38e804cf212d320024f2fe97c"
Jan 23 18:25:55 crc kubenswrapper[4606]: E0123 18:25:55.367669 4606 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m9sb7_openshift-machine-config-operator(a8ea72e8-0104-491a-9ccc-49199db0d3d9)\"" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9"
Jan 23 18:26:10 crc kubenswrapper[4606]: I0123 18:26:10.365424 4606 scope.go:117] "RemoveContainer" containerID="38860bb3d401e4b01ac5e7a7fc44cace1b9caec38e804cf212d320024f2fe97c"
Jan 23 18:26:11 crc kubenswrapper[4606]: I0123 18:26:11.306081 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerStarted","Data":"ee1b22862f1fc787da5fcbf4844828892334c1c2a336af46b945d53bd4d936d6"}
Jan 23 18:28:29 crc kubenswrapper[4606]: I0123 18:28:29.934338 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 18:28:29 crc kubenswrapper[4606]: I0123 18:28:29.935138 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 18:28:59 crc kubenswrapper[4606]: I0123 18:28:59.934244 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 18:28:59 crc kubenswrapper[4606]: I0123 18:28:59.934828 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 18:29:29 crc kubenswrapper[4606]: I0123 18:29:29.934561 4606 patch_prober.go:28] interesting pod/machine-config-daemon-m9sb7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 18:29:29 crc kubenswrapper[4606]: I0123 18:29:29.935208 4606 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 18:29:29 crc kubenswrapper[4606]: I0123 18:29:29.935261 4606 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7"
Jan 23 18:29:29 crc kubenswrapper[4606]: I0123 18:29:29.936162 4606 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ee1b22862f1fc787da5fcbf4844828892334c1c2a336af46b945d53bd4d936d6"} pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 18:29:29 crc kubenswrapper[4606]: I0123 18:29:29.936222 4606 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" podUID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerName="machine-config-daemon" containerID="cri-o://ee1b22862f1fc787da5fcbf4844828892334c1c2a336af46b945d53bd4d936d6" gracePeriod=600
Jan 23 18:29:30 crc kubenswrapper[4606]: I0123 18:29:30.217015 4606 generic.go:334] "Generic (PLEG): container finished" podID="a8ea72e8-0104-491a-9ccc-49199db0d3d9" containerID="ee1b22862f1fc787da5fcbf4844828892334c1c2a336af46b945d53bd4d936d6" exitCode=0
Jan 23 18:29:30 crc kubenswrapper[4606]: I0123 18:29:30.217058 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerDied","Data":"ee1b22862f1fc787da5fcbf4844828892334c1c2a336af46b945d53bd4d936d6"}
Jan 23 18:29:30 crc kubenswrapper[4606]: I0123 18:29:30.217092 4606 scope.go:117] "RemoveContainer" containerID="38860bb3d401e4b01ac5e7a7fc44cace1b9caec38e804cf212d320024f2fe97c"
Jan 23 18:29:31 crc kubenswrapper[4606]: I0123 18:29:31.228640 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m9sb7" event={"ID":"a8ea72e8-0104-491a-9ccc-49199db0d3d9","Type":"ContainerStarted","Data":"f23eec3a25dff1656c43b58a23a624890924d13273765fef7ef8a2fd0a4ee32b"}
Jan 23 18:30:00 crc kubenswrapper[4606]: I0123 18:30:00.182557 4606 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486550-9k5wz"]
Jan 23 18:30:00 crc kubenswrapper[4606]: E0123 18:30:00.185020 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9778600-0f91-4d21-a6e8-7266b2d33e26" containerName="registry-server"
Jan 23 18:30:00 crc kubenswrapper[4606]: I0123 18:30:00.185144 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9778600-0f91-4d21-a6e8-7266b2d33e26" containerName="registry-server"
Jan 23 18:30:00 crc kubenswrapper[4606]: E0123 18:30:00.185268 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9778600-0f91-4d21-a6e8-7266b2d33e26" containerName="extract-content"
Jan 23 18:30:00 crc kubenswrapper[4606]: I0123 18:30:00.185353 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9778600-0f91-4d21-a6e8-7266b2d33e26" containerName="extract-content"
Jan 23 18:30:00 crc kubenswrapper[4606]: E0123 18:30:00.185444 4606 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9778600-0f91-4d21-a6e8-7266b2d33e26" containerName="extract-utilities"
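The liveness failures above arrive every 30 seconds and the container is killed after the third one, which is consistent with periodSeconds=30 and failureThreshold=3; those values are inferred from the timestamps, since the pod spec is not in this log. A sketch of that probe loop; the URL matches the log, and the restart action is reduced to a message.

    // liveness_sketch.go - HTTP liveness probing with a failure threshold.
    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	const (
    		url              = "http://127.0.0.1:8798/health"
    		period           = 30 * time.Second
    		failureThreshold = 3
    	)
    	client := &http.Client{Timeout: 1 * time.Second}
    	failures := 0
    	for range time.Tick(period) {
    		resp, err := client.Get(url)
    		if err == nil && resp.StatusCode < 400 {
    			resp.Body.Close()
    			failures = 0 // healthy: reset the consecutive-failure count
    			continue
    		}
    		if err == nil {
    			resp.Body.Close()
    		}
    		failures++
    		fmt.Printf("Probe failed (%d/%d): %v\n", failures, failureThreshold, err)
    		if failures >= failureThreshold {
    			fmt.Println("Container failed liveness probe, will be restarted")
    			return // the kubelet kills and restarts the container here
    		}
    	}
    }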
Jan 23 18:30:00 crc kubenswrapper[4606]: I0123 18:30:00.185518 4606 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9778600-0f91-4d21-a6e8-7266b2d33e26" containerName="extract-utilities"
Jan 23 18:30:00 crc kubenswrapper[4606]: I0123 18:30:00.185849 4606 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9778600-0f91-4d21-a6e8-7266b2d33e26" containerName="registry-server"
Jan 23 18:30:00 crc kubenswrapper[4606]: I0123 18:30:00.186860 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-9k5wz"
Jan 23 18:30:00 crc kubenswrapper[4606]: I0123 18:30:00.189570 4606 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 23 18:30:00 crc kubenswrapper[4606]: I0123 18:30:00.190603 4606 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 23 18:30:00 crc kubenswrapper[4606]: I0123 18:30:00.200105 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486550-9k5wz"]
Jan 23 18:30:00 crc kubenswrapper[4606]: I0123 18:30:00.333351 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d299cec0-d54e-4cb8-91d5-24fad2b6fd54-secret-volume\") pod \"collect-profiles-29486550-9k5wz\" (UID: \"d299cec0-d54e-4cb8-91d5-24fad2b6fd54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-9k5wz"
Jan 23 18:30:00 crc kubenswrapper[4606]: I0123 18:30:00.333740 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d299cec0-d54e-4cb8-91d5-24fad2b6fd54-config-volume\") pod \"collect-profiles-29486550-9k5wz\" (UID: \"d299cec0-d54e-4cb8-91d5-24fad2b6fd54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-9k5wz"
Jan 23 18:30:00 crc kubenswrapper[4606]: I0123 18:30:00.333778 4606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4lrk\" (UniqueName: \"kubernetes.io/projected/d299cec0-d54e-4cb8-91d5-24fad2b6fd54-kube-api-access-x4lrk\") pod \"collect-profiles-29486550-9k5wz\" (UID: \"d299cec0-d54e-4cb8-91d5-24fad2b6fd54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-9k5wz"
Jan 23 18:30:00 crc kubenswrapper[4606]: I0123 18:30:00.435658 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d299cec0-d54e-4cb8-91d5-24fad2b6fd54-secret-volume\") pod \"collect-profiles-29486550-9k5wz\" (UID: \"d299cec0-d54e-4cb8-91d5-24fad2b6fd54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-9k5wz"
Jan 23 18:30:00 crc kubenswrapper[4606]: I0123 18:30:00.435713 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d299cec0-d54e-4cb8-91d5-24fad2b6fd54-config-volume\") pod \"collect-profiles-29486550-9k5wz\" (UID: \"d299cec0-d54e-4cb8-91d5-24fad2b6fd54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-9k5wz"
Jan 23 18:30:00 crc kubenswrapper[4606]: I0123 18:30:00.435759 4606 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4lrk\" (UniqueName: \"kubernetes.io/projected/d299cec0-d54e-4cb8-91d5-24fad2b6fd54-kube-api-access-x4lrk\") pod \"collect-profiles-29486550-9k5wz\" (UID: \"d299cec0-d54e-4cb8-91d5-24fad2b6fd54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-9k5wz"
Jan 23 18:30:00 crc kubenswrapper[4606]: I0123 18:30:00.437225 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d299cec0-d54e-4cb8-91d5-24fad2b6fd54-config-volume\") pod \"collect-profiles-29486550-9k5wz\" (UID: \"d299cec0-d54e-4cb8-91d5-24fad2b6fd54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-9k5wz"
Jan 23 18:30:00 crc kubenswrapper[4606]: I0123 18:30:00.447578 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d299cec0-d54e-4cb8-91d5-24fad2b6fd54-secret-volume\") pod \"collect-profiles-29486550-9k5wz\" (UID: \"d299cec0-d54e-4cb8-91d5-24fad2b6fd54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-9k5wz"
Jan 23 18:30:00 crc kubenswrapper[4606]: I0123 18:30:00.456505 4606 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4lrk\" (UniqueName: \"kubernetes.io/projected/d299cec0-d54e-4cb8-91d5-24fad2b6fd54-kube-api-access-x4lrk\") pod \"collect-profiles-29486550-9k5wz\" (UID: \"d299cec0-d54e-4cb8-91d5-24fad2b6fd54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-9k5wz"
Jan 23 18:30:00 crc kubenswrapper[4606]: I0123 18:30:00.517515 4606 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-9k5wz"
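The attach/mount entries above cover the three volume types this pod uses: a Secret, a ConfigMap, and the auto-injected projected service-account token. A sketch of the equivalent volume stanza built with the Kubernetes API types (requires the k8s.io/api module); the volume and ConfigMap names come from the log, while the Secret name is hypothetical since the spec is not shown.

    // volumes_sketch.go - the volume declarations implied by the log.
    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	volumes := []corev1.Volume{
    		{
    			Name: "secret-volume",
    			VolumeSource: corev1.VolumeSource{
    				// hypothetical secret name; only the volume name is in the log
    				Secret: &corev1.SecretVolumeSource{SecretName: "pprof-cert"},
    			},
    		},
    		{
    			Name: "config-volume",
    			VolumeSource: corev1.VolumeSource{
    				ConfigMap: &corev1.ConfigMapVolumeSource{
    					LocalObjectReference: corev1.LocalObjectReference{Name: "collect-profiles-config"},
    				},
    			},
    		},
    		// kube-api-access-x4lrk is injected automatically as a projected
    		// volume (token + CA + namespace); it is not declared in the spec.
    	}
    	for _, v := range volumes {
    		fmt.Println(v.Name)
    	}
    }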
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-9k5wz" Jan 23 18:30:01 crc kubenswrapper[4606]: I0123 18:30:01.000985 4606 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486550-9k5wz"] Jan 23 18:30:01 crc kubenswrapper[4606]: I0123 18:30:01.499897 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-9k5wz" event={"ID":"d299cec0-d54e-4cb8-91d5-24fad2b6fd54","Type":"ContainerStarted","Data":"cd262a01c5a3a53fd003282e441b69b58940676dde6f08ba2ea806b5558a651a"} Jan 23 18:30:01 crc kubenswrapper[4606]: I0123 18:30:01.500378 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-9k5wz" event={"ID":"d299cec0-d54e-4cb8-91d5-24fad2b6fd54","Type":"ContainerStarted","Data":"b23dff3d86e329039a7c2e3a860c3950b2275d74a09eaa0a83ebc21c3cf72e1a"} Jan 23 18:30:01 crc kubenswrapper[4606]: I0123 18:30:01.559190 4606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-9k5wz" podStartSLOduration=1.559163855 podStartE2EDuration="1.559163855s" podCreationTimestamp="2026-01-23 18:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:30:01.526007712 +0000 UTC m=+5835.526200138" watchObservedRunningTime="2026-01-23 18:30:01.559163855 +0000 UTC m=+5835.559356261" Jan 23 18:30:02 crc kubenswrapper[4606]: I0123 18:30:02.511014 4606 generic.go:334] "Generic (PLEG): container finished" podID="d299cec0-d54e-4cb8-91d5-24fad2b6fd54" containerID="cd262a01c5a3a53fd003282e441b69b58940676dde6f08ba2ea806b5558a651a" exitCode=0 Jan 23 18:30:02 crc kubenswrapper[4606]: I0123 18:30:02.511127 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-9k5wz" event={"ID":"d299cec0-d54e-4cb8-91d5-24fad2b6fd54","Type":"ContainerDied","Data":"cd262a01c5a3a53fd003282e441b69b58940676dde6f08ba2ea806b5558a651a"} Jan 23 18:30:03 crc kubenswrapper[4606]: I0123 18:30:03.842841 4606 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-9k5wz" Jan 23 18:30:03 crc kubenswrapper[4606]: I0123 18:30:03.926026 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4lrk\" (UniqueName: \"kubernetes.io/projected/d299cec0-d54e-4cb8-91d5-24fad2b6fd54-kube-api-access-x4lrk\") pod \"d299cec0-d54e-4cb8-91d5-24fad2b6fd54\" (UID: \"d299cec0-d54e-4cb8-91d5-24fad2b6fd54\") " Jan 23 18:30:03 crc kubenswrapper[4606]: I0123 18:30:03.926846 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d299cec0-d54e-4cb8-91d5-24fad2b6fd54-config-volume\") pod \"d299cec0-d54e-4cb8-91d5-24fad2b6fd54\" (UID: \"d299cec0-d54e-4cb8-91d5-24fad2b6fd54\") " Jan 23 18:30:03 crc kubenswrapper[4606]: I0123 18:30:03.927028 4606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d299cec0-d54e-4cb8-91d5-24fad2b6fd54-secret-volume\") pod \"d299cec0-d54e-4cb8-91d5-24fad2b6fd54\" (UID: \"d299cec0-d54e-4cb8-91d5-24fad2b6fd54\") " Jan 23 18:30:03 crc kubenswrapper[4606]: I0123 18:30:03.927298 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d299cec0-d54e-4cb8-91d5-24fad2b6fd54-config-volume" (OuterVolumeSpecName: "config-volume") pod "d299cec0-d54e-4cb8-91d5-24fad2b6fd54" (UID: "d299cec0-d54e-4cb8-91d5-24fad2b6fd54"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:30:03 crc kubenswrapper[4606]: I0123 18:30:03.927565 4606 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d299cec0-d54e-4cb8-91d5-24fad2b6fd54-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:03 crc kubenswrapper[4606]: I0123 18:30:03.936417 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d299cec0-d54e-4cb8-91d5-24fad2b6fd54-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d299cec0-d54e-4cb8-91d5-24fad2b6fd54" (UID: "d299cec0-d54e-4cb8-91d5-24fad2b6fd54"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:30:03 crc kubenswrapper[4606]: I0123 18:30:03.937758 4606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d299cec0-d54e-4cb8-91d5-24fad2b6fd54-kube-api-access-x4lrk" (OuterVolumeSpecName: "kube-api-access-x4lrk") pod "d299cec0-d54e-4cb8-91d5-24fad2b6fd54" (UID: "d299cec0-d54e-4cb8-91d5-24fad2b6fd54"). InnerVolumeSpecName "kube-api-access-x4lrk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:30:04 crc kubenswrapper[4606]: I0123 18:30:04.030025 4606 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d299cec0-d54e-4cb8-91d5-24fad2b6fd54-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:04 crc kubenswrapper[4606]: I0123 18:30:04.030071 4606 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4lrk\" (UniqueName: \"kubernetes.io/projected/d299cec0-d54e-4cb8-91d5-24fad2b6fd54-kube-api-access-x4lrk\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:04 crc kubenswrapper[4606]: I0123 18:30:04.531507 4606 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-9k5wz" event={"ID":"d299cec0-d54e-4cb8-91d5-24fad2b6fd54","Type":"ContainerDied","Data":"b23dff3d86e329039a7c2e3a860c3950b2275d74a09eaa0a83ebc21c3cf72e1a"} Jan 23 18:30:04 crc kubenswrapper[4606]: I0123 18:30:04.531843 4606 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b23dff3d86e329039a7c2e3a860c3950b2275d74a09eaa0a83ebc21c3cf72e1a" Jan 23 18:30:04 crc kubenswrapper[4606]: I0123 18:30:04.531909 4606 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-9k5wz" Jan 23 18:30:04 crc kubenswrapper[4606]: I0123 18:30:04.930550 4606 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486505-zvlhx"] Jan 23 18:30:04 crc kubenswrapper[4606]: I0123 18:30:04.939428 4606 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486505-zvlhx"] var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515134737074024460 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015134737075017376 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015134723126016511 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015134723127015462 5ustar corecore